Re: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)

2018-09-20 Thread Samuel Cassiba
On Thu, Sep 20, 2018 at 2:48 PM Doug Hellmann  wrote:
>
> Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +:
> > tl;dr: The openstack, openstack-dev, openstack-sigs and
> > openstack-operators mailing lists (to which this is being sent) will
> > be replaced by a new openstack-disc...@lists.openstack.org mailing
> > list.
>
> Since last week there was some discussion of including the openstack-tc
> mailing list among these lists to eliminate confusion caused by the fact
> that the list is not configured to accept messages from all subscribers
> (it's meant to be used for us to make sure TC members see meeting
> announcements).
>
> I'm inclined to include it and either use a direct mailing or the
> [tc] tag on the new discuss list to reach TC members, but I would
> like to hear feedback from TC members and other interested parties
> before calling that decision made. Please let me know what you think.
>
> Doug
>

+1. Including the TC list as a tag makes sense to me, and it fits with my
earlier tangent about intent in online communities.



[openstack-dev] [chef] fog-openstack 0.3

2018-09-19 Thread Samuel Cassiba
Ohai!

fog-openstack 0.3 has been released upstream, but it also appears to be
a breaking release by way of a change in naming convention.

At this time, it is advised to pin your client cookbook at '<0.3.0'.
Fixes to compensate for this change are being delivered to git and
Supermarket, but the most immediate workaround is to pin.
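
As a rough illustration only (the real fix lands in the client cookbook
itself), a pin of this shape in a wrapper recipe would hold the gem back;
chef_gem is standard Chef, but the placement shown here is an assumption:

  # Hypothetical wrapper-recipe sketch: constrain fog-openstack below 0.3.0
  # until the cookbooks catch up with the new naming convention.
  chef_gem 'fog-openstack' do
    version '< 0.3.0'
    compile_time true  # install before recipes require the gem
  end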

Once things are working with fog-openstack 0.3, ChefDK will pick the
new version up in a later release.

Thank you for your attention.

-scas



Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-18 Thread Samuel Cassiba
On Tue, Sep 18, 2018 at 5:34 AM Jeremy Stanley  wrote:
>
> On 2018-09-18 10:23:33 +0800 (+0800), Zhipeng Huang wrote:
> [...]
> > Jeremy, what I'm saying here, and also addressed in comments with
> > the related resolution patch, is that personality reasons are the
> > ones that we have to respect and no form of governance change
> > could help solve the problem. However other than that, we could
> > always find a way to address the issue for remedies, if we don't
> > have a good answer now maybe we will have sometime later.
> >
> > Preference on  social tooling is something that the technical
> > committee is able to address, with isolation of usage of
> > proprietary tools for certain scenario and also strict policy on
> > enforcing the open source communication solutions we have today as
> > the central ones the community will continue to use. This is not
> > an unsolvable problem given that we have a technical committee,
> > but personality issues are, no matter what governance instrument
> > we have.
>
> Once again, I think we're talking past each other. I was replying to
> (and quoted from) the provided sample rejection letter. First I
> wanted to point out that I had already rejected the premise earlier
> on this thread even though it was suggested that no rejection had
> yet been provided. Second, the sample letter seemed to indicate what
> I believe to be a fundamental misunderstanding among those pushing
> this issue: the repeated attempts I've seen so far to paint a
> disinterest in participating in wechat interactions as mere
> "personal preference," and the idea that those who hold this
> "preference" are somehow weak or afraid of the people they'll
> encounter there.
>
> For me, it borders on insulting. I (and I believe many others) have
> strong ideological opposition to participating in these forums, not
> mere personal preferences.
> --
> Jeremy Stanley
>

It is incredibly difficult to convey intent over primarily text-based
mediums, through which I mostly interact with individuals I've never
seen in person. What is an ideological principle to me is a personal
preference to someone else, and not even a thought to yet another.

I work within other FLOSS projects outside of OpenStack. With some, my
primary interactions take place over Slack, because they made the
conscious choice to hoist their user community to a free instance,
nominating people to an ambassador role for keeping their message
intact on IRC. Other times, it's over GitHub, where the whole
interaction takes place within the one platform.

Within OpenStack, some people I've only ever worked with through code
reviews or bug reports. Others, IRC or email. People are going to
gravitate toward what makes sense for them, but that's where the lines
between ideology and preference blur.

Agreeing to keep the important lines of communication to a certain
medium is the preference here, but it is also the ideological belief.
The ongoing debates are not about Wechat versus Twitter versus IRC versus
Slack; they are about keeping the intent of being open, which is defined
in the very namesake.

Many moons ago, Chef OpenStack was advised to actively eschew video
meetings before being approved as an OpenStack project under the
Big Tent experiment, during the rise of the hype. This happened,
despite the active efforts toward openness and inclusiveness in the
weekly video meetings, because there was no text record to reference.
This, in turn, resulted in fewer and fewer developers being able to
justify having an hour a week to 'mess around' on IRC, hastening the
deflationary period. With a video running, it was easier to justify
an hour in a conference room or an office to further the intent of
openness in the community.

I directly see the benefit in having a means to reach the greater
community (hi! o/) but I do not directly see the correlation in
defining a given social platform as being The Platform for Relevant
Communications beyond email or code review. Email and code review are,
by far, the most accessible points around the globe.

For the Horde^Wcode,

Samuel Cassiba (scas)



Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-17 Thread Samuel Cassiba
On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza  wrote:
>
>
>
> Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley  a écrit :
>>
>> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote:
>> [...]
>> > - What is the problem joining Wechat will solve (keeping in mind the
>> > language barrier)?
>>
>> As I understand it, the suggestion is that mere presence of project
>> leadership in venues where this emerging subset of our community
>> gathers would provide a strong signal that we support them and care
>> about their experience with the software.
>>
>> > - Isn't this problem already solved for other languages with
>> > existing initiatives like local ambassadors and i18n team? Why
>> > aren't these relevant?
>> [...]
>>
>> It seems like there are at least couple of factors at play here:
>> first the significant number of users and contributors within
>> mainland China compared to other regions (analysis suggests there
>> were nearly as many contributors to the Rocky release from China as
>> the USA), but second there may be facets of Chinese culture which
>> make this sort of demonstrative presence a much stronger signal than
>> it would be in other cultures.
>>
>> > - Pardon my ignorance here, what is the problem with email? (I
>> > understand some chat systems might be blocked, I thought emails
>> > would be fine, and the lowest common denominator).
>>
>> Someone in the TC room (forgive me, I don't recall who now, maybe
>> Rico?) asserted that Chinese contributors generally only read the
>> first message in any given thread (perhaps just looking for possible
>> announcements?) and that if they _do_ attempt to read through some
>> of the longer threads they don't participate in them because the
>> discussion is presumed to be over and decisions final by the time
>> they "reach the end" (I guess not realizing that it's perfectly fine
>> to reply to a month-old discussion and try to help alter course on
>> things if you have an actual concern?).
>>
>
> While I understand the technical issues that could be due using IRC in China, 
> I still don't get why opening the gates and saying WeChat being yet another 
> official channel would prevent our community from fragmenting.
>
> Truly the usage of IRC is certainly questionable, but if we have multiple 
> ways to discuss, I just doubt we could prevent us to silo ourselves between 
> our personal usages.
> Either we consider the new channels as being only for southbound 
> communication, or we envisage the possibility, as a community, to migrate 
> from IRC to elsewhere (I'm particulary not fan of the latter so I would 
> challenge this but I can understand the reasons)
>
> -Sylvain
>

Objectively, I don't see a way to endorse something other than IRC
without some form of collective presence on more than just Wechat to
keep the message intact. IRC is the official messaging platform, for
whatever that's worth these days. However, at present, it makes less
and less sense to explicitly eschew other outlets in its favor. From a
Chef OpenStack perspective, the common medium is, perhaps
unsurprisingly, code review. Everything else evolved over time to be
southbound paths to the code, with most of the conversation
taking place there as opposed to IRC.

The continuation of this thread only confirms that there is already
fragmentation in the community, and that people on each side of the
void genuinely want to close that gap. At this point, the thing to do
is prevent further fragmentation of the intent. It is, however, far
easier to bikeshed over the platform of choice.

At present, it seems a collective presence is forming ad hoc,
regardless of any such resolution. With some additional coordination
and planning, I think that there could be something that could scale
beyond one or two outlets.

Best,
Samuel



Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-15 Thread Samuel Cassiba
On Fri, Sep 14, 2018 at 5:25 PM Rico Lin  wrote:
>>
>>
>> For the candidates who are running for tc seats, please reply to this email 
>> to indicate if you are open to use certain social media app in certain 
>> region (like Wechat in China, Line in Japan, etc.), in order to reach out to 
>> the OpenStack developers in that region and help them to connect to the 
>> upstream community as well as answering questions or other activities that 
>> will help. (sorry for the long sentence ... )
>
>
> We definitely need to reach to developers from each location in global. And a 
> way to expose technical community to some place more close to developer and 
> not creating to much burden to all. For me, if we can have channels for 
> broadcast our key information cross entire community (like what's next TC/PTL 
> election, what mission is been proposed, who people can talk to when certain 
> issue happens, who you can talk to when you got great idea, and most 
> importantly where are the right place you should go to) expose to all and 
> maybe encourge community leaders to join. A list of channels is not hard to 
> setup, but it will bring big different IMO and we can always adjust what 
> channel we have. What we can limit here is make sure always help the new 
> joiner to find the right place to engage.
>
> Once we got connected to local developers and community, it's easier for TC 
> to guide all IMO. Will this work? Not sure! So why not we try and find out!:)
>>
>>
>>
>> Rico and I already sign up for Wechat communication for sure :)
>
> Good to have you! Let's do it!!
>
> BTW nice dicsussion today, thanks all who is there in TC room to share.
>

I idle on the unofficial Slack group, which has sporadic activity from
those looking to either connect with the community or find some kind
of support or help. Despite an autoresponder telling people to go
elsewhere, yet more people still sign up and ask questions.

I'm not saying one needs to establish beachheads on all the outlets,
but perhaps the message to get people in the right place should be
better refined. As it sits, the autoresponse on Slack seems like the
cheerful message from the Magratheans right before the warheads are
dispatched. I'm not sure how often that results in a solid conversion
without devoted community ambassadors watching these outlets, but it
doesn't look very inviting from just scrolling through the default
channel history.

I see merit in doing more than having an autoresponder, but I've also
seen first-hand what happens when otherwise diverse communities enter
into a freemium contract. The net result is that people communicate
less and less for various reasons, ending in the inverse of the
desired effect of being more connected.

Best,
Samuel



Re: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials)

2018-09-13 Thread Samuel Cassiba
On Thu, Sep 13, 2018 at 9:14 AM, Fox, Kevin M  wrote:
> How about stated this way,
> Its the tc's responsibility to get it done. Either by delegating the 
> activity, or by doing it themselves. But either way, it needs to get done. 
> Its a ball that has been dropped too much in OpenStacks history. If no one is 
> ultimately responsible, balls will keep getting dropped.
>
> Thanks,
> Kevin

I see the role of the TC the same way I do the PTL hat, but on more of a
meta scale: too much direct involvement can stifle things. Conversely,
not enough involvement can result in people calling one's work
legacy, to be nice, or dead, at worst.

All too often, we humans get hung up on the definitions of words,
sometimes to the point of inaction. It seems things only move forward
when someone says 'sod it', regardless of anyone's level of
involvement.

I look to the TC as the group that sets the tone: de facto product owners,
to borrow from OpenStack's native tongue. The more hands-on an
individual is with the output, TC member or not, the more a perception
arises that a given effort needs only that person's attention, thereby
setting a much different narrative than might otherwise be noticed
or desired.

The place I see for the TC is making sure that there is meaningful progress on
agreed-upon efforts, however that needs to exist. Sometimes that might
be recruiting, but I don't see browbeating social media to be
particularly valuable from an individual standpoint. Sometimes that
would be collaborating through code, if it comes down to it. From an
overarching perspective, I view hands-on coding by TC to be somewhat
of a last resort effort due to individual commitments.

Perceptions surrounding actions, like the oft-used 'stepping up'
phrase, create an effect where people do not carve out enough time to
effect change, becoming too busy, repeat ad infinitum.

Best,
Samuel



Re: [openstack-dev] [tripleo] VFs not configured in SR-IOV role

2018-09-12 Thread Samuel Monderer
Adding the following to neutron-sriov.yaml solved the problem:

  OS::TripleO::Services::NeutronSriovHostConfig: ../../puppet/services/neutron-sriov-host-config.yaml
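
For context, a hedged sketch of how that entry sits in the environment
file; the neighboring agent mapping is an assumption about the rest of the
file, and only the HostConfig line comes from the fix above:

  resource_registry:
    # Assumed existing entry; exact relative path can differ per release
    OS::TripleO::Services::NeutronSriovAgent: ../../puppet/services/neutron-sriov-agent.yaml
    # The missing mapping that enables VF configuration on the host
    OS::TripleO::Services::NeutronSriovHostConfig: ../../puppet/services/neutron-sriov-host-config.yaml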

On Wed, Sep 12, 2018 at 11:53 AM Samuel Monderer <
smonde...@vasonanetworks.com> wrote:

> Hi Saravanan,
>
> I'm using RHOSP13.
> The neutron-sriov-agent.yaml is missing "OS::TripleO::Services::
> NeutronSriovHostConfig"
>
> Regards,
> Samuel
>
> On Fri, Sep 7, 2018 at 1:08 PM Saravanan KR  wrote:
>
>> Not sure which version you are using, but the service
>> "OS::TripleO::Services::NeutronSriovHostConfig" is responsible for
>> setting up VFs. Check if this service is enabled in the deployment.
>> One of the missing place is being fixed -
>> https://review.openstack.org/#/c/597985/
>>
>> Regards,
>> Saravanan KR
>> On Tue, Sep 4, 2018 at 8:58 PM Samuel Monderer
>>  wrote:
>> >
>> > Hi,
>> >
>> > Attached is the used to deploy an overcloud with SR-IOV role.
>> > The deployment completed successfully but the VFs aren't configured on
>> the host.
>> > Can anyone have a look at what I missed.
>> >
>> > Thanks
>> > Samuel
>> > 
>


Re: [openstack-dev] [tripleo] VFs not configured in SR-IOV role

2018-09-12 Thread Samuel Monderer
Hi Saravanan,

I'm using RHOSP13.
The neutron-sriov-agent.yaml is
missing "OS::TripleO::Services::NeutronSriovHostConfig"

Regards,
Samuel

On Fri, Sep 7, 2018 at 1:08 PM Saravanan KR  wrote:

> Not sure which version you are using, but the service
> "OS::TripleO::Services::NeutronSriovHostConfig" is responsible for
> setting up VFs. Check if this service is enabled in the deployment.
> One of the missing place is being fixed -
> https://review.openstack.org/#/c/597985/
>
> Regards,
> Saravanan KR
> On Tue, Sep 4, 2018 at 8:58 PM Samuel Monderer
>  wrote:
> >
> > Hi,
> >
> > Attached is the used to deploy an overcloud with SR-IOV role.
> > The deployment completed successfully but the VFs aren't configured on
> the host.
> > Can anyone have a look at what I missed.
> >
> > Thanks
> > Samuel
> >
>


Re: [openstack-dev] [election][tc] Opinion about 'PTL' tooling

2018-09-10 Thread Samuel Cassiba
On Mon, Sep 10, 2018 at 6:07 AM, Jeremy Stanley  wrote:
> On 2018-09-10 06:38:11 -0600 (-0600), Mohammed Naser wrote:
>> I think something we should take into consideration is *what* you
>> consider health because the way we’ve gone about it over health
>> checks is not something that can become a toolkit because it was
>> more of question asking, etc
> [...]
>
> I was going to follow up with something similar. It's not as if the
> TC has a toolkit of any sort at this point to come up with the
> information we're assembling in the health tracker either. It's
> built up from interviewing PTLs, reading meeting logs, looking at
> the changes which merge to teams' various deliverable repositories,
> asking around as to whether they've missed important deadlines such
> as release milestones (depending on what release models they
> follow) or PTL nominations, looking over cycle goals to see how far
> along they are, and so on. Extremely time-consuming which is why
> it's taken us most of a release cycle and we still haven't finished
> a first pass.
>
> Assembling some of this information might be automatable if we make
> adjustments to how the data/processes on which it's based are
> maintained, but at this point we're not even sure which ones are
> problem indicators at all and are just trying to provide the
> clearest picture we can. If we come up with a detailed checklist and
> some of the checks on that list can be automated in some way, that
> seems like a good thing. However, the original data should be
> publicly accessible so I don't see why it needs to be members of the
> technical committee who write the software to collect that.
> --
> Jeremy Stanley
>

Things like tracking project health I see as being like organizing a trash
pickup at the local park, or off the side of a road: dirty,
unglamorous work. The results can be immediately visible, not only to
those doing the work, but to passers-by. Eliminating the human factor in
deeply human-driven interactions can have ramifications that are
immediately noticed.

As distributed as things exist today, reducing the conversation to a
few methods or people can damage intent, without humans talking to
humans in a more direct manner.

Best,
Samuel Cassiba (scas)



Re: [openstack-dev] [election] [tc] TC candidacy

2018-09-07 Thread Samuel Cassiba
On Fri, Sep 7, 2018 at 8:55 AM, Samuel Cassiba  wrote:
> On Fri, Sep 7, 2018 at 6:22 AM, Matt Riedemann  wrote:
>> On 9/5/2018 2:49 PM, Samuel Cassiba wrote:
>>>
>>> Though my hands-on experience goes back several releases, I still view
>>> things from the outside-looking-in perspective. Having the outsider
>>> lens is crucial in the long-term for any consensus-driven group,
>>> regardless of that consensus.
>>>
>>> Regardless of the election outcome, this is me taking steps to having a
>>> larger involvement in the overall conversations that drive so much of
>>> our daily lives. At the end of the day, we're all just groups of people
>>> trying to do our jobs. I view this as an opportunity to give back to a
>>> community that has given me so much.
>>
>>
>> Are there specific initiatives you plan on pushing forward if on the TC? I'm
>> thinking about stuff from the laundry list here:
>>
>> https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Other_Initiatives
>>
>
> Excellent question!
>
> It's not in my nature to push specific agendas. That said, being in
> the deploy space, constellations is something that does have a
> specific gravity that would, no doubt, draw me in, whether or not I am
> part of the TC. I've viewed projects in the deploy space, such aq
>
> Furthering the adoption of secret management is another thing that
> hits close to home

...and that would be where an unintended keyboard-seeking Odin attack
preemptively initiates a half-thought thought. It's hard to get upset
at this face, though. https://i.imgur.com/c7tktmO.jpg

To that point, projects like Chef have made use of encrypted secrets
since more or less the dawn of time, but not at all in a portable way.
Continuing the work to bring secrets under a single focus is something
that I would also be a part of, with or without being on the TC.

In both of these efforts, I envision having some manner of involvement
no matter what. At the strategic level, working to ensure the
disparate efforts are in alignment is where I would gravitate to.

Best,
Samuel



Re: [openstack-dev] [election] [tc] TC candidacy

2018-09-07 Thread Samuel Cassiba
On Fri, Sep 7, 2018 at 6:22 AM, Matt Riedemann  wrote:
> On 9/5/2018 2:49 PM, Samuel Cassiba wrote:
>>
>> Though my hands-on experience goes back several releases, I still view
>> things from the outside-looking-in perspective. Having the outsider
>> lens is crucial in the long-term for any consensus-driven group,
>> regardless of that consensus.
>>
>> Regardless of the election outcome, this is me taking steps to having a
>> larger involvement in the overall conversations that drive so much of
>> our daily lives. At the end of the day, we're all just groups of people
>> trying to do our jobs. I view this as an opportunity to give back to a
>> community that has given me so much.
>
>
> Are there specific initiatives you plan on pushing forward if on the TC? I'm
> thinking about stuff from the laundry list here:
>
> https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Other_Initiatives
>

Excellent question!

It's not in my nature to push specific agendas. That said, being in
the deploy space, constellations is something that does have a
specific gravity that would, no doubt, draw me in, whether or not I am
part of the TC. I've viewed projects in the deploy space, such aq

Furthering the adoption of secret management is another thing that
hits close to home



[openstack-dev] [election] [tc] TC candidacy

2018-09-05 Thread Samuel Cassiba
Hello everybody,

I am announcing my candidacy to be a member of the OpenStack Technical
Committee (TC).

I have been involved in open source since I was a brash youth on the
Internet in the late 1990s, which amounts to over half my life at this
point. I am a self-taught individual, cutting my teeth on BSDs of the
period. I operated in that area for a number of years, becoming a
'shadow' maintainer under various pseudonyms. As time progressed, I
became comfortable attributing my work to my personal identity. o/

My direct involvement with OpenStack began during the Folsom release, as
an operator and deployer. I focused my efforts on automation, eventually
falling in with a crowd that likes puns and cooking references. In my
professional life, I have served as developer, operator, user, and
architect, which extends back to the birthplace of OpenStack.

I am a founding member of Chef OpenStack[0], where I have dutifully
served as PTL for five releases. My community involvement also extends
outside the OpenStack ecosystem, where I serve as a member of Sous
Chefs[1], a group dedicated to the long-term care of critical Chef
community resources.

Though my hands-on experience goes back several releases, I still view
things from the outside-looking-in perspective. Having the outsider
lens is crucial in the long-term for any consensus-driven group,
regardless of that consensus.

Regardless of the election outcome, this is me taking steps to having a
larger involvement in the overall conversations that drive so much of
our daily lives. At the end of the day, we're all just groups of people
trying to do our jobs. I view this as an opportunity to give back to a
community that has given me so much.

Thank you for your attention and consideration,
Samuel Cassiba (scas)

[0] https://docs.openstack.org/openstack-chef/latest/
[1] https://sous-chefs.org/



[openstack-dev] [chef] State of the Kitchen: 7th Edition

2018-09-04 Thread Samuel Cassiba
HTML: https://samuel.cassi.ba/state-of-the-kitchen-7th-edition

This is the seventh installment of what is going on with Chef OpenStack.
The goal is to give a quick overview of our progress and what is on
the menu. Feedback is always welcome on the content and on what you would
like to see more of.

### Notable Changes
* Ironic is returning to
  [active 
development](https://review.openstack.org/#/q/topic:refactor-ironic-cookbook).
  This is currently targeting Rocky, but it will be backported as much
  as automated testing will allow. The cookbook currently works through
  to Tempest and InSpec, but resource constraints prohibit a more
  comprehensive test.
* Chef OpenStack is on
  [docs.o.o](https://docs.openstack.org/openstack-chef/latest/)! It
  currently covers the Kitchen scenario, and needs to be fleshed out more. A
  more comprehensive deploy guide is in the making.
* Sous Chefs released v5.2.1 of the
  [apache2](https://supermarket.chef.io/cookbooks/apache2) cookbook
  today. This will alleviate an issue with ports.conf conflicting
  between cookbook and package.
* openstack/openstack-chef-repo has served us for many years, but
  nothing is an unmoving mover. Development has shifted over to
  openstack/openstack-chef and openstack-chef-repo will be ferried to the
  great bit bucket in the cloud.
  [o7](https://review.openstack.org/#/q/topic:retire-openstack-chef-repo)

### Integration
* With the aforementioned repo retirement, integration has shifted to
  openstack/openstack-chef.
* Docker stabilization efforts are looking good to introduce a
  containerized integration job for CentOS. Ubuntu still does not play
  nicely using Docker through Kitchen. This will result in gating jobs
  using both the Zuul-provided machine, as well as Docker. The focus is
  AIO at this time.

### Stabilization
* fog-openstack 0.2 has been released, which makes a major change to
  how Keystone endpoints are handled. This is in anticipation of
  dropping a hard version string for Identity API versions.
  0.2.1 has been released to
[rubygems](https://rubygems.org/gems/fog-openstack),
  which will resolve the issues 0.2.0 exposed. For now, however, the
  client cookbook has been constrained to match ChefDK. The target for
  ChefDK to support fog-openstack 0.2 is, at this point, the unreleased
  ChefDK 3.3.0.
  [Further 
context.](http://lists.openstack.org/pipermail/openstack-dev/2018-September/134185.html)

### On The Menu
*The Perfect (Indoor) Steak*
* Kosher salt
* Black pepper
* 1 tbsp (15 ml) olive oil
* 1 (8 to 12 ounce) boneless tenderloin, ribeye or strip steak

1. Set your immersion cooker to 130F (54.4C) -- y'all have one of these,
   right?
2. Generously season both sides with salt and pepper.
3. Place the steak in a medium zipper, or vacuum seal, bag. Seal with a
   vacuum sealer, or using the water immersion technique.
4. Place the bag in the water bath, and set the timer for 2 hours. This
   comes out to about medium-rare consistency.
5. After 2 hours, remove the steak from the water bath and pat very dry
   with paper towels.
6. Heat oil in a medium cast iron skillet over high heat until it
   shimmers.
7. Add steak and sear until well-browned, about 30 seconds per side.
8. Let rest for 5 minutes.
9. Enjoy.

Your humble line cook,
Samuel Cassiba (scas)



[openstack-dev] [tripleo] VFs not configured in SR-IOV role

2018-09-04 Thread Samuel Monderer
Hi,

Attached is the file used to deploy an overcloud with the SR-IOV role.
The deployment completed successfully, but the VFs aren't configured on the
host.
Can anyone have a look at what I missed?

Thanks
Samuel


Re: [openstack-dev] [tripleo] using multiple roles

2018-09-04 Thread Samuel Monderer
Is it possible to have the roles_data.yaml file generated when running
"openstack overcloud deploy"??

On Tue, Sep 4, 2018 at 4:52 PM Alex Schultz  wrote:

> On Tue, Sep 4, 2018 at 2:31 AM, Samuel Monderer
>  wrote:
> > Hi,
> >
> > Due to many different HW in our environment we have multiple roles.
> > I would like to place each role definition if a different file.
> > Is it possible to refer to all the roles from roles_data.yaml to all the
> > different files instead of having a long roles_data.yaml file?
> >
>
> So you can have them in different files for general management,
> however in order to actually consume them  they need to be in a
> roles_data.yaml file for the deployment. We offer a few cli commands
> to help with this management.  The 'openstack overcloud roles
> generate' command can be used to generate a roles_data.yaml for your
> deployment. You can store the individual roles in a folder and use the
> 'openstack overcloud roles list --roles-path /your/folder' to view the
> available roles.  This workflow is described in the roles README[0]
>
> Thanks,
> -Alex
>
> [0]
> http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/roles/README.rst
>
> > Regards,
> > Samuel
> >
> >
>
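
For reference, a hedged sketch of the workflow described above; the roles
directory path and role names here are illustrative, not taken from this
thread:

  # List the roles available in a local roles directory (illustrative path)
  openstack overcloud roles list --roles-path ~/my_roles

  # Generate a combined roles_data.yaml from the individual role files
  openstack overcloud roles generate --roles-path ~/my_roles \
    -o ~/templates/roles_data.yaml Controller Compute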


[openstack-dev] [tripleo] using multiple roles

2018-09-04 Thread Samuel Monderer
Hi,

Due to the many different types of HW in our environment, we have multiple roles.
I would like to place each role definition in a different file.
Is it possible to have roles_data.yaml refer to all the
different files instead of having one long roles_data.yaml file?

Regards,
Samuel


Re: [openstack-dev] [chef] fog-openstack 0.2.0 breakage

2018-09-03 Thread Samuel Cassiba
On Fri, Aug 31, 2018 at 8:59 AM, Samuel Cassiba  wrote:
> Ohai!
>
> fog-openstack 0.2.0 was recently released, which had less than optimal
> effects on Chef OpenStack due to the client cookbook's lack of version
> pinning on the gem.
>

Currently, the client cookbook is pinned to <0.2.0 going back to
Ocata. Supermarket is updated as well.

Due to the fallout generated, 0.2.x will be allowed where ChefDK
introduces it, but 0.2.1 should be usable if you want to give it a go.

Best,
scas



[openstack-dev] [chef] Retiring openstack/openstack-chef-repo

2018-09-02 Thread Samuel Cassiba
Ohai!

The entry point to Chef OpenStack, the openstack-chef-repo, is being
retired in favor of openstack/openstack-chef. As such, the watch ends
for openstack/openstack-chef-repo.

From a Chef perspective, openstack-chef-repo has been a perfectly
adequate name, due to the prevalence of monorepos called 'chef-repo'.
In the Chef ecosystem, this made perfect sense back in 2014 or 2015.
More recently, based on the outsider perspective of people who
were not nearly as immersed in the nomenclature, "why do you call it
repo?" has started to emerge as a FAQ.

Both repositories were created with the same intent: the junction of
OpenStack and Chef. However, openstack-chef existed before its time,
boxed and packed away to the attic long before Chef OpenStack was even
a notion.

With the introduction of documentation being published to docs.o.o, it
seemed like the logical time to migrate the entry point back to
openstack/openstack-chef. With assistance from infra doing the heavy
lifting for unretiring the project, openstack-chef was brought down
from the attic and de-mothballed.

At the time of this writing, no new changes are being merged to
openstack-chef-repo, and its jobs are noop. Focus has shifted entirely
to openstack/openstack-chef, with it being the entry point for Zuul
jobs, as well as Kitchen scenarios and documentation.

All stable jobs going back to stable/ocata have been migrated, with
the exception of the Cinder cookbook's Ocata release. It no longer
tests cleanly due to the detritus of time, so it will remain in its
current state.

The retirement festivities can be found at
https://review.openstack.org/#/q/topic:retire-openstack-chef-repo

If you have any questions or concerns, please don't hesitate to reach out.

Best,
Samuel Cassiba (scas)



[openstack-dev] [chef] fog-openstack 0.2.0 breakage

2018-08-31 Thread Samuel Cassiba
Ohai!

fog-openstack 0.2.0 was recently released, which had less than optimal
effects on Chef OpenStack due to the client cookbook's lack of version
pinning on the gem.

The crucial change is that fog-openstack itself now determines
Identity API versions internally, in preparation for a versionless
Keystone endpoint. Chef OpenStack has carried code for Identity API
determination for years, to facilitate migrating from Identity v2.0 to
Identity v3. Unfortunately, those two methods became at odds with the
release of fog-openstack 0.2.

At the time of this writing, PR #421
(https://github.com/fog/fog-openstack/pull/421) has been merged, but
there is no new release on rubygems.org as of yet. That is likely to
happen Very Soon(tm).

On the home front, with the help of Roger Luethi and Christoph Albers,
we've introduced version constraints to the client cookbook to pin the
gem to 0.1.x. At present, we've merged constraints for master,
stable/queens and stable/pike.

The new release was primed to go into ChefDK 3.2 had the issue not been
brought up as soon as it was. Thank you to everyone who gave a heads-up!

Best,

scas



Re: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon!

2018-08-30 Thread Samuel Cassiba
On Thu, Aug 30, 2018 at 4:24 PM, Doug Hellmann  wrote:
> Below is the list of project teams that have not yet started migrating
> their zuul configuration. If you're ready to go, please respond to this
> email to let us know so we can start proposing patches.
>
> Doug
>
> | adjutant| 3 repos   |
> | barbican| 5 repos   |
> | Chef OpenStack  | 19 repos  |
> | cinder  | 6 repos   |
> | cloudkitty  | 5 repos   |
> | I18n| 2 repos   |
> | Infrastructure  | 158 repos |
> | loci| 1 repos   |
> | nova| 6 repos   |
> | OpenStack Charms| 80 repos  |
> | Packaging-rpm   | 4 repos   |
> | Puppet OpenStack| 47 repos  |
> | Quality Assurance   | 22 repos  |
> | Telemetry   | 8 repos   |
> | trove   | 5 repos   |
>

On behalf of Chef OpenStack, that one is good to go.

Best,
Samuel (scas)



[openstack-dev] Stepping down as keystone core

2018-08-29 Thread Samuel de Medeiros Queiroz
Hi Stackers!

It has been both an honor and privilege to serve this community as a
keystone core.

I am in a position that does not allow me enough time to devote to reviewing
code and participating in the development process in keystone. As a
consequence, I am stepping down as a core reviewer.

A big thank you for your trust and for helping me to grow both as a person
and as professional during this time in service.

I will stay around: I am doing research on interoperability for my masters
degree, which means I am around the SDK project. In addition to that, I
recently became the Outreachy coordinator for OpenStack.

Let me know if you are interested in one of those things.

Get in touch on #openstack-outreachy, #openstack-sdks or
#openstack-keystone.

Thanks,
Samuel de Medeiros Queiroz (samueldmq)


Re: [openstack-dev] Stepping down as coordinator for the Outreachy internships

2018-08-17 Thread Samuel de Medeiros Queiroz
Hi all,

As someone who cares about this cause and has participated twice in this program
as a mentor, I'd like to put myself forward as a candidate for program coordinator.

Victoria, thanks for all your lovely work. You are awesome!

Best regards,
Samuel


On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson  wrote:

> You have done such amazing things with the program! We appreciate
> everything you do :) Enjoy the little extra spare time.
>
> -Kendall (daiblo_rojo)
>
>
> On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz <
> victo...@vmartinezdelacruz.com> wrote:
>
>> Hi all,
>>
>> I'm reaching you out to let you know that I'll be stepping down as
>> coordinator for OpenStack next round. I had been contributing to this
>> effort for several rounds now and I believe is a good moment for somebody
>> else to take the lead. You all know how important is Outreachy to me and
>> I'm grateful for all the amazing things I've done as part of the Outreachy
>> program and all the great people I've met in the way. I plan to keep
>> involved with the internships but leave the coordination tasks to somebody
>> else.
>>
>> If you are interested in becoming an Outreachy coordinator, let me know
>> and I can share my experience and provide some guidance.
>>
>> Thanks,
>>
>> Victoria
>


[openstack-dev] [tripleo] deployements fails when using custom nic config

2018-08-16 Thread Samuel Monderer
Hi,

I'm using the attached file for the controller NIC configuration and I'm
referencing it as follows:

resource_registry:
  # Network Interface templates to use (these files must exist). You can
  # override these by including one of the net-*.yaml environment files,
  # such as net-bond-with-vlans.yaml, or modifying the list here.
  # Port assignments for the Controller
  OS::TripleO::Controller::Net::SoftwareConfig:
    /home/stack/templates/nic-configs/controller.yaml

and I get the following error

2018-08-16 15:51:59Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0]:
CREATE_FAILED  Error: resources[0]: Deployment to server failed:
deploy_status_code : Deployment exited with non-zero status code: 2
2018-08-16 15:51:59Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED
Resource CREATE failed: Error: resources[0]: Deployment to server failed:
deploy_status_code : Deployment exited with non-zero status code: 2
2018-08-16 15:52:00Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED
Error: resources.ControllerDeployment_Step1.resources[0]: Deployment to
server failed: deploy_status_code: Deployment exited with non-zero status
code: 2
2018-08-16 15:52:00Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED
Resource CREATE failed: Error:
resources.ControllerDeployment_Step1.resources[0]: Deployment to server
failed: deploy_status_code: Deployment exited with non-zero status code: 2
2018-08-16 15:52:01Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED  Error:
resources.AllNodesDeploySteps.resources.ControllerDeployment_Step1.resources[0]:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 2
2018-08-16 15:52:01Z [overcloud]: CREATE_FAILED  Resource CREATE failed:
Error:
resources.AllNodesDeploySteps.resources.ControllerDeployment_Step1.resources[0]:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 2

 Stack overcloud CREATE_FAILED

overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: 8edfbb96-9b4d-4839-8b17-f8abf0644475
  status: CREATE_FAILED
  status_reason: |
Error: resources[0]: Deployment to server failed: deploy_status_code :
Deployment exited with non-zero status code: 2
  deploy_stdout: |
...
"2018-08-16 18:51:54,967 ERROR: 23177 -- ERROR configuring
neutron",
"2018-08-16 18:51:54,967 ERROR: 23177 -- ERROR configuring
horizon",
"2018-08-16 18:51:54,968 ERROR: 23177 -- ERROR configuring
heat_api_cfn"
]
}
to retry, use: --limit
@/var/lib/heat-config/heat-config-ansible/48a5902a-5987-46e4-a06b-e3f5487bf3d2_playbook.retry

PLAY RECAP
*
localhost  : ok=26   changed=13   unreachable=0
failed=1

(truncated, view all with --long)
  deploy_stderr: |

Heat Stack create failed.
Heat Stack create failed.
(undercloud) [stack@staging-director ~]$

When I checked the controller node I found that it had no default gateway
configured

Regards,
Samuel


[Attachment: controller.yaml (application/yaml)]


[openstack-dev] [tripleo] network isolation!!! do we still need to configure VLAN , CIDR, ... in network-environment.yaml

2018-08-16 Thread Samuel Monderer
Hi,

In Ocata we used the network environment file to configure network parameters
as follows:

  InternalApiNetCidr: '172.16.2.0/24'
  TenantNetCidr: '172.16.0.0/24'
  ExternalNetCidr: '192.168.204.0/24'
  # Customize the VLAN IDs to match the local environment
  InternalApiNetworkVlanID: 711
  TenantNetworkVlanID: 714
  ExternalNetworkVlanID: 204
  InternalApiAllocationPools: [{'start': '172.16.2.4', 'end':
'172.16.2.250'}]
  TenantAllocationPools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]
  # Leave room if the external network is also used for floating IPs
  ExternalAllocationPools: [{'start': '192.168.204.6', 'end':
'192.168.204.99'}]

In Queens, now that we use network_data.yaml, do we still need to set the
parameters above?

Samuel


[openstack-dev] [chef] State of the Kitchen: 6th Edition

2018-08-07 Thread Samuel Cassiba
HTML: https://samuel.cassi.ba/state-of-the-kitchen-6th-edition

This is the sixth installment of what is going on with Chef OpenStack.
The goal is to give a quick overview of our progress and what is on
the menu. Feedback is always welcome on the content and on what you would
like to see more of.

### Notable Changes

* In the past month we released Chef OpenStack 17, which aligns with the
  Queens codename of OpenStack. Stabilization efforts
  centered largely around Chef major version updates and further
  leveraging Kitchen for integration testing. At the time of this
  writing, they are mirrored to GitHub and
  [Supermarket](https://supermarket.chef.io/users/openstack).
* openstack-attic/openstack-chef has been brought back from the aether to
  [openstack/openstack-chef](https://git.openstack.org/cgit/openstack/openstack-chef).
  This is now the starting point for Chef OpenStack integration examples
  and documentation. Many thanks to infra for the smooth de-mothballing.
  A special thanks to fungi for putting on his decoder ring on
  a weekend!
* The openstack-dns (Designate) and overcloud primitives (client)
  cookbooks have been rehomed to the openstack/ namespace, donated by
  jklare, calbers and frickler. (thanks!)
* Support for aodh has been added to the telemetry cookbook. Thanks to
  Seb-Solon for the patches!

### Integration

* Containerization is progressing, but decisions of old are starting to
  need revisiting. Networking is the main area where focus needs
  to happen.
* In past releases, Chef OpenStack pared down the integration testing to
  facilitate landing changes without clogging Zuul. Zuul v3
  allows some of the older methods to be replaced with lighter-weight
  playbooks. No doubt, as tests are reimplemented, the impact
  on build queue times will have to be a consideration again.

### Stabilization

* With Rocky stable packages nearing GA, the cookbooks
  will start focusing on stabilization in earnest. More to come.
* The mariadb 2.0 rewrite has not been released upstream in Sous Chefs.
  We are collaborating to test it in the Chef OpenStack framework and
  make a decision on when to release to Supermarket. The major change
  here is making it a pure set of resources, replacing the now-defunct
  database cookbook.

### On The Menu

*Slow Cooker Pulled Pork*
* 1 pork butt (shoulder cut) -- size matters not here, the same liquid
  measurements go for an average size as well as a large size
* Cookin' Sause (see below)
* 1 cup (240mL) cider vinegar
* 1 cup (240mL) beef stock (water works, too, but we like the flavor)
* 1-2 tsp (5-10mL) liquid smoke

 Cookin' Sause
* 1 cup (340g) yellow mustard
* 1/4 cup (57g) salt
* 1/4 cup (57g) ground black pepper
* 1/4 cup (57g) granulated garlic
* 1/4 cup (57g) granulated onion
* 1/4 cup (57g) ground cayenne

> Combine the spices and the mustard with a whisk. You can use the fancy
stuff here, but it's kind of a waste. Ol' Yella works just fine.
Your food, your call.

 Dippin' Sause -- not cookin' sause!

* 1 can tomato paste
* Cider vinegar
* Red pepper flakes

> There are no measurements on this because it's subjective. Trust your
senses and err on the side of needing to add more.

*to business!*

1. Rub pork butt with cookin' sause. Make that swine sublime.
2. Place that yellow mass of meat in your slow cooker
3. Add cider vinegar, stock, liquid smoke
4. Cook for 7.5-8 hours on low, until fork tender
5. Shred with forks until it doesn't look like mustard
6. Serve with dippin' sause, or use it as drownin' sause
7. Enjoy

Your humble line cook,
Samuel Cassiba (scas)



Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Samuel Cassiba
On Wed, Aug 1, 2018 at 5:21 AM, Andrey Kurilin  wrote:
> I can make an assumption that for marketing reasons, Slack Inc can propose
> extended Free plan.
> But anyway, even with default one the only thing which can limit us is
> `10,000 searchable messages` which is bigger than 0 (freenode doesn't store
> messages).
>
>
> Why I like slack? because a lot of people are familar with it (a lot of
> companies use it as like some opensource communities, like k8s )
>
> PS: I realize that OpenStack Community will never go away from Freenode and
> IRC, but I do not want to stay silent.
>

My response wasn't intended to become a wall of text, but my
individual experience dovetails with the ongoing thread. The intent
here is not to focus on one thing or the other, but to highlight some
of the strengths and drawbacks.

This is a great proposal on-paper. As you said, lots of people are
already familiar with the technology and concept at this point. It
generally seems to make sense.

The unfortunate reality is that a limit of N searchable
messages -- a count that applies to the whole instance -- would be exceeded
within the first few days due to the initial surge, requiring
tweaking, if that is even possible. Ten thousand messages is not much for a large,
distributed, culturally diverse group heavily entrenched in IRC, even
if it is a nice-looking number. There should not be a limit on
recorded history such as that, lest it be forgotten every few months.

From a technological perspective, that puts both such a proposal and
the existing solution at direct odds. Having a proprietary third-party
be the gatekeepers to chat-based outlets is not a good prospect over
the long-term. For recorded history, eavesdrop, by far, exceeds that
imposed value, by sheer virtue of it existing.

In freemium offerings, much knowledge gets blown to the aether in
exchange for gifs and emoji reactions. In these situations, of course,
the users are, by default, the product. That can have lasting
effects on a large, multicultural, open source
project already under siege on certain fronts.

Production OpenStack deployments have usually hitched their wagon to
OpenStack: The Project for a multi-year effort at a minimum, which can
and tends to involve some level of activity in parts of the community
over that time. People come and go, but the long-term goals have
generally remained the same.

While the long-term ramifications of large FLOSS communities being on
freemium proprietary platforms are just beginning to be felt, they're
not quite to the point of inertia yet. Short of paying obscene amounts
of money for chat, FLOSS alternatives need to be championed, far above
any proprietary options with a free welcome mat, no matter how awesome
and feature-rich they may be.

Making a change of this order, this far in, is a drastic undertaking.
I've been witness and participant in a similar migration, which took
place a few years ago. It was heralded with much fanfare, a new day
for engagement. It was full-on party parrot, until it wasn't.

To this day, there are still IRC stragglers, with one or two
experienced -- sometimes self-appointed -- individuals that
tirelessly, asynchronously, answer softball questions and redirect to
the other outlets for the more involved.

Extended community channels, like development channels, are just kind
of left to rot, with a topic that says "Go over here >". There is
very little moderation, which develops a certain narrative all on its
own.

Today, that community on the free offering is quieter, more vibrant
and immediately knowledgeable, albeit at the expense of recorded
history. Questions take on a recurring theme at times, requiring
one-to-one or one-to-many engagement for every question. The person
wanting some fish tonight doesn't have a clean lake or stream to catch
their dinner.

Unfortunately, some of those long-term effects are beginning to be
felt as of late, now that "everyone" is off of IRC. Fewer long-term
maintainers are sticking around, and even fewer are stepping up to
replace them. On the upside, more new users are always finding
their way to the slick proprietary chat group.

-scas



Re: [openstack-dev] [tripleo] deployement fails

2018-07-31 Thread Samuel Monderer
I used the same host network configuration files I used with Ocata (see attached).
Do I need to change them if I'm deploying Queens?

Thanks,
Samuel

On Tue, Jul 31, 2018 at 7:06 PM Alex Schultz  wrote:

> On Mon, Jul 30, 2018 at 8:48 AM, Samuel Monderer
>  wrote:
> > Hi,
> >
> > I'm trying to deploy a small environment with one controller and one
> compute
> > but i get a timeout with no specific information in the logs
> >
> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
> > CREATE_IN_PROGRESS  state changed
> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
> > CREATE_COMPLETE  state changed
> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED  CREATE
> > aborted (Task create from ResourceGroup "ComputeGammaV3" Stack
> "overcloud"
> > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED  Stack
> UPDATE
> > cancelled
> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED  Stack
> > CREATE cancelled
> > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED  CREATE
> aborted
> > (Task create from ResourceGroup "Controller" Stack "overcloud"
> > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
> > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED  Stack UPDATE
> > cancelled
> > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED  Stack
> CREATE
> > cancelled
> > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED
> > resources[0]: Stack CREATE cancelled
> >
> >  Stack overcloud CREATE_FAILED
> >
> > overcloud.ComputeGammaV3.0:
> >   resource_type: OS::TripleO::ComputeGammaV3
> >   physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7
> >   status: CREATE_FAILED
> >   status_reason: |
> > resources[0]: Stack CREATE cancelled
> > overcloud.Controller.0:
> >   resource_type: OS::TripleO::Controller
> >   physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7
> >   status: CREATE_FAILED
> >   status_reason: |
> > resources[0]: Stack CREATE cancelled
> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
> > Heat Stack create failed.
> > Heat Stack create failed.
> > (undercloud) [stack@staging-director ~]$
> >
>
> So this is a timeout likely caused by a bad network configuration so
> no response makes it back to Heat during the deployment. Heat never
> gets a response back so it just times out.  You'll need to check your
> host network configuration and trouble shoot that.
>
> Thanks,
> -Alex
>
> > It seems that it wasn't able to configure the OVS bridges
> >
> > (undercloud) [stack@staging-director ~]$ openstack software deployment
> show
> > 4b4fc54f-7912-40e2-8ad4-79f6179fe701
> >
> +---++
> > | Field | Value
> |
> >
> +---++
> > | id| 4b4fc54f-7912-40e2-8ad4-79f6179fe701
>  |
> > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b
>  |
> > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f
>  |
> > | creation_time | 2018-07-30T13:19:44Z
>  |
> > | updated_time  |
> |
> > | status| IN_PROGRESS
> |
> > | status_reason | Deploy data available
> |
> > | input_values  | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'}
> |
> > | action| CREATE
>  |
> >
> +---++
> > (undercloud) [stack@staging-director ~]$ openstack software deployment
> show
> > a297e8ae-f4c9-41b0-938f-c51f9fe23843
> >
> +---++
> > | Field | Value
> |
> >
> +---++
> > | id| a297e8ae-f4c9-41b0-938f-c51f9fe23843
>  |
> > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84
>  |
> > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f
>  |
> > | creation_time | 2018-07-30T13:17:29Z
>  |
> > | updated_time  |
> |
> > | status| IN_PROGRESS
> |
> > | status_reason | Deploy data available
> |
> > | input_values  | {u'interface_name': u'nic1', u'bridg

[openstack-dev] [tripleo] overcloud deployment fails with during keystone configuration

2018-07-31 Thread Samuel Monderer
Hi,

My overcloud deployment fails with the following error

2018-07-31 14:20:23Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_FAILED
Resource CREATE failed: Error: resources[0]: Deployment to server failed:
deploy_status_code : Deployment exited with non-zero status code: 2
2018-07-31 14:20:24Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_FAILED
Error: resources.ControllerDeployment_Step3.resources[0]: Deployment to
server failed: deploy_status_code: Deployment exited with non-zero status
code: 2
2018-07-31 14:20:24Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED
Resource CREATE failed: Error:
resources.ControllerDeployment_Step3.resources[0]: Deployment to server
failed: deploy_status_code: Deployment exited with non-zero status code: 2
2018-07-31 14:20:25Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED  Error:
resources.AllNodesDeploySteps.resources.ControllerDeployment_Step3.resources[0]:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 2
2018-07-31 14:20:25Z [overcloud]: CREATE_FAILED  Resource CREATE failed:
Error:
resources.AllNodesDeploySteps.resources.ControllerDeployment_Step3.resources[0]:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 2

 Stack overcloud CREATE_FAILED

overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: 69fd1d02-7e20-4d91-a7b4-552cdf4e42f2
  status: CREATE_FAILED
  status_reason: |
Error: resources[0]: Deployment to server failed: deploy_status_code :
Deployment exited with non-zero status code: 2
  deploy_stdout: |
...
"+ exit 1",
"2018-07-31 17:20:19,292 INFO: 74435 -- Finished processing
puppet configs for keystone_init_tasks",
"2018-07-31 17:20:19,293 ERROR: 74434 -- ERROR configuring
keystone_init_tasks"
]
}
to retry, use: --limit
@/var/lib/heat-config/heat-config-ansible/2fa9a52f-7e15-43fc-b67e-1ae358468790_playbook.retry

PLAY RECAP
*
localhost  : ok=9changed=2unreachable=0
failed=1

(truncated, view all with --long)
  deploy_stderr: |

Not cleaning temporary directory /tmp/tripleoclient-D67O5V
Not cleaning temporary directory /tmp/tripleoclient-D67O5V
Heat Stack create failed.
Heat Stack create failed.
(undercloud) [stack@staging-director ~]$


In the director keystone log I get the following

2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi
[req-22ee40c6-6daa-428d-aa39-06a96a4d5d3d - - - - -]
(pymysql.err.ProgrammingError) (1146, u"Table 'keystone.project' doesn't
exist") [SQL: u'SELECT project.id
 AS project_id, project.name AS project_name, project.domain_id AS
project_domain_id, project.description AS project_description,
project.enabled AS project_enabled, project.extra AS project_extra,
project.parent_
id AS project_parent_id, project.is_domain AS project_is_domain \nFROM
project \nWHERE project.is_domain = true'] (Background on this error at:
http://sqlalche.me/e/f405): ProgrammingError: (pymysql.err.Programmin
gError) (1146, u"Table 'keystone.project' doesn't exist") [SQL: u'SELECT
project.id AS project_id, project.name AS project_name, project.domain_id
AS project_domain_id, project.description AS project_description,
project.enabled AS project_enabled, project.extra AS project_extra,
project.parent_id AS project_parent_id, project.is_domain AS
project_is_domain \nFROM project \nWHERE project.is_domain = true']
(Background on t
his error at: http://sqlalche.me/e/f405)
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi Traceback (most
recent call last):
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi   File
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 226, in
__call__
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi result =
method(req, **params)
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi   File
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 126,
in wrapper
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return f(self,
request, filters, **kwargs)
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi   File
"/usr/lib/python2.7/site-packages/keystone/resource/controllers.py", line
54, in list_domains
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi refs =
PROVIDERS.resource_api.list_domains(hints=hints)
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi   File
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 116, in
wrapped
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi __ret_val =
__f(*args, **kwargs)
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi   File
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 68, in
wrapper
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return f(self,
*args, **kwargs)
2018-07-31 17:17:25.592 32 

Re: [openstack-dev] [tripleo] deployement fails

2018-07-31 Thread Samuel Monderer
Removing it just made it longer to time out

On Mon, Jul 30, 2018 at 7:51 PM, Remo Mattei  wrote:

> Take it off and check :)
>
>
>
> On Jul 30, 2018, at 09:46, Samuel Monderer 
> wrote:
>
> Yes
> I tried eith 60 and 120
>
> On Mon, Jul 30, 2018, 19:42 Remo Mattei  wrote:
>
>> Do you have a timeout set?
>>
>> > On Jul 30, 2018, at 07:48, Samuel Monderer <
>> smonde...@vasonanetworks.com> wrote:
>> >
>> > Hi,
>> >
>> > I'm trying to deploy a small environment with one controller and one
>> compute but i get a timeout with no specific information in the logs
>> >
>> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
>> CREATE_IN_PROGRESS  state changed
>> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
>> CREATE_COMPLETE  state changed
>> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED  CREATE
>> aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud"
>> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
>> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED  Stack
>> UPDATE cancelled
>> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
>> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED  Stack
>> CREATE cancelled
>> > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED  CREATE
>> aborted (Task create from ResourceGroup "Controller" Stack "overcloud"
>> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
>> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
>> > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED  Stack
>> UPDATE cancelled
>> > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED  Stack
>> CREATE cancelled
>> > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED
>> resources[0]: Stack CREATE cancelled
>> >
>> >  Stack overcloud CREATE_FAILED
>> >
>> > overcloud.ComputeGammaV3.0:
>> >   resource_type: OS::TripleO::ComputeGammaV3
>> >   physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7
>> >   status: CREATE_FAILED
>> >   status_reason: |
>> > resources[0]: Stack CREATE cancelled
>> > overcloud.Controller.0:
>> >   resource_type: OS::TripleO::Controller
>> >   physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7
>> >   status: CREATE_FAILED
>> >   status_reason: |
>> > resources[0]: Stack CREATE cancelled
>> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
>> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
>> > Heat Stack create failed.
>> > Heat Stack create failed.
>> > (undercloud) [stack@staging-director ~]$
>> >
>> > It seems that it wasn't able to configure the OVS bridges
>> >
>> > (undercloud) [stack@staging-director ~]$ openstack software deployment
>> show 4b4fc54f-7912-40e2-8ad4-79f6179fe701
>> > +---+---
>> -+
>> > | Field | Value
>>   |
>> > +---+---
>> -+
>> > | id| 4b4fc54f-7912-40e2-8ad4-79f6179fe701
>>|
>> > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b
>>|
>> > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f
>>|
>> > | creation_time | 2018-07-30T13:19:44Z
>>  |
>> > | updated_time  |
>>   |
>> > | status| IN_PROGRESS
>>   |
>> > | status_reason | Deploy data available
>>   |
>> > | input_values  | {u'interface_name': u'nic1', u'bridge_name':
>> u'br-ex'} |
>> > | action| CREATE
>>  |
>> > +---+---
>> -+
>> > (undercloud) [stack@staging-director ~]$ openstack software deployment
>> show a297e8ae-f4c9-41b0-938f-c51f9fe23843
>> > +---+---
>> -+
>> > | Field | Value
>>   |
>> > +---+---
>> -+
>> > | id| a297e8ae-f4c9-41b0-938f-c51f9fe23843
>>|
>> > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84
>>|
>> > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f

Re: [openstack-dev] [tripleo] deployement fails

2018-07-30 Thread Samuel Monderer
Yes
I tried with 60 and 120

On Mon, Jul 30, 2018, 19:42 Remo Mattei  wrote:

> Do you have a timeout set?
>
> > On Jul 30, 2018, at 07:48, Samuel Monderer 
> wrote:
> >
> > Hi,
> >
> > I'm trying to deploy a small environment with one controller and one
> compute but i get a timeout with no specific information in the logs
> >
> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
> CREATE_IN_PROGRESS  state changed
> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
> CREATE_COMPLETE  state changed
> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED  CREATE
> aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud"
> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED  Stack
> UPDATE cancelled
> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED  Stack
> CREATE cancelled
> > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED  CREATE
> aborted (Task create from ResourceGroup "Controller" Stack "overcloud"
> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
> > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED  Stack UPDATE
> cancelled
> > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED  Stack
> CREATE cancelled
> > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED
> resources[0]: Stack CREATE cancelled
> >
> >  Stack overcloud CREATE_FAILED
> >
> > overcloud.ComputeGammaV3.0:
> >   resource_type: OS::TripleO::ComputeGammaV3
> >   physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7
> >   status: CREATE_FAILED
> >   status_reason: |
> > resources[0]: Stack CREATE cancelled
> > overcloud.Controller.0:
> >   resource_type: OS::TripleO::Controller
> >   physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7
> >   status: CREATE_FAILED
> >   status_reason: |
> > resources[0]: Stack CREATE cancelled
> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
> > Heat Stack create failed.
> > Heat Stack create failed.
> > (undercloud) [stack@staging-director ~]$
> >
> > It seems that it wasn't able to configure the OVS bridges
> >
> > (undercloud) [stack@staging-director ~]$ openstack software deployment
> show 4b4fc54f-7912-40e2-8ad4-79f6179fe701
> >
> +---++
> > | Field | Value
> |
> >
> +---++
> > | id| 4b4fc54f-7912-40e2-8ad4-79f6179fe701
>  |
> > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b
>  |
> > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f
>  |
> > | creation_time | 2018-07-30T13:19:44Z
>  |
> > | updated_time  |
> |
> > | status| IN_PROGRESS
> |
> > | status_reason | Deploy data available
> |
> > | input_values  | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'}
> |
> > | action| CREATE
>  |
> >
> +---++
> > (undercloud) [stack@staging-director ~]$ openstack software deployment
> show a297e8ae-f4c9-41b0-938f-c51f9fe23843
> >
> +---++
> > | Field | Value
> |
> >
> +---++
> > | id| a297e8ae-f4c9-41b0-938f-c51f9fe23843
>  |
> > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84
>  |
> > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f
>  |
> > | creation_time | 2018-07-30T13:17:29Z
>  |
> > | updated_time  |
> |
> > | status| IN_PROGRESS
> |
> > | status_reason | Deploy data available
> |
> > | input_values  | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'}
> |
> > | action| CREATE
>  |
> >
> +---++
> > (undercloud) [stack@staging-director ~]$
> >
> > Regards,
> > Samuel
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:u

[openstack-dev] [tripleo] deployement fails

2018-07-30 Thread Samuel Monderer
Hi,

I'm trying to deploy a small environment with one controller and one
compute, but I get a timeout with no specific information in the logs.

2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
CREATE_IN_PROGRESS  state changed
2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
CREATE_COMPLETE  state changed
2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED  CREATE
aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud"
[690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED  Stack
UPDATE cancelled
2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED  Stack
CREATE cancelled
2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED  CREATE aborted
(Task create from ResourceGroup "Controller" Stack "overcloud"
[690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED  Stack UPDATE
cancelled
2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED  Stack CREATE
cancelled
2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED
resources[0]: Stack CREATE cancelled

 Stack overcloud CREATE_FAILED

overcloud.ComputeGammaV3.0:
  resource_type: OS::TripleO::ComputeGammaV3
  physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7
  status: CREATE_FAILED
  status_reason: |
resources[0]: Stack CREATE cancelled
overcloud.Controller.0:
  resource_type: OS::TripleO::Controller
  physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7
  status: CREATE_FAILED
  status_reason: |
resources[0]: Stack CREATE cancelled
Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
Heat Stack create failed.
Heat Stack create failed.
(undercloud) [stack@staging-director ~]$

It seems that it wasn't able to configure the OVS bridges

(undercloud) [stack@staging-director ~]$ openstack software deployment show
4b4fc54f-7912-40e2-8ad4-79f6179fe701
+---++
| Field | Value  |
+---++
| id| 4b4fc54f-7912-40e2-8ad4-79f6179fe701   |
| server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b   |
| config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f   |
| creation_time | 2018-07-30T13:19:44Z   |
| updated_time  ||
| status| IN_PROGRESS|
| status_reason | Deploy data available  |
| input_values  | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} |
| action| CREATE |
+---++
(undercloud) [stack@staging-director ~]$ openstack software deployment show
a297e8ae-f4c9-41b0-938f-c51f9fe23843
+---++
| Field | Value  |
+---++
| id| a297e8ae-f4c9-41b0-938f-c51f9fe23843   |
| server_id | 145167da-9b96-4eee-bfe9-399b854c1e84   |
| config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f   |
| creation_time | 2018-07-30T13:17:29Z   |
| updated_time  ||
| status| IN_PROGRESS|
| status_reason | Deploy data available  |
| input_values  | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} |
| action| CREATE |
+---++
(undercloud) [stack@staging-director ~]$

Regards,
Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] network isolation can't find files referred to on director

2018-07-28 Thread Samuel Monderer
Hi,

With my nic configs I get the following error

2018-07-26 16:42:49Z [overcloud.ComputeGammaV3.0.NetworkConfig]:
CREATE_FAILED  resources.NetworkConfig: Parameter
'InternalApiNetworkVlanID' is invalid: could not convert string to float:
2018-07-26 16:42:49Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED  Resource
CREATE failed: resources.NetworkConfig: Parameter
'InternalApiNetworkVlanID' is invalid: could not convert string to float:
2018-07-26 16:42:50Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED
resources.NetworkConfig: resources[0].Parameter 'InternalApiNetworkVlanID'
is invalid: could not convert string to float:
2018-07-26 16:42:50Z [overcloud.ComputeGammaV3]: UPDATE_FAILED  Resource
CREATE failed: resources.NetworkConfig: resources[0].Parameter
'InternalApiNetworkVlanID' is invalid: could not convert string to float:
2018-07-26 16:42:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED
resources.ComputeGammaV3: Resource CREATE failed: resources.NetworkConfig:
resources[0].Parameter 'InternalApiNetworkVlanID' is invalid: could not
convert string to float:
2018-07-26 16:42:51Z [overcloud]: CREATE_FAILED  Resource CREATE failed:
resources.ComputeGammaV3: Resource CREATE failed: resources.NetworkConfig:
resources[0].Parameter 'InternalApiNetworkVlanID' is invalid: could not
convert string to float:
2018-07-26 16:42:51Z [overcloud.ComputeGammaV3.0.NetIpMap]:
CREATE_COMPLETE  state changed

 Stack overcloud CREATE_FAILED

overcloud.ComputeGammaV3.0.NetworkConfig:
  resource_type: OS::TripleO::ComputeGammaV3::Net::SoftwareConfig
  physical_resource_id:
  status: CREATE_FAILED
  status_reason: |
resources.NetworkConfig: Parameter 'InternalApiNetworkVlanID' is
invalid: could not convert string to float:
Heat Stack create failed.
Heat Stack create failed.
(undercloud) [stack@staging-director ~]$ packet_write_wait: Connection to
192.168.50.30 port 22: Broken pipe


The parameter is defined as follows in the nic config file:

  InternalApiNetworkVlanID:
    default: ''
    description: Vlan ID for the internal_api network traffic.
    type: number

It worked fine when I was using RHOSP11 (Ocata).

The custom_network_data.yaml defines the internal network as follows:

- name: InternalApi
  name_lower: internal_api
  vip: true
  vlan: 711
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
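
Looking at the error, it seems the empty-string default is what Heat can no
longer convert to a number. A minimal sketch of a definition that should
validate, assuming a plain numeric default is acceptable (711 here simply
mirrors the VLAN from custom_network_data.yaml):

  InternalApiNetworkVlanID:
    default: 711
    description: Vlan ID for the internal_api network traffic.
    type: number

Is that the expected way to declare it now?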

Samuel

On Fri, Jul 27, 2018 at 7:41 PM, James Slagle 
wrote:

> On Thu, Jul 26, 2018 at 4:58 AM, Samuel Monderer
>  wrote:
> > Hi James,
> >
> > I understand the network-environment.yaml will also be generated.
> > What do you mean by rendered path? Will it be
> > "usr/share/openstack-tripleo-heat-templates/network/ports/"?
>
> Yes, the rendered path is the path that the jinja2 templating process
> creates.
>
> > By the way I didn't find any other place in my templates where I refer to
> > these files?
> > What about custom nic configs is there also a jinja2 process to create
> them?
>
> No. custom nic configs are by definition, custom to the environment
> you are deploying. Only you know how to properly define what newtork
> configurations needs applying.
>
> Our sample nic configs are generated from jinja2 now. For example:
> tripleo-heat-templates/network/config/single-nic-vlans/role.role.j2.yaml
>
> If you wanted to follow that pattern such that your custom nic config
> templates were generated, you could do that
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] PTL candidacy for Stein

2018-07-27 Thread Samuel Cassiba
Howdy!

I am submitting my name to continue as PTL for Chef OpenStack. If you
don't know me, I am scas on Freenode. I work for Workday, where I am an
active operator and upstream developer. I have contributed to OpenStack
since 2014, and joined the Chef core team in early 2015. Since then, I have
served as PTL for four cycles. I am also an active member of the
Sous-Chefs organization, which fosters maintainership of community Chef
cookbooks that could no longer be maintained by their author(s). My life
as a triple threat, as well as being largely in the deploy automation
space, gives me a unique perspective on the use cases for Chef
OpenStack.

Development continues to run about a release behind the coordinated
release in order to stabilize, due to contributor availability. In that
time, overall testing has improved, raising confidence in landing more
aggressive changes. Local testing infrastructure tends to run closer to
trunk to keep a pulse on how upstream changes will affect the cookbooks
closer to review time. This, in turn, influences the changes that do
pass the sniff test.

For Stein, I would like to focus on some of the efforts started during
Rocky.

* Awareness and Community

  Chef OpenStack is extremely powerful and flexible, but it is not easy
  for new contributors to get involved. That is, if they can find it,
  down the dark alley, through the barber shop, and behind the door with
  a secret knock. Documentation has been a handful of terse Markdown
  docs and READMEs that do not evolve as fast as the code, which I think
  impacts visibility and artificially creates a barrier to entry. I
  would like to place more emphasis on providing this more well-lit
  entry point for new and existing users alike.

* Consistency and HA

  Stability is never a given, but it is pretty close with Chef
  OpenStack. Each change runs through multiple, iterative tests before
  it hits Gerrit. However, not every change runs through those same
  tests in the gate due to the gap between local and integration. This
  natural gap has resulted in multiple chef-client versions and
  OpenStack configurations testing each change.  There have existed HA
  primitives in the cookbooks for years, but there are no published
  working examples. I am aiming to continue this effort to further
  reduce the human element in executing the tests.

* Continued work on containerization

  With efforts to deploy OpenStack in the context of containers, Chef
  OpenStack has not shared in the fanfare. I shipped a very shaky dokken
  support out of a hack day at the 2017 Chef Community Summit in
  Seattle, and have refined it over time to where it's consistently
  Doing A Thing. I have found regressions upstream (e.g. packaging), and
  have conservatively implemented workarounds to coax things into
  submission when the actual fix would take more months to land.  I wish
  to continue that effort, and expand to other Ansible-based and
  Kitchen-based integration scenarios to provide examples of how to get
  to OpenStack using Chef.

These are but some of my personal goals and aspirations. I hope to be
able to make progress on them all, but reality may temper those
aspirations.

I would love to connect with more new users and contributors. You can
reach out to me directly, or find me in #openstack-chef.

Thanks!

-scas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] network isolation can't find files referred to on director

2018-07-26 Thread Samuel Monderer
Hi James,

I understand the network-environment.yaml will also be generated.
What do you mean by rendered path? Will it be
"/usr/share/openstack-tripleo-heat-templates/network/ports/"?
By the way, I didn't find any other place in my templates where I refer to
these files.
What about custom nic configs? Is there also a jinja2 process to create them?

Samuel

On Thu, Jul 26, 2018 at 12:02 AM James Slagle 
wrote:

> On Wed, Jul 25, 2018 at 11:56 AM, Samuel Monderer
>  wrote:
> > Hi,
> >
> > I'm trying to upgrade from OSP11(Ocata) to OSP13 (Queens)
> > In my network-isolation I refer to files that do not exist anymore on the
> > director such as
> >
> >   OS::TripleO::Compute::Ports::ExternalPort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
> >   OS::TripleO::Compute::Ports::InternalApiPort:
> >
> /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
> >   OS::TripleO::Compute::Ports::StoragePort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
> >   OS::TripleO::Compute::Ports::StorageMgmtPort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
> >   OS::TripleO::Compute::Ports::TenantPort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
> >   OS::TripleO::Compute::Ports::ManagementPort:
> >
> /usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml
> >
> > Where have they gone?
>
> These files are now generated from network/ports/port.network.j2.yaml
> during the jinja2 template rendering process. They will be created
> automatically during the overcloud deployment based on the enabled
> networks from network_data.yaml.
>
> You still need to refer to the rendered path (as shown in your
> example) in the various resource_registry entries.
>
> This work was done to enable full customization of the created
> networks used for the deployment. See:
>
> https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html
>
>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Setting swift as glance backend

2018-07-25 Thread Samuel Monderer
Hi,

I would like to deploy a small overcloud with just one controller and one
compute for testing.
I want to use swift as the glance backend.
How do I configure the overcloud templates?

Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] network isolation can't find files referred to on director

2018-07-25 Thread Samuel Monderer
Hi,

I'm trying to upgrade from OSP11(Ocata) to OSP13 (Queens)
In my network-isolation I refer to files that do not exist anymore on the
director such as

  OS::TripleO::Compute::Ports::ExternalPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Compute::Ports::InternalApiPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::TenantPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml

Where have they gone?

Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral workflow cannot establish connection

2018-07-25 Thread Samuel Monderer
Hi Steve,

You were right; when I removed most of the roles, it worked.

I've encountered another problem. It seems that the network-isolation.yaml
I used with OSP11 is pointing to files that do not exist anymore such as

*  # Port assignments for the Controller role*
*  OS::TripleO::Controller::Ports::ExternalPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml*
*  OS::TripleO::Controller::Ports::InternalApiPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml*
*  OS::TripleO::Controller::Ports::StoragePort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml*
*  OS::TripleO::Controller::Ports::StorageMgmtPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml*
*  OS::TripleO::Controller::Ports::TenantPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml*
*  OS::TripleO::Controller::Ports::ManagementPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml*

Have they moved to a different location or are they created during the
overcloud deployment??

Thanks
Samuel

On Mon, Jul 16, 2018 at 3:06 PM Steven Hardy  wrote:

> On Sun, Jul 15, 2018 at 7:50 PM, Samuel Monderer
>  wrote:
> >
> > Hi Remo,
> >
> > Attached are templates I used for the deployment. They are based on a
> deployment we did with OSP11.
> > I made the changes for it to work with OSP13.
> >
> > I do think it's the roles_data.yaml file that is causing the error
> because if remove the " -r $TEMPLATES_DIR/roles_data.yaml" from the
> deployment script the deployment passes the point it was failing before but
> fails much later because of the missing definition of the role.
>
> I can't see a problem with the roles_data.yaml you provided, it seems
> to render ok using tripleo-heat-templates/tools/process-templates.py -
> are you sure the error isn't related to uploading the roles_data file
> to the swift container?
>
> I'd check basic CLI access to swift as a sanity check, e.g something like:
>
> openstack container list
>
> and writing the roles data e.g:
>
> openstack object create overcloud roles_data.yaml
>
> If that works OK then it may be an haproxy timeout - you are
> specifying quite a lot of roles, so I wonder if something is timing
> out during the plan creation phase - we had some similar issues in CI
> ref https://bugs.launchpad.net/tripleo-quickstart/+bug/1638908 where
> increasing the haproxy timeouts helped.
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral workflow cannot establish connection

2018-07-15 Thread Samuel Monderer
Hi Remo,

Attached are templates I used for the deployment. They are based on a
deployment we did with OSP11.
I made the changes for it to work with OSP13.

I do think it's the roles_data.yaml file that is causing the error, because
if I remove the "-r $TEMPLATES_DIR/roles_data.yaml" from the deployment
script, the deployment passes the point where it was failing before, but
fails much later because of the missing role definition.

Samuel

On Sun, Jul 15, 2018 at 8:35 PM Remo Mattei  wrote:

> I still think there is something wrong with some of your yaml, the
> roles_data is elaborating based on what your yaml files are. Can you share
> your deployment script did you make any of the yaml files yourself?
>
> Remo
>
> On Jul 15, 2018, at 8:57 AM, Remo Mattei  wrote:
>
> Here is the one I use
>
> 
>
>
>
> On Jul 15, 2018, at 8:02 AM, Samuel Monderer 
> wrote:
>
> It seems that the problem is in my roles_data.yaml file but I don't see
> what is the problem
> I've attached the file.
>
> On Sun, Jul 15, 2018 at 12:46 AM Remo Mattei  wrote:
>
>> It is a bad line in one of your yaml file. I would check them.
>>
>> Sent from my iPad
>>
>> On Jul 14, 2018, at 2:25 PM, Samuel Monderer <
>> smonde...@vasonanetworks.com> wrote:
>>
>> Hi,
>>
>> I'm trying to deploy redhat OSP13 but I get the following error.
>> (undercloud) [root@staging-director stack]# ./templates/deploy.sh
>> Started Mistral Workflow
>> tripleo.validations.v1.check_pre_deployment_validations. Execution ID:
>> 3ba53aa3-56c5-4024-8d62-bafad967f7c2
>> Waiting for messages on queue 'tripleo' with no timeout.
>> Removing the current plan files
>> Uploading new plan files
>> Started Mistral Workflow
>> tripleo.plan_management.v1.update_deployment_plan. Execution ID:
>> ff359b14-78d7-4b64-8b09-6ec3c4697d71
>> Plan updated.
>> Processing templates in the directory
>> /tmp/tripleoclient-ae4yIf/tripleo-heat-templates
>> Unable to establish connection to
>> https://192.168.50.30:13989/v2/action_executions: ('Connection
>> aborted.', BadStatusLine("''",))
>> (undercloud) [root@staging-director stack]#
>>
>> Couldn't find any info in the logs of what causes the error.
>>
>> Samuel
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral workflow cannot establish connection

2018-07-15 Thread Samuel Monderer
It seems that the problem is in my roles_data.yaml file, but I don't see
what the problem is.
I've attached the file.

On Sun, Jul 15, 2018 at 12:46 AM Remo Mattei  wrote:

> It is a bad line in one of your yaml file. I would check them.
>
> Sent from my iPad
>
> On Jul 14, 2018, at 2:25 PM, Samuel Monderer 
> wrote:
>
> Hi,
>
> I'm trying to deploy redhat OSP13 but I get the following error.
> (undercloud) [root@staging-director stack]# ./templates/deploy.sh
> Started Mistral Workflow
> tripleo.validations.v1.check_pre_deployment_validations. Execution ID:
> 3ba53aa3-56c5-4024-8d62-bafad967f7c2
> Waiting for messages on queue 'tripleo' with no timeout.
> Removing the current plan files
> Uploading new plan files
> Started Mistral Workflow
> tripleo.plan_management.v1.update_deployment_plan. Execution ID:
> ff359b14-78d7-4b64-8b09-6ec3c4697d71
> Plan updated.
> Processing templates in the directory
> /tmp/tripleoclient-ae4yIf/tripleo-heat-templates
> Unable to establish connection to
> https://192.168.50.30:13989/v2/action_executions: ('Connection aborted.',
> BadStatusLine("''",))
> (undercloud) [root@staging-director stack]#
>
> Couldn't find any info in the logs of what causes the error.
>
> Samuel
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


roles_data.yaml
Description: application/yaml
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Mistral workflow cannot establish connection

2018-07-14 Thread Samuel Monderer
Hi,

I'm trying to deploy redhat OSP13 but I get the following error.
(undercloud) [root@staging-director stack]# ./templates/deploy.sh
Started Mistral Workflow
tripleo.validations.v1.check_pre_deployment_validations. Execution ID:
3ba53aa3-56c5-4024-8d62-bafad967f7c2
Waiting for messages on queue 'tripleo' with no timeout.
Removing the current plan files
Uploading new plan files
Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan.
Execution ID: ff359b14-78d7-4b64-8b09-6ec3c4697d71
Plan updated.
Processing templates in the directory
/tmp/tripleoclient-ae4yIf/tripleo-heat-templates
Unable to establish connection to
https://192.168.50.30:13989/v2/action_executions: ('Connection aborted.',
BadStatusLine("''",))
(undercloud) [root@staging-director stack]#

Couldn't find any info in the logs of what causes the error.

Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] State of the Kitchen: 5th Edition

2018-07-03 Thread Samuel Cassiba
HTML: https://s.cassiba.com/openstack/state-of-the-kitchen-5th-edition/

This is the fifth installment of what is going on with Chef OpenStack. The aim
is to give a quick overview to see our progress and what is on the menu.
Feedback is always welcome on the content and what you'd like to see.

Last month's edition was rather delayed due to an emergency surgery on one of
my cats (he's doing fine) but other things took priority. Going forward, I'm
going to stick as close to the beginning of the month as I can.

This will be a thin installment, as there were only a few things of note.

### Notable Changes
* Nova APIs are now WSGI services handled by Apache.
  <https://review.openstack.org/575785>
* Keystone has been reduced down to a single 'public' endpoint.
  <https://review.openstack.org/#/q/topic:bp/simplify-identity-endpoint>

### Integration
* Dokken works on both platforms with an ugly workaround. Presently, this
  results in allinone scenarios converging and testing inside a container.
  <https://review.openstack.org/577814>

### Upcoming
* Testing against RDO Rocky packages works. More to come, probably in a blog
  post.
* Ubuntu 18.04 results in a mostly functional OpenStack instance, but it bumps
  into Python 3 problems along the way.
* The mariadb cookbook has undergone a significant refactor resulting in a
  2.0.0, but might not be updated until the focus switches to Rocky.

### On The Menu
*Not Really "Instant" Roast Beast* (makes 4 servings, 2 if you're hungry)

* 3 lbs / 1.3 kg bottom round beef roast, frozen to aid tenderizing
* 3 cups / 700ml beef stock
* 1 medium onion, sliced
* 1 tsp / 4.2g minced garlic
* Ground cayenne, granulated onion and garlic to taste.

1. Add a layer of sliced onions to the bottom of your electric pressure cooker
   (you DO have one, right?)
2. Add frozen(!) meat on top of the onions.
3. Add garlic, remaining onion pieces and powdered spices to the cooker.
   Do NOT add salt at this stage, as tempting as it may be.
4. Cook at medium pressure for 90 minutes. Allow for the pressure to reduce
   naturally. It can take an additional 30 minutes or more. Patience is
   rewarded.
5. Remove roast to a large dish, shred until it's to your preferred consistency.
   Optionally, remove the onion pieces, they've given their all.
6. Add xanthan gum or your preferred choice of thickener. Use cornstarch or
   flour if you're not super carb-conscious. Hit it with the immersion blender.
7. Return shredded meat to what could be misconstrued as gravy. Salt to taste
   and dig in. It gets better if you leave it overnight in the fridge to
   allow the flavors to redistribute.

Your humble line cook,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [release] How to handle "stable" deliverables releases

2018-06-12 Thread Samuel Cassiba
On Mon, Jun 11, 2018 at 2:53 AM, Thierry Carrez  wrote:
>
> 2bis/ Like 2, but only create the branch when needed
>
> Same as the previous one, except that rather than proactively create the
> stable branch around release time, we'd wait until the branch is actually
> needed to create it.
>

This is basically openstack-chef right now, from a natural progression
over time. In ye olden dayes, we were able to branch pretty soon after
the RDO and Ubuntu packages stabilized. Now, due to time needed and
engagement, it's an informal poll of the developer team to see who
objects or sees something showstopping, then carrying on with creating
the stable branch and releasing the artifacts to Supermarket.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-29 Thread Samuel Cassiba
On Tue, May 29, 2018 at 4:26 PM, Ian Wells  wrote:

> On 29 May 2018 at 14:53, Jeremy Stanley  wrote:
>
>> On 2018-05-29 15:25:01 -0500 (-0500), Jay S Bryant wrote:
>> [...]
>> > Maybe it would be different now that I am a Core/PTL but in the past I
>> had
>> > been warned to be careful as it could be misinterpreted if I was
>> changing
>> > other people's patches or that it could look like I was trying to pad my
>> > numbers. (I am a nit-picker though I do my best not to be.
>> [...]
>>
>> Most stats tracking goes by the Gerrit "Owner" metadata or the Git
>> "Author" field, neither of which are modified in a typical new
>> patchset workflow and so carry over from the original patchset #1
>> (resetting Author requires creating a new commit from scratch or
>> passing extra options to git to reset it, while changing the Owner
>> needs a completely new Change-Id footer).
>>
>
> We know this, but other people don't, so the comment is wise.  Also,
> arguably, if I badly fix someone else's patch, I'm making them look bad by
> leaving them with the 'credit' for my bad work, so it's important to be
> careful and tactful.  But the history is public record, at least.
>
>
If the patch is bad enough that I have to step in and rewrite it, I'm making
the submitter look bad no matter what. That makes everyone worse off.

Best,
Samuel


> --
> Ian.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-05-29 Thread Samuel Cassiba
t, for fear that it wind up some poor end-user's support
nightmare. Having quietly served as PTL for four cycles -- sometimes not as
quietly as others -- I've struggled with the notions of contributorship
versus maintainership. After this long at it, experience says a bunch of
well-intended contributors does not a maintained project make, unless their
heads can be in the right place (or wrong, depending on how salty you get
by reading this far) to consider it as such.

I really wish I had a good label for projects like openstack-chef, but
labels can be extremely caustic if misinterpreted, even applied with the
best of intentions. Things like 'needs-volunteers' come to mind, but that's
still casting things somewhat negatively, more akin to digital panhandling.
The end result should be a way of identifying the need for more investment
with a more positive inference in the public view, instead of the negative
connotations of 'low-activity'. Even 'maintenance-mode' paints negative
perceptions. Do YOU want to touch that janky, unmaintained stuff? Neither
do I.

To back down off my soapbox, the fact that projects are losing the
organizational diversity tag seems more a symptom of unwellness in what is
being measured, not necessarily irrelevance of the metric. Measuring in
terms of throughput and number of contributors is one thing, but the
outcome of the measure needs to feed back into better maintainership for
the overall health of OpenStack as a collection of open source projects.
Some of the destined 'low-activity' projects would do quite well with an
extra couple of part-timers if they aren't framed as being on the
proverbial junk pile.

Best,
Samuel Cassiba (scas)



> > --
> > Thierry Carrez (ttx)
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.op
> enstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] State of the Kitchen: 4th Edition

2018-05-27 Thread Samuel Cassiba
HTML: https://s.cassiba.com/openstack/state-of-the-kitchen-4th-edition/

This is the fourth installment of what is going on with Chef OpenStack. The
aim is to give a quick overview to see our progress and what is on the menu.
Feedback is always welcome on the efficacy of the content.

This edition will take a slightly different direction, as I am now
cross-posting this to my blog to increase exposure and to get the content
showing up on
OpenStack [Planet](http://planet.openstack.org). Going forward, this will be
formatted as Markdown.

### Announcements

* Queens release is nearing. Summit week slowed things down a little, but
  we're looking to be in good shape.
* Kitchen scenarios are now pinned to Chef 14. While Chef 13 is supported
  until Chef 15 release (April 2019 timeframe), master is not currently
  developing against it. All changes are currently still gated against
  Chef 13, so we have test coverage of both supported Chef major releases.
* ChefDK 3 has been released. Testing has not commenced with it, but
  patches are always welcome if you're impatient.

### Documentation

* [Contributor and install guides](https://review.openstack.org/569571)
  have been written to replace the ever-aging documentation in
  openstack-chef-repo.
* A more comprehensive deploy guide is beginning to take shape.

### Integration

* The mass deprecation of Rakefiles is still looking to be possible. The
  functionality from openstack-chef-repo/Rakefile will have to be
  retrofitted into Zuul jobs to get gating jobs for the supported platforms.
* Chef Delivery support has made it to the cookbooks. It is currently used
  in local testing, but will be making it to the gate soon.

### Containers

* Dokken works-ish. Yes, ish. Though, not for lack of trying. RDO has
  issues in networking due to iptables.
* All-in-one is the current focus, with
  [clean builds](https://review.openstack.org/566440) using UCA packages.

### Upgrades

* No updates this month.

### On The Menu

*Chicken Cordon Bleu Casserole* (makes 8-10 portions)

* 1500g chicken, cubed in 1" pieces
* 300g ham steak, cubed in 0.5" pieces
* 300g Swiss cheese
* 230ml Heavy Whipping Cream
* 230ml cream cheese / Neufchatel
* To taste: salt, pepper, garlic powder

#### Instructions

1. Cook whole pieces of chicken most of the way through so it isn't tough
   and rubbery. A little pink here is a good thing - it will finish in the
   oven. Slice into roughly 1" cubes.
2. Line the bottom of the pan with chicken cubes
3. Sprinkle salt, pepper and garlic powder (sorry non-US folks) over the
   chicken
4. Sprinkle ham cubes on top of the chicken
5. Shred Swiss cheese and spread over the mixture
6. Heat the cream cheese in the microwave, add the cream and mix. Pour
   mixture over the casserole.
7. Mix ingredients until incorporated. Overmixing will give a more pate-like
   texture.
8. Bake @ 350F / 176C for 40 minutes.

Your humble line cook,
Samuel Cassiba (scas)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] cannot configure host kernel-args for pci passthrough with first-boot

2018-05-22 Thread Samuel Monderer
Hi,

We found the cause of the problem.
We forgot the following in the first-boot.yaml:

outputs:
  # This means get_resource from the parent template will get the userdata, see:
  # http://docs.openstack.org/developer/heat/template_guide/composition.html#making-your-template-resource-more-transparent
  # Note this is new-for-kilo, an alternative is returning a value then using
  # get_attr in the parent template instead.
  OS::stack_id:
    value: {get_resource: userdata}
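
For anyone hitting the same thing, here is a minimal first-boot sketch showing
where that outputs section sits (the kernel_args_config name and the script
body are illustrative placeholders, not our actual template):

heat_template_version: 2014-10-16

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: kernel_args_config}

  kernel_args_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        set -x
        # append the pci passthrough kernel args and rebuild grub here

outputs:
  # Without this output, the parent template never receives the userdata.
  OS::stack_id:
    value: {get_resource: userdata}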

Samuel

On Tue, May 22, 2018 at 8:05 AM Saravanan KR <skram...@redhat.com> wrote:

> Could you check the log in the /var/log/cloud-init-output.log file to
> see what are the first-boot scripts which are executed on the node?
> Add "set -x" in the kernel-args.sh file to better logs.
>
> Regards,
> Saravanan KR
>
> On Tue, May 22, 2018 at 12:49 AM, Samuel Monderer
> <smonde...@vasonanetworks.com> wrote:
> > Hi,
> >
> > I'm trying to build a new OS environment with RHOSP 11 with a compute has
> > that has GPU card.
> > I've added a new role and a firstboot template to configure the kernel
> args
> > to allow pci-passthrough.
> > For some reason the firstboot is not working (can't see the changes on
> the
> > compute node)
> > Attached are the templates I used to deploy the environment.
> >
> > I used the same configuration I used for a compute role with sr-iov and
> it
> > worked there.
> > Could someone tell me what I missed?
> >
> > Regards,
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] State of the Kitchen - 3rd Edition

2018-04-20 Thread Samuel Cassiba
This is the third installment of what is going on in Chef OpenStack. The
goal is to give a quick overview to see our progress and what is on the
menu. Feedback is always welcome on the usefulness of the content.

Appetizers
==
=> Chef 14 support has arrived in the cookbooks. Test Kitchen will be
updated to 14 Soon(tm). The gate is still testing against 13. The 12
release is considered EOL as of May 1, 2018, so we will not be able to
support releases older than 13 at that time.
https://blog.chef.io/2018/04/19/whats-new-in-chef-14-and-chefdk-3/
=> Numerous community cookbooks received updates, the highest visibility
being Poise itself. This resolves issues with installing pip 10 on both
platforms, and system Python on RHEL.

Entrees
===
=> Installing Python has been centralized to the common cookbook, as
opposed to multiple attempts to install the same Python instance. This
produces a more consistent, repeatable outcome.
=> The dokken yaml has been fixed up to allow for testing in containers
once more.
=> Work has begun on overhauling the aging documentation, in an attempt to
align things closer to community standards. Parts are shamelessly inspired
by other projects (Puppet OpenStack, OpenStack-Ansible), so it will look
a bit familiar in some places.

Desserts

=> Rakefiles are going away! As tooling has matured and the ChefDK has
emerged, the functionality that the reliable Rakefiles provide is being
replaced with tools such as Test Kitchen and Delivery.

On The Menu
===
=> Creamy Jalapeno Sauce
-- 1 cup (170g) sour cream / creme fraiche
-- 1 cup (170g) mayonnaise
-- 5 tbsp (75g) dry Ranch dressing powder
-- 2 tbsp (28g) dry Jalapeno powder
-- 4-5 pickled jalapeno chiles, with the stem removed (use some of the
pickling juice to thin things out if the consistency is too thick)
-- 1/2 cup (64g) fresh picked cilantro (dry works here, but... dry)
-- 1/2 cup (64g) salsa verde
-- 2 tbsp (28g) lime juice
-- (Optional) Heavy cream / double cream if the consistency is too thin

Add ingredients to a blender or food processor. Blend until desired
consistency, or until you do not see pieces of jalapeno.

Your humble line cook,
Samuel Cassiba (scas)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] State of the Kitchen - 2nd Edition

2018-03-16 Thread Samuel Cassiba
This is the second edition of what is going on in Chef OpenStack. The
goal is to give a quick overview to see our progress and what is on
the menu. Feedback is always welcome, as this is an iterative thing.

Appetizers

=> Pike has been branched! Supermarket has also received a round of
updates. https://supermarket.chef.io/users/openstack
=> chef-client 13.8 has been released, allowing the scenarios to
continue tracking the latest 13 series.
https://discourse.chef.io/t/chef-client-13-8-released/12652

Entrees
==
=> Queens development has commenced. Preliminary lab testing has
yielded positive results in Test Kitchen. Most changes seem to revolve
around deprecation chasing. https://review.openstack.org/550963 &
https://review.openstack.org/#/q/status:open+topic:queens_updates
=> Nova is continuing the trend of operating as an Apache web service.
https://review.openstack.org/552299

Desserts
===
=> The client (fog wrapper) and dns (Designate) cookbooks will be
coming home after stabilizing in Pike.
=> Chef 14 and ChefDK 3 is a thing next month. A heads-up will be sent
to this ML before this enters the gate.
https://blog.chef.io/2018/02/16/preparing-for-chef-14-and-chef-12-end-of-life/
=> More to come with upgrades. Stay tuned for specs and patches.

On The Menu
===
=> Buffalo Chicken Dip
-- 3-4 raw chicken breasts (flash-frozen gives a slightly different
mouth feel. it still makes food, so, you do you, boo)
-- 8 ounces (226g) cream cheese / Neufchatel
-- 1 cup (128g) hot sauce (Frank's RedHot recommended. substitute for
your own preferred pepper sauce)
-- 1 ounce (28g) dry ranch seasoning (substitute for store-bought
powder, or salad dressing from a bottle, if you must - ranch or bleu
cheese works here)
-- 4 ounces (113g) butter (grass-fed recommended because delicious)
Optional:
-- 4 slices cooked and crumbled (streaky) bacon
-- Cheese (shredded or cubed for melting consistency)

Add the chicken to a slowcooker in a single layer, if you have room.
Add hot sauce, butter, ranch right on top of the chicken. Cook on high
for 4 hours. Remove heat, drain juices, reserving juices. Shred
chicken. Add cream cheese, incorporate thoroughly. Reincorporate the
juices, gradually and thoroughly, taking care not to obliterate the
chicken, unless you like tangy, cheesy chicken mash. Serve as an
appetizer, or dig in with a fork.

Your humble cook,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Pike cookbooks released

2018-03-01 Thread Samuel Cassiba
Ohai!

The Chef OpenStack team is excited to announce that the 16.0 release
of the cookbooks is fresh out of the oven! This corresponds with the
Pike release of OpenStack. The cookbooks have been published to
Supermarket under the OpenStack namespace located at
https://supermarket.chef.io/users/openstack

The following cookbooks received updates with this release:

- openstack-block-storage
- openstack-common
- openstack-compute
- openstack-dashboard
- openstack-identity
- openstack-image
- openstack-integration-test
- openstack-network
- openstack-ops-database
- openstack-ops-messaging
- openstack-orchestration
- openstack-telemetry

In this release, we also leverage the following external cookbooks
that were updated in tandem:
- openstackclient
- openstack-dns (Designate)

The main focus of the release has been cookbook stabilization and
improvement of functional testing. Local testing has been overhauled
in favor of Test Kitchen (https://kitchen.ci) and InSpec
(https://www.inspec.io/), which provides a more consistent interface.
The RDBMS flavor has also changed to MariaDB, dropping MySQL from the
tested scenarios.
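
For those following along at home, the local workflow now centers on a
.kitchen.yml at the top level of each cookbook. A minimal sketch of the
shape it takes (the platform list, run_list and test paths below are
illustrative assumptions, not the shipped configuration):

---
driver:
  name: vagrant     # the dokken driver can be swapped in for container runs

provisioner:
  name: chef_zero   # converge against a local in-memory Chef server

verifier:
  name: inspec      # kitchen-inspec runs the controls after converge

platforms:
  - name: ubuntu-16.04
  - name: centos-7

suites:
  - name: default
    run_list:
      - recipe[openstack-common::default]   # placeholder, not the real run_list
    verifier:
      inspec_tests:
        - test/integration/default          # directory of InSpec controls

Running "kitchen test" then converges each platform and executes the
InSpec controls in one pass, which is where the more consistent
interface comes from.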

This also marks the first release developed and tested on Chef 13,
with Chef 12 now being unsupported in master. If you need to use an
older release of OpenStack with Chef 13, this will give you a
blueprint for what needs to be backported.

Prost!

Your humble cook,
Samuel Cassiba (sc` / scas)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] State of the Kitchen - 1st Edition

2018-02-16 Thread Samuel Cassiba
This is the first edition of what is going on in Chef OpenStack. The
goal is to give a quick overview to see our progress and what is on
the menu.

Appetizers

=> Focus is on branching stable/pike and releasing Pike to Supermarket
before the end of February if possible.
=> Tempest will continue to focus on deploying from git instead of
packages. This provides a more consistent outcome.
=> Designate cookbook works with Pike and Queens in Ubuntu. CentOS is WIP.
=> A deploy guide on using Chef OpenStack in various scenarios is
being formulated. Any help here is welcome, even a rubber duck.

Entrees
==
=> Chef 13 has landed in master (encompassing a staggering 2+ years of
deprecations)
- https://review.openstack.org/#/q/topic:bp/modern-chef
=> Test Kitchen is in openstack-chef-repo, with allinone, basic
multinode and container-based scenarios.
- https://git.openstack.org/cgit/openstack/openstack-chef-repo/
=> MariaDB is being sourced from mariadb.org for consistency in outcome.

Desserts
===
=> Rakefiles are going away in favor of delivery local in Queens.
- https://docs.chef.io/delivery_cli.html
=> Test Kitchen will become the focal point of CI, once we get the
right power adapter for Ansible.
=> Upgrades! Upgr... you get the idea. :-)

What's Cooking?
=
=> A Bowl of Red
measurements are geared for Americans, metric is approximate. adjust
where appropriate.
-- 4 lbs (1800 g) coarse ground beef
-- 1/4 cup (60 ml) beef stock for added flavor and moisture
-- 1 oz (28 g) chili powder (without salt, to control salinity)
-- 4 or 5 chipotle chiles, minced, with adobo sauce, to taste
-- 1 29 oz can (857 ml) of tomato sauce
-- 1 tsp (4.7 g) each: kosher salt, ground black and white peppercorns
-- 1 tbsp (14.3 g) each:
--- onion powder
--- paprika
--- ground cumin
--- ground cayenne
--- ground jalapeño
-- 1 box baby wipes, any brand

Add ingredients to slowcooker, breaking up the meat as you add it.
Cook on high for 4 hours, or until the aroma of cumin takes you. Serve
straight up, or with shredded cheese and sour cream to tame the heat.
Apply baby wipes when appropriate. Gets hotter overnight.

Your humble cook,
Samuel Cassiba (sc` / scas)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef][ptl] PTL candidacy for Rocky

2018-02-02 Thread Samuel Cassiba
Ohai!

I am seeking to continue as PTL for Chef OpenStack, also known as
openstack-chef.

The tl;dr of my candidacy, which can be read at
https://review.openstack.org/539211 would be:
- The cookbooks are getting better code-wise, but we're not in a good
place people-wise to facilitate handing over the reins just yet.
- CI and pipelines are a focus of this cycle, to aid in delivering
code changes and project visibility.
- For a codebase as complex as openstack-chef, to keep it out of
irrelevance, the barrier to delivering change must be lowered
immensely.

In the last cycle, in addition to delivering Chef 13 support to the
cookbooks (2+ years worth of deprecations!), I successfully negotiated
a delicate, downright awkward, trademark issue on behalf of OpenStack.
The outcome of this was to further increase the visibility of
OpenStack's output in the open source community. The openstack-chef
community also introduced Test Kitchen and InSpec support to the
cookbooks, which enables us to further close the gap between CI and
local testing.

As always, openstack-chef needs more reviewers and developers, but
testers especially. Without a consistent feedback loop, the codebase
starts to exist in a quasi-vacuum. As our pace typically keeps us a
release behind, the loop doesn't really close until the "self-LTS"
deployers of OpenStack look to the next release. Without someone to
keep things moving forward, progress stagnates, and, eventually, even
the stalwarts look elsewhere for an upstream.

Thank you for reading.

Delightfully,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Heads up - Chef 13 is incoming

2017-11-26 Thread Samuel Cassiba
Ohai!

With Chef 12 EOL approaching, Pike was originally intended to be the
final release exclusively supporting Chef 12, with Chef 13 being
deferred to Queens development. However, Chef 13 support is here at
long last. Chef 12 clients should be able to work using the same
cookbooks for now, as in, it works in Test Kitchen, but YMMV past
Pike.

Airgapped deployers especially need to take notice, as MariaDB packages
are now coming from mariadb.org and not the distro repos. Software
Collections also comes configured out of the box for CentOS.

If you run into any problems, #openstack-chef is always open. Just
keep in mind that the team is very distributed, so park a client for a
while.

-- 
Best,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Composable role OVS-DPDK compute node with single NIC

2017-11-21 Thread Samuel Monderer
http://paste.openstack.org/show/626557/

On Tue, Nov 21, 2017 at 8:22 PM Ben Nemec <openst...@nemebean.com> wrote:

> Your configuration lost all of its indentation, which makes it extremely
> difficult to read.  Can you try sending it a different way, maybe
> paste.openstack.org?
>
> On 11/16/2017 02:43 AM, Samuel Monderer wrote:
> > Hi,
> >
> > I managed to deploy a compute node with ovs-dpdk using two NICs. The
> > first for the provisioning network and control plane, the other NIC is
> > used tenant network over ovs-dpdk.
> >
> > I then tried to use only a single nic for provisioning and ovs-dpdk.
> > I used the nic configuration below for the compute nodes running
> > ovs-dpdk but encountered two problems.
> > First the tenant network was working (wasn't able to get DHCP running
> > and even when I manually configured it wasn't able to reach the router)
> > Second the default route on control plane is not set even though it is
> > configured in /etc/sysconfig/network-scripts/route-br-ex
> >
> > Samuel
> >
> > OsNetConfigImpl:
> > type: OS::Heat::StructuredConfig
> > properties:
> > group: os-apply-config
> > config:
> > os_net_config:
> > network_config:
> > -
> > type: ovs_user_bridge
> > name: {get_input: bridge_name}
> > use_dhcp: false
> > dns_servers: {get_param: DnsServers}
> > addresses:
> > -
> > ip_netmask:
> > list_join:
> > - '/'
> > - - {get_param: ControlPlaneIp}
> > - {get_param: ControlPlaneSubnetCidr}
> > routes:
> > -
> > ip_netmask: 169.254.169.254/32
> > next_hop: {get_param: EC2MetadataIp}
> > -
> > default: true
> > next_hop: {get_param: ControlPlaneDefaultRoute}
> > members:
> > -
> > type: ovs_dpdk_port
> > name: dpdk0
> > members:
> > -
> > type: interface
> > name: nic1
> > -
> > type: vlan
> > vlan_id: {get_param: InternalApiNetworkVlanID}
> > addresses:
> > -
> > ip_netmask: {get_param: InternalApiIpSubnet}
> > -
> > type: vlan
> > vlan_id: {get_param: TenantNetworkVlanID}
> > addresses:
> > -
> > ip_netmask: {get_param: TenantIpSubnet}
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] Composable role OVS-DPDK compute node with single NIC

2017-11-16 Thread Samuel Monderer
Hi,

I managed to deploy a compute node with ovs-dpdk using two NICs. The first
for the provisioning network and control plane, the other NIC is used
tenant network over ovs-dpdk.

I then tried to use only a single nic for provisioning and ovs-dpdk.
I used the nic configuration below for the compute nodes running ovs-dpdk
but encountered two problems.
First the tenant network was working (wasn't able to get DHCP running and
even when I manually configured it wasn't able to reach the router)
Second the default route on control plane is not set even though it is
configured in /etc/sysconfig/network-scripts/route-br-ex

Samuel

OsNetConfigImpl:
  type: OS::Heat::StructuredConfig
  properties:
    group: os-apply-config
    config:
      os_net_config:
        network_config:
          -
            type: ovs_user_bridge
            name: {get_input: bridge_name}
            use_dhcp: false
            dns_servers: {get_param: DnsServers}
            addresses:
              -
                ip_netmask:
                  list_join:
                    - '/'
                    - - {get_param: ControlPlaneIp}
                      - {get_param: ControlPlaneSubnetCidr}
            routes:
              -
                ip_netmask: 169.254.169.254/32
                next_hop: {get_param: EC2MetadataIp}
              -
                default: true
                next_hop: {get_param: ControlPlaneDefaultRoute}
            members:
              -
                type: ovs_dpdk_port
                name: dpdk0
                members:
                  -
                    type: interface
                    name: nic1
              -
                type: vlan
                vlan_id: {get_param: InternalApiNetworkVlanID}
                addresses:
                  -
                    ip_netmask: {get_param: InternalApiIpSubnet}
              -
                type: vlan
                vlan_id: {get_param: TenantNetworkVlanID}
                addresses:
                  -
                    ip_netmask: {get_param: TenantIpSubnet}
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Samuel Cassiba
On Tue, Nov 14, 2017 at 11:28 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:
> On 11/14/2017 05:08 PM, Bogdan Dobrelya wrote:
>>>>
>>>> The concept, in general, is to create a new set of cores from these
>>>> groups, and use 3rd party CI to validate patches. There are lots of
>>>> details to be worked out yet, but our amazing UC (User Committee) will
>>>> begin working out the details.
>>>
>>>
>>> What is the most worrying is the exact "take over" process. Does it mean
>>> that the teams will give away the +2 power to a different team? Or will our
>>> (small) stable teams still be responsible for landing changes? If so, will
>>> they have to learn how to debug 3rd party CI jobs?
>>>
>>> Generally, I'm scared of both overloading the teams and losing the
>>> control over quality at the same time :) Probably the final proposal will
>>> clarify it..
>>
>>
>> The quality of backported fixes is expected to be a direct (and only?)
>> interest of those new teams of new cores, coming from users and operators
>> and vendors.
>
>
> I'm not assuming bad intentions, not at all. But there is a lot involved
> in a decision whether to make a backport or not. Will these people be able
> to evaluate a risk of each patch? Do they have enough context on how that
> release was implemented and what can break? Do they understand why feature
> backports are bad? Why they should not skip (supported) releases when
> backporting?
>
> I know a lot of very reasonable people who do not understand the things
> above really well.
>

I think there is more of a general "yes, but..." feel and not so much
a misunderstanding or lack of understanding entirely. With my operator
and PTL hats on, I'm in favor of a release cadence that is favorable
for the *people* involved. It's already proven that the current model
is broken or lacking in some way, simply by having these
conversations. With the status quo, it's almost a death march from one
release to the next, but nobody really wants to prolong that pain
because this topic comes up again and again.

Ideally, contributors are empowered enough to pick up the reins and
deliver the changes themselves, and some are, but it's pretty damned
daunting from the outside. The new contributors who want to contribute
but don't see the way in, probably because we haven't said mellon, are
left scratching their heads and eventually deem OpenStack as Not
Ready. It's almost like a perception exists that being able to even
submit a one-line patch is a gate to admittance. Unfortunately, less
and less are willing to pay that toll, no matter how nice the project
is on the other side.

-- 
Best,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-10 Thread Samuel Cassiba
On Fri, Nov 10, 2017 at 2:51 PM, John Dickinson  wrote:
> What I heard from ops in the room is that they want (to start) one release a
> year whose branch isn't deleted after a year. What if that's exactly what we
> did? I propose that OpenStack only do one release a year instead of two. We
> still keep N-2 stable releases around. We still do backports to all open
> stable branches. We still do all the things we're doing now, we just do it
> once a year instead of twice.
>

This seems like a much more reasonable proposal with less of a musical
chairs feeling. The spun up software developer in my basement nods in
violent agreement with the idea, and the tortured QA engineer I keep
locked up out back would love nothing more than some extra time to
test.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-08 Thread Samuel Cassiba
On Wed, Nov 8, 2017 at 11:17 AM, Doug Hellmann <d...@doughellmann.com> wrote:
> Excerpts from Samuel Cassiba's message of 2017-11-08 08:27:12 -0800:
>> On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick
>> <emccorm...@cirrusseven.com> wrote:
>> > Hello Ops folks,
>> >
>> > This morning at the Sydney Summit we had a very well attended and very
>> > productive session about how to go about keeping a selection of past
>> > releases available and maintained for a longer period of time (LTS).
>> >
>> > There was agreement in the room that this could be accomplished by
>> > moving the responsibility for those releases from the Stable Branch
>> > team down to those who are already creating and testing patches for
>> > old releases: The distros, deployers, and operators.
>> >
>> > The concept, in general, is to create a new set of cores from these
>> > groups, and use 3rd party CI to validate patches. There are lots of
>> > details to be worked out yet, but our amazing UC (User Committee) will
>> > begin working out the details.
>> >
>> > Please take a look at the Etherpad from the session if you'd like to
>> > see the details. More importantly, if you would like to contribute to
>> > this effort, please add your name to the list starting on line 133.
>> >
>> > https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
>> >
>> > Thanks to everyone who participated!
>> >
>> > Cheers,
>> > Erik
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> In advance, pardon the defensive tone. I was not in a position to
>> attend, or even be in Sydney. However, as this comes across the ML, I
>> can't help but get the impression this effort would be forcing more
>> work on already stretched teams, ie. deployment-focused development
>> teams already under a crunch as contributor count continues to decline
>> in favor of other projects inside and out of OpenStack.
>>
>> As a friendly reminder, Chef is still actively developed, though we've
>> not had a great return from recruiting more people. We have about 3.5
>> active developers, including active cores, and non-cores who felt it
>> worthwhile to contribute back upstream. There is no major corporate
>> backer here, but merely a handful of potentially stubborn volunteers.
>> Nobody is behind the curtain, but Chef OpenStack still has a few
>> active users (once again, I point to the annual User Survey results)
>> and contributors. However, we do not use the MLs as a primary
>> communication means, so I can see how we might be forgotten or
>> ignored.
>>
>> In practice, no one likes talking about Chef OpenStack that I've
>> experienced, neither in the Chef or OpenStack communities. However, as
>> a maintainer, I keep making it a point to bring it up when it seems
>> the project gets papered over, or the core team gets signed up for
>> more work decided in a room half a world away. Admittedly, the whole
>> deployment method is a hard sell if you're not using Chef in some way.
>> It has always been my takeaway that the project was merely tolerated
>> under the OpenStack designation, neither embraced nor even liked, even
>> being the "official" OpenStack deployment method for a major
>> deployment toolset. The Foundation's support has been outstanding when
>> we've needed it, but that's about as far as the delightful goes. The
>> Chef community is a bit more tolerant of someone using the Chef
>> moniker for OpenStack, but migrating from Gerrit to GitHub is a major
>> undertaking that the development team may or may not be able to
>> reasonably support without more volunteers. Now that the proposition
>> exists about making a Stable Release liaison derived from existing
>> cores, I can't help but get the impression that, for active-but-quiet
>> projects, it'll be yet another PTL responsibility to keep up with, in
>> addition to the rigors that already come with the role. I'm hoping
>> I'll be proven wrong here, but I can and do get in trouble for hoping.
>>
>
> There are still a lot of details to work out, so the announcement
> of an "agreement" is a bit premature. Rest assured, however, that
> the proposed change is not about "requiring," or eve

Re: [openstack-dev] Upstream LTS Releases

2017-11-08 Thread Samuel Cassiba
On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick
<emccorm...@cirrusseven.com> wrote:
> Hello Ops folks,
>
> This morning at the Sydney Summit we had a very well attended and very
> productive session about how to go about keeping a selection of past
> releases available and maintained for a longer period of time (LTS).
>
> There was agreement in the room that this could be accomplished by
> moving the responsibility for those releases from the Stable Branch
> team down to those who are already creating and testing patches for
> old releases: The distros, deployers, and operators.
>
> The concept, in general, is to create a new set of cores from these
> groups, and use 3rd party CI to validate patches. There are lots of
> details to be worked out yet, but our amazing UC (User Committee) will
> begin working out the details.
>
> Please take a look at the Etherpad from the session if you'd like to
> see the details. More importantly, if you would like to contribute to
> this effort, please add your name to the list starting on line 133.
>
> https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
>
> Thanks to everyone who participated!
>
> Cheers,
> Erik
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

In advance, pardon the defensive tone. I was not in a position to
attend, or even be in Sydney. However, as this comes across the ML, I
can't help but get the impression this effort would be forcing more
work on already stretched teams, ie. deployment-focused development
teams already under a crunch as contributor count continues to decline
in favor of other projects inside and out of OpenStack.

As a friendly reminder, Chef is still actively developed, though we've
not had a great return from recruiting more people. We have about 3.5
active developers, including active cores, and non-cores who felt it
worthwhile to contribute back upstream. There is no major corporate
backer here, but merely a handful of potentially stubborn volunteers.
Nobody is behind the curtain, but Chef OpenStack still has a few
active users (once again, I point to the annual User Survey results)
and contributors. However, we do not use the MLs as a primary
communication means, so I can see how we might be forgotten or
ignored.

In practice, no one likes talking about Chef OpenStack that I've
experienced, neither in the Chef or OpenStack communities. However, as
a maintainer, I keep making it a point to bring it up when it seems
the project gets papered over, or the core team gets signed up for
more work decided in a room half a world away. Admittedly, the whole
deployment method is a hard sell if you're not using Chef in some way.
It has always been my takeaway that the project was merely tolerated
under the OpenStack designation, neither embraced nor even liked, even
being the "official" OpenStack deployment method for a major
deployment toolset. The Foundation's support has been outstanding when
we've needed it, but that's about as far as the delightful goes. The
Chef community is a bit more tolerant of someone using the Chef
moniker for OpenStack, but migrating from Gerrit to GitHub is a major
undertaking that the development team may or may not be able to
reasonably support without more volunteers. Now that the proposition
exists about making a Stable Release liaison derived from existing
cores, I can't help but get the impression that, for active-but-quiet
projects, it'll be yet another PTL responsibility to keep up with, in
addition to the rigors that already come with the role. I'm hoping
I'll be proven wrong here, but I can and do get in trouble for hoping.

-- 
Best,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Samuel Cassiba
Jonathan Proulx <j...@csail.mit.edu> wrote:

On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:

:OpenStack is big. Big enough that a user will likely be fine with  
learning

:a new set of tools to manage it.

New users in the startup sense of new, probably.

People with entrenched environments, I doubt it.

But OpenStack is big. Big enough I think all the major config systems
are fairly well represented, so whether I'm right or wrong this
doesn't seem like an issue to me :)

Having common targets (constellations, reference architectures,
whatever) so all the config systems build the same things (or a subset
or superset of the same things) seems like it would have benefits all
around.

-Jon




--
Best,

Samuel Cassiba


signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Samuel Cassiba

> On Sep 25, 2017, at 22:44, Adam Lawson <alaw...@aqorn.com> wrote:
> 
> Hey Jay,
> I think a GUI with a default config is a good start. Much would need to 
> happen to enable that of course but that's where my mind goes. Any talk about 
> 'default' kind of infringes on what we've all strived to embrace; a cloud 
> architecture without baked-in assumptions. A default-anything need not mean 
> other options are not available - only that a default gets them started. I 
> would never ever agree to a default that consists of KVM+Contrail+NetApp. 
> Something neutral would be great- easier said than done of course.
> 
> Samuel,
> Default configuration as I envision it != "Promoting a single solution". I 
> really hope a working default install would allow new users to get started 
> with OpenStack without promoting anything. OpenStack lacking a default install 
> results in an unfriendly deployment exercise. I know for a fact the entire 
> community at webhostingtalk.com ignores OS for the most part because of how 
> hard it is to deploy. They use Fuel or other third-party solutions because we 
> as an OS community continue to fail to acknowledge the importance of easier 
> implementation. Imagine thousands of hosting providers deploying OpenStack 
> because we made it easy. That is money in the bank IMHO. I totally get the 
> thinking about avoiding the term default for the reasons you provided but 
> giving users a starting point does not necessarily mean we're trying to get 
> them to adopt that as their final design. Giving them a starting point must 
> take precedence over not giving them any starting point.
> 

I’ll pick on my own second job for a moment, Chef. We have an amazing single 
node deployment strategy, and we have a so-so multinode deployment strategy for 
the simple fact that the orchestration story for every configuration management 
flavor equates to a dumpster fire in the middle of a tire fire. Let me be clear 
up front: I say ‘we’ a lot, but in many cases, the ‘we’ comes down to really 
just me. Not to discredit my teammates, I sleep a _lot_ less.

I've said it in the past, but Chef consists of nothing but part-timers with much 
more pressing issues at $dayJob[0]. If the README.md doesn’t get updated, it’s 
because none of us have the time to dedicate to evangelism. We talked about 
spreading the word back when we were still having IRC meetings, but it all 
boiled down to E_NOTIME.

As time has gone on, the roles in the Chef OpenStack project have been changing 
away from facilitator and toward circus barker. It's coming down to almost begging 
people for feedback, if we can find them. What I can do is provide a means to 
get to OpenStack about 80-90% of the way, provided the consumer can grok the 
tooling, key phrase. That said, we don’t teach people to use Chef, merely how 
one might OpenStack with it should they choose to kick the tires. The problem 
is, those potential downstream consumers, for some reason or other, don’t file 
bugs or even communicate back with the maintainers to get an idea if their 
problem would/could be addressed. They just move on, sight unseen and a bit 
grumpier. I can’t change that by doing more work.

If I shift gears to working on an installation method abstracted behind a GUI, 
am I now expected to bring in bits of Xorg simply so I can run that installer 
from my remote systems? Are your security people okay with Xorg on servers? 
Will the bootstrapping now take place entirely from a laptop/workstation, 
outright ignoring existing development workflows and pipelines? Who’s writing 
this code? Is there a GitHub repo where I can start testing this pièce de 
résistance?

If you’ll excuse the morning snark and “poisonous” words, as you put it a few 
days ago, I don’t necessarily see how bundling the install process into a 
graphical installer would help. If anything, it might prove more distraction 
than it’s worth because now there have to be graphical installer experts within 
whatever team(s) may be doing this effort.

Maybe it’s because I’ve been using Chef, the tool, for as long as I have, but 
it isn’t exactly a mash of random, disparate tooling that we’re using over 
here. We use community-standard tooling bundled in the ChefDK for the basic 
building blocks, even to our detriment at times. For integration testing, we 
used chef-provisioning until it rotted away, now being replaced by test-kitchen 
and InSpec. If anything, we were the ones lagging behind because we number so 
few and are beholden to E_NOTIME. Is there a knowledge barrier to entry? Sure 
is, and you do have to be this tall to ride. Those that do find the IRC channel 
and stick around long enough for one of us to respond generally get the 
assistance they need, but we’re not omnipresent.

As an operator in the deployment space, my whole point of contributing back is 
to make things less c

Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-25 Thread Samuel Cassiba

> On Sep 25, 2017, at 16:52, Clint Byrum  wrote:
> 
> Excerpts from Jonathan D. Proulx's message of 2017-09-25 11:18:51 -0400:
>> On Sat, Sep 23, 2017 at 12:05:38AM -0700, Adam Lawson wrote:
>> 
>> :Lastly, I do think GUI's make deployments easier and because of that, I
>> :feel they're critical. There is more than one vendor who's built and
>> :distributes a free GUI to ease OpenStack deployment and management. That's
>> :a good start but those are the opinions of a specific vendor - not the OS
>> :community. I have always been a big believer in a default cloud
>> :configuration to ease the shock of having so many options for everything. I
>> :have a feeling however our commercial community will struggle with
>> :accepting any method/project other than their own as being part of a default
>> :config. That will be a tough one to crack.
>> 
>> Different people have different needs, so this is not meant to
>> contradict Adam.
>> 
>> But :)
>> 
>> Any unique deployment tool would be of no value to me as OpenStack (or
>> any other infrastructure component) needs to fit into my environment.
>> I'm not going to adopt something new that requires a new parallel
>> management tool to what I use.
>> 
> 
> You already have that if you run OpenStack.
> 
> The majority of development testing and gate testing happens via
> Devstack. A parallel management tool to what most people use to actually
> operate OpenStack.
> 
>> I think focusing on the existing configuration management projects is
>> the way to go. Getting Ansible/Puppet/Chef/etc. to support a well-known
>> set of "constellations" in an opinionated way would make deployment
>> easy (for most people who are using one of those already) and,
>> assuming the opinions are the same :) make consumption easier as
>> well.
>> 
>> As an example when I started using OpenStack (Essex) we had recently
>> switched to Ubuntu as our Linux platform and Puppet as our config
>> management. Ubuntu had a "one click MAAS install of OpenStack" which
>> was impossible as it made all sorts of assumptions about our
>> environment and wanted control of most of them so it could provide a
>> full deployment solution.  Puppet had a good integrated example config
>> where I plugged in some local choices and used existing deploy
>> methodologies.
>> 
>> I fought with MAAS's "simple" install for a week.  When I gave up and
>> went with Puppet I had live users on a substantial (for the time)
>> cloud in less than 2 days.
>> 
>> I don't think this has to do with the relative value of MAAS and
>> Puppet at the time, but rather what fit my existing deploy workflows.
>> 
>> Supporting multiple config tools may not be simple from an upstream
>> perspective, but we do already have these projects and it is simpler
>> to consume for brown field deployers at least.
>> 
> 
> I don't think anybody is saying we would slam the door in the face of
> people who use any one set of tools.
> 
> But rather, we'd start promoting and using a single solution for the bulk
> of community efforts. Right now we do that with devstack as a reference
> implementation that nobody should use for anything but dev/test. But
> it would seem like a good idea for us to promote a tool for going live
> as well.

Except by that very statement, you slam the door in the face of tons of existing
knowledge within organizations. This slope has a sheer face.

Promoting a single solution would do as much harm as it would good, for all it’s
worth. In such a scenario, the most advocated method would become the only
understood method, in spite of all other deployment efforts. Each project that
did not have the most mindshare would become more irrelevant than they are now
and further slip into decay. For those that did not have the fortune or
foresight to land on this hypothetical winning side, what for their efforts,
evolve or gtfo?

I'm not saying Fuel or Salt or Chef or Puppet or Ansible needs to be the
'winner', because there isn't a competition, at least in my opinion. The way I
see it, we're all working to get to the same place. Our downstream consumers
don’t really care how that happens in the grand scheme, only that it does.

> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-13 Thread Samuel Cassiba
s monolithic architecture)

No, please, just... no. A monolithic architecture is fine for dev, but
it falls apart prematurely in the lifecycle when you throw the spurs
to it.

> 3.2) Somehow deal with defragmentation of resources e.g. VM Volumes and
> Networks data which is heavily connected.

That's for the implementation phase, not development. You can put
volume storage and VMs on the same machine, if you want/need to do so.
This smells like... another use case!

>
>
> 4) Don't be afraid to break things
> Maybe it's time for OpenStack 2:

Blue polka dots with green stripes! With a racing stripe! And a
whipped pony on top.

>
> In any case most people provide an API on top of OpenStack for usage
> In any case there is no standard and easy way to upgrade
>
> So basically we are not losing anything even if we make changes that are not
> backward compatible and completely rethink the architecture and API.

Quis custodiet ipsos custodes? Who ensures the usage APIs align with
the service APIs, and the service APIs with the architecture? What happens when one
group responsible for one API doesn't talk to the other because their
employers changed directions? I'm not convinced an "incremental" all
the things approach can benefit anyone, particularly one that demands
more of people.

>
>
> I know this sounds like science fiction, but I believe community will
> appreciate steps in this direction...

I'm going to invoke PHK here and show my roots: *ahem* Quality happens
only when someone is responsible for it. A dramatic sweeping change
from one extreme to the other is just being along for the ride when
the pendulum swings. It's not time to throw in the towel on OpenStack
quite yet. We're all looking for an agreeable positive outcome that
will benefit all of our employers and their customers, but it doesn't
work to profess a Grand Unified Way when there needn't necessarily be
one. I thought there needed to be one, on my flight back from Boston.

Then I ran for PTL a third cycle. :)

>
>
> Best regards,
> Boris Pavlovic
>
> On Tue, Sep 12, 2017 at 2:33 PM, Mike Perez <thin...@gmail.com> wrote:
>>
>> Hey all,
>>
>> The session is over. I’m hanging near registration if anyone wants to
>> discuss things. Shout out to John for coming by on discussions with
>> simplifying dependencies. I welcome more packagers to join the
>> discussion.
>>
>> https://etherpad.openstack.org/p/simplifying-os
>>
>> —
>> Mike Perez
>>
>>
>> On September 12, 2017 at 11:45:05, Mike Perez (thin...@gmail.com) wrote:
>> > Hey all,
>> >
>> > Back in a joint meeting with the TC, UC, Foundation and The Board it was
>> > decided as an area
>> > of OpenStack to focus was Simplifying OpenStack. This intentionally was
>> > very broad
>> > so the community can kick start the conversation and help tackle some
>> > broad feedback
>> > we get.
>> >
>> > Unfortunately yesterday there was a low turn out in the Simplification
>> > room. A group
>> > of people from the Swift team, Kevin Fox and Swimingly were nice enough
>> > to start the conversation
>> > and give some feedback. You can see our initial ether pad work here:
>> >
>> > https://etherpad.openstack.org/p/simplifying-os
>> >
>> > There are efforts happening everyday helping with this goal, and our
>> > team has made some
>> > documented improvements that can be found in our report to the board
>> > within the ether
>> > pad. I would like to take a step back with this opportunity to have in
>> > person discussions
>> > for us to identify what are the area of simplifying that are worthwhile.
>> > I’m taking a break
>> > from the room at the moment for lunch, but I encourage people at 13:30
>> > local time to meet
>> > at the simplification room level b in the big thompson room. Thank you!
>> >
>> > —
>> > Mike Perez
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] stable/ocata has been released

2017-08-18 Thread Samuel Cassiba
Ohai!

We released the stable/ocata branch of the cookbooks today. For this
release, we included updates to the following repos:

cookbook-openstack-block-storage (Cinder)
cookbook-openstack-common (shared configuration and libraries)
cookbook-openstack-compute (Nova)
cookbook-openstack-dashboard (Horizon)
cookbook-openstack-identity (Keystone)
cookbook-openstack-image (Glance)
cookbook-openstack-integration-test (Tempest)
cookbook-openstack-network (Neutron)
cookbook-openstack-ops-database (shared database (MySQL/MariaDB))
cookbook-openstack-ops-messaging (shared MQ service)
cookbook-openstack-orchestration (Heat)
cookbook-openstack-telemetry (Ceilometer and Gnocchi)
openstack-chef-repo (a functioning example monolithic repo combining
all cookbooks and deployment-specific configuration)

Regrettably, we still were not able to offer a release of the Murano
cookbook (cookbook-openstack-application-catalog) due to timing in
stabilizing the core cookbooks and our testing frameworks. We hope to
be able to carve out some time between stabilizing Pike and planning
for Queens.

A big thank you goes out to everyone who pitched in, no matter how
small. Even the tactical contributions have been beneficial.

For any issues, or just to say hi, feel free to drop by
#openstack-chef and idle. Onward to Pike!

-- 
Best,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Proposing Lance Albertson (Ramareth) for openstack-chef-core

2017-08-11 Thread Samuel Cassiba
On Tue, Aug 8, 2017 at 7:24 PM, Samuel Cassiba <s...@cassiba.com> wrote:
> Ohai!
>
> In openstack-chef-land, we generally don't use the ML very often,
> being so few; thus, when there's an occasion to send something, it
> better be worth it.
>
> I am proposing adding Lance Albertson (otherwise known as Ramareth on
> IRC and most other places I frequent) to openstack-chef-core. In
> another life, he is the Director of the OSU Open Source Lab.
>
> That would bring our core count up to four, but it's just a mirage.
> None of us can dedicate our full, or even quarter time to this
> project, which already requires a certain level of exposure to not
> only the nuances of Chef and Ruby, but the quirks of OpenStack and
> Python. However, we do what we can, when we can, how we can.
>
> Any feedback is welcome. If there are no reasons otherwise, Lance will
> be added to the core group in a few days.
>
> --
> Best,
> Samuel Cassiba


After discussing on IRC, we have a unanimous approval. By the power of
greyskull, it is so. Welcome!

-- 
Best,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Proposing Lance Albertson (Ramareth) for openstack-chef-core

2017-08-08 Thread Samuel Cassiba
Ohai!

In openstack-chef-land, we generally don't use the ML very often,
being so few; thus, when there's an occasion to send something, it
better be worth it.

I am proposing adding Lance Albertson (otherwise known as Ramareth on
IRC and most other places I frequent) to openstack-chef-core. In
another life, he is the Director of the OSU Open Source Lab.

That would bring our core count up to four, but it's just a mirage.
None of us can dedicate our full, or even quarter time to this
project, which already requires a certain level of exposure to not
only the nuances of Chef and Ruby, but the quirks of OpenStack and
Python. However, we do what we can, when we can, how we can.

Any feedback is welcome. If there are no reasons otherwise, Lance will
be added to the core group in a few days.

-- 
Best,
Samuel Cassiba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-14 Thread Samuel Cassiba
On Jul 14, 2017, at 14:10, Ed Leafe <e...@leafe.com> wrote:
> 
> On Jul 14, 2017, at 2:17 PM, Zane Bitter <zbit...@redhat.com> wrote:
> 
>> * The pool of OpenStack developers is a fixed resource, and if we make it 
>> clear that some projects are unwelcome then their developers will be 
>> reassigned to 'core' projects in a completely zero-sum process. (Nnope.)
> 
> Yeah, I’ve heard this many times, and always shake my head. If I want to work 
> on X, and X is not in OpenStack governance, I’m going to work on that anyway 
> because I need it. Or maybe on a similar project. I’m going to scratch my 
> itch.
> 
>> * While code like e.g. the Nova scheduler might be so complicated today that 
>> even the experts routinely complain about its terrible design,[1] if only we 
>> could add dozens more cooks (see above) it would definitely get much simpler 
>> and easier to maintain. (Bwahahahahahahaha.)
> 
> No, they need to appoint me as the Scheduler Overlord with the power to smite 
> all those who propose complicated code!
> 
>> * Once we make it clear to users that under no circumstances will we ever 
>> e.g. provide them with notifications about when a server has failed, ways to 
>> orchestrate a replacement, and an API to update DNS to point to the new one, 
>> then they will finally stop demanding bloat-inducing VMWare/oVirt-style 
>> features that enable them to treat cloud servers like pets. (I. don't. even.)
> 
> Again, itches will be scratched. What I think is more important is a 
> marketing issue, not a technical one. When I think of what it means to be a 
> “core” project, I think of things that people looking to “get cloudy” would 
> likely want. It isn’t until you start using a cloud that the additional 
> projects you mention become important. So simplifying what is presented to 
> the cloud market is a good thing, as it won’t confuse people as to what 
> OpenStack is. But that doesn’t require any of the other projects be stopped 
> or in any way discouraged.
> 
> -- Ed Leafe
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Chiming in from the believed-to-be-dead Chef project, I work on it because it 
scratches my itch. I served as PTL because it did and does scratch my itch. 
Working on it in any capacity that moves things forward continues to scratch 
that itch. We have less of a technical problem, not to downplay our tech debt, 
as we’re still pushing patches and shuffling reviews. However, we have a huge 
perception problem and equally large marketing problem, which is apparently an 
unwritten side job of being a PTL. We didn’t get that memo until the Big Tent 
was deemed too smothering. The fun part about being a PTL with effectively no 
team is that, when you or your counterpart isn’t actively marketing and 
spending more time making noise than working, people call you dead to your 
face. Even when you spend the time and money to go to marketing events.

--
Best,

Samuel Cassiba



signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] EXT: [octavia] scheduling webex to discuss flavor spec (https://review.openstack.org/#/c/392485/)

2017-06-26 Thread Samuel Bercovici
Carlos,

We are in Israel and would like the meeting to happen at a more favorable time 
to us, for example 9:00AM CDT.

-Sam.


From: Carlos Puga [mailto:carlos.p...@walmart.com]
Sent: Monday, June 26, 2017 7:04 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] EXT: [octavia] scheduling webex to discuss flavor spec 
(https://review.openstack.org/#/c/392485/)

Octavia Team,

As per our last octavia meeting, I'm scheduling a webex so that we may talk 
through the team's preference on the design of the flavor spec.  I'd like to 
propose we meet up on Thursday at 3pm CDT.  If this day/time doesn't work for most 
please let me know and I can change it to best accommodate as many as possible.

Thank you,
Carlos Puga



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Samuel Cassiba
ject pattern is an anti-pattern to
OpenStack’s use case and we’re now getting to the point of criticality. Past
iterations of the Chef method include using git submodules and the GitHub
workflow, as well as One Repo To Rule Them All. They’re in the past, gone and
left to the ages. Those didn’t work because they tried to be too opinionated,
or too clever, without looking at the user experience.

While I agree that the repo creep is real, there has to be a balance. The Chef
method to OpenStack has been around for a long time in both Chef and OpenStack
terms, and has generally followed the same pattern of one repo per subproject.
We still have users[1], most of whom have adopted this pattern and have been in
production, some for years, myself included. What, I ask, happens to their
future if Chef were to shake things up and pivot to a One Repo To Rule Them All
model? Not everyone can pivot, and some would be effectively left to rot with
what would now be considered tech debt by those closer to upstream. “If it
ain’t broke, don’t fix it” is still a strong force to contend with, whether we
like it or not. Providing smooth, clear paths to a production-grade open cloud
should be the aim, not what the definition of is, is, even if that is what
comes naturally to groups of highly skilled, highly technical people.

> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[1]: https://www.openstack.org/assets/survey/April2017SurveyReport.pdf (Pg. 42)


--
Best,

Samuel Cassiba



signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-16 Thread Samuel Cassiba
’t easily change the universe to 
turn their $10MM+ production clouds on a dime. Even at the Boston Summit, there 
were whispers of some people still using Chef. Chef hasn’t “effectively died”, 
just become way less shiny, boring even, without marketing and a strong team 
advocating for it. Downstream Chef users seem to be happy to maintain forks and 
wrappers, so long as there is an upstream to track when it’s time to jump to 
the next release. Downstream cares that things work and that they don’t give 
sudden surprises. Slashing the ecosystem, or whatever label you want to give it 
today, is a huge surprise to those who don’t have time track openstack-dev or 
master branches.

> 
> But the deployment and packaging space will always (IMHO) be the domain of 
> the Next Shiny Thing.
> 
> Witness containers vs. VMs (as deployment targets)
> 
> Witness OS packages vs. virtualenv/pip installs vs. application container 
> images.
> 
> Witness Pacemaker/OCF resource agents vs. an orchestrated level-based 
> convergence system like k8s or New Heat.
> 
> Witness LTS releases vs. A/B deployments vs. continuous delivery.
> 
> Witness PostgreSQL vs. MySQL vs. NoSQL vs. NewSQL.
> 
> Witness message queue brokers vs. 0mq vs. etcd-as-system-bus.
> 
> As new tools, whether fads or long-lasting, come and go, so do deployment 
> strategies and tooling. I'm afraid this won't change any time soon :)

Not unless things get way more delightful. In my infrastructure, I like for 
things to Just Work and to not have to shake up the landscape every major 
release, especially at the current release cadences, and I’d be surprised if 
you found someone who didn’t share that sentiment. The user survey says there 
are still downstream users of all of the deployment projects, even the one 
currently being shown the door; and now those downstream Fuel users are out in 
the cold without an obvious path forward, save for “lol deal with it". From my 
point of view, we mustn’t lose sight of that just because a given project 
doesn’t have enough throughput, or enough developers, just because a vendor 
moves on to something newer and shinier.

> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best,

Samuel Cassiba




signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-04-13 Thread Samuel Cassiba
Hi Tony,

I’m not the PTL, but I asked in our channel. You can EOL Chef OpenStack as 
well. Thanks!

--
Best,

Samuel Cassiba


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Samuel Cassiba

> On Feb 15, 2017, at 08:49, Alex Schultz <aschu...@redhat.com> wrote:
> 
> On Wed, Feb 15, 2017 at 9:02 AM, Samuel Cassiba <s...@cassiba.com> wrote:
>>> On Feb 15, 2017, at 02:07, Thierry Carrez <thie...@openstack.org> wrote:
>>> 
>>> Samuel Cassiba wrote:
>>>> [...]
>>>> *TL;DR* if you don't want to keep going -
>>>> OpenStack-Chef is not in a good place and is not sustainable.
>>>> [...]
>>> 
>>> Thanks for sharing, Sam.
>> 
>> Thanks for taking the time to read and respond. This was as hard to write as 
>> it was to read. As time went on, it became apparent that this retrospective 
>> needed to exist. It was not written lightly, and does not aim to point 
>> fingers.
>> 
>>> I think that part of the reasons for the situation is that we grew the
>>> number of options for deploying OpenStack. We originally only had Puppet
>>> and Chef, but now there is Ansible, Juju, and the various
>>> Kolla-consuming container-oriented approaches. There is a gravitational
>>> attraction effect at play (more users -> more contributors) which
>>> currently benefits Puppet, Ansible and Kolla, to the expense of
>>> less-popular community-driven efforts like OpenStackChef and
>>> OpenStackSalt. I expect this effect to continue. I have mixed feelings
>>> about it: on one hand it reduces available technical options, but on the
>>> other it allows to focus and raise quality…
>> 
>> You have a very valid point. One need only look at the trends over the 
>> cycles in the User Survey to see this shift in most places. Ansible wins due 
>> to sheer simplicity for new deployments, but there are also real business 
>> decisions that go behind automation flavors at certain business sizes. This 
>> leaves them effectively married to whichever flavor chosen. That shift 
>> impacts Puppet's overall user base, as well, though they had and still have 
>> the luxury of maintaining sponsored support at higher numbers.
> 
> To chime in on the Puppet side, we've seen a decrease in contributors
> over the last several cycles and I have a feeling we'll be in the same
> boat in the near future.  The amount of modules that we have to try
> and manage versus the amount of folks that we have contributing is
> getting to an unmanageable state.  I believe the only way we've gotten
> to where we have been is due to the use within Fuel and TripleO.  As
> those projects evolve, it directly impacts the ability for the Puppet
> modules to remain relevant.  Some could argue that's just the way it
> goes and technologies evolve which is true.  But it's also a loss for
> many of the newer methods as they are losing all of the historical
> knowledge and understanding that went with it and why some patterns
> work better than others.  The software wheel, it's getting reinvented
> every day.

Thank you for your perspective from the Puppet side. The Survey data alone
paints a certain narrative, and not one I think people want. If OpenStack
deployment choice is down to a popularity contest, the direct result is
fewer avenues back into OpenStack.

Fewer people will think to pick OpenStack as a viable option if it simply
doesn't support their design, which means less exposure for non-core
projects, less feedback for core projects, rinse, repeat. Developers can and
would coalesce around but a couple of the most popular options, which works
if that’s the way things are intending to go. With that, the OpenStack story
starts to tell less like an ecosystem and more like a distro, bordering on
echo chamber. I don't think anyone signed up for that. On the other hand,
fewer deployment options allow for more singular focus. Without all that
choice clouding decision-making, one has no way to OpenStack but those few
methods that everyone uses.

>> Chef's sponsored support has numbered far fewer. It casts an extremely 
>> negative image on OpenStack when someone looks for help at odd hours, or 
>> asks something somewhere that none of us have time to track. The answer to 
>> that is the point of making noise, to generate conversation about avenues 
>> and solutions. I could have kept my fingers aiming at LP, Gerrit and IRC in 
>> an attempt to bury my head in the sand. We're way past the point of denial, 
>> perhaps too far, but as long as the results of the User Survey show Chef, 
>> there are still users to support, for now. Operators and deployers will be 
>> looking to the source of truth, wherever that is, and right now that source 
>> of truth is OpenStack.
>> 
>>> There is one question I wanted to ask you in terms of community. We
>>> maintain in Open

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Samuel Cassiba

> On Feb 15, 2017, at 02:07, Thierry Carrez <thie...@openstack.org> wrote:
> 
> Samuel Cassiba wrote:
>> [...]
>> *TL;DR* if you don't want to keep going -
>> OpenStack-Chef is not in a good place and is not sustainable.
>> [...]
> 
> Thanks for sharing, Sam.
> 

Thanks for taking the time to read and respond. This was as hard to write as it 
was to read. As time went on, it became apparent that this retrospective needed 
to exist. It was not written lightly, and does not aim to point fingers.

> I think that part of the reasons for the situation is that we grew the
> number of options for deploying OpenStack. We originally only had Puppet
> and Chef, but now there is Ansible, Juju, and the various
> Kolla-consuming container-oriented approaches. There is a gravitational
> attraction effect at play (more users -> more contributors) which
> currently benefits Puppet, Ansible and Kolla, to the expense of
> less-popular community-driven efforts like OpenStackChef and
> OpenStackSalt. I expect this effect to continue. I have mixed feelings
> about it: on one hand it reduces available technical options, but on the
> other it allows to focus and raise quality…

You have a very valid point. One need only look at the trends over the cycles 
in the User Survey to see this shift in most places. Ansible wins due to sheer 
simplicity for new deployments, but there are also real business decisions that 
go behind automation flavors at certain business sizes. This leaves them 
effectively married to whichever flavor chosen. That shift impacts Puppet's 
overall user base, as well, though they had and still have the luxury of 
maintaining sponsored support at higher numbers.

Chef's sponsored support has numbered far fewer. It casts an extremely negative 
image on OpenStack when someone looks for help at odd hours, or asks something 
somewhere that none of us have time to track. The answer to that is the point 
of making noise, to generate conversation about avenues and solutions. I could 
have kept my fingers aiming at LP, Gerrit and IRC in an attempt to bury my head 
in the sand. We're way past the point of denial, perhaps too far, but as long 
as the results of the User Survey show Chef, there are still users to support, 
for now. Operators and deployers will be looking to the source of truth, 
wherever that is, and right now that source of truth is OpenStack.

> 
> There is one question I wanted to ask you in terms of community. We
> maintain in OpenStack a number of efforts that bridge two communities,
> and where the project could set up its infrastructure / governance in
> one or the other. In the case of OpenStackChef, you could have set up
> shop on the Chef community side, rather than on the OpenStack community
> side. Would you say that living on the OpenStack community side helped
> you or hurt you ? Did you get enough help / visibility to balance the
> constraints ? Do you think you would have been more, less or equally
> successful if you had set up shop more on the Chef community side ?
> 

We set up under Stackforge, later OpenStack, because the cookbooks evolved 
alongside OpenStack, as far back as 2011, before my time in the cookbooks. The 
earliest commits on the now EOL Grizzly branch were quite enlightening, if only 
Stackalytics had the visuals. Maybe I'm biased, but that's worth something.

You're absolutely correct that we could have pushed more to set up the Chef 
side of things, and in fact we made several concerted efforts to integrate into 
the Chef community, up to and including having sponsored contributors, even a 
PTL. When exploring the Chef side, we found that we faced as much or more 
friction with the ecosystem, requiring more fundamental changes than we could 
influence. Chef (the ecosystem) has many great things, but Chef doesn't 
OpenStack. Maybe that was the writing on the wall.

I keep one foot in both Chef and OpenStack, to keep myself as informed as time 
allows me. It's clear that even Chef's long-term cookbook support community is 
ill equipped to handle OpenStack. The problem? We're too complex and too far 
integrated, and none of them know OpenStack. Where does that leave us?

--
Best,

Samuel Cassiba

> --
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-14 Thread Samuel Cassiba
The HTML version is here:
https://s.cassiba.com/2017/02/14/making-the-kitchen-great-again-a-retrospective-on-openstack-chef
 


This was influenced by Graham Hayes' State of the Project for Designate:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/ 


I have been asked recently "what is going on with the OpenStack-Chef project?",
"how is the state of the cookbooks?", and "hey sc, how are those integration
tests coming?". Having been the PTL for the Newton and Ocata cycles, yet
having not shipped a release, is the unthinkable, and deserves at least a
sentence or two.

It goes without saying, this is disheartening and depressing to me and
everybody that has devoted their time to making the cookbooks a solid
and viable method for deploying OpenStack. OpenStack-Chef is among the
oldest[1] and most mature solutions for deploying OpenStack, though it is
not the most feature-rich.


*TL;DR* if you don't want to keep going -
OpenStack-Chef is not in a good place and is not sustainable.


OpenStack-Chef has always been a small project with a big responsibility.
The Chef approach to OpenStack historically has required a level of
investment within the Chef ecosystem, which is a hard enough sell when you
started out with Puppet or Ansible. Despite the unicorns and rainbows of
being Chef cookbooks, OpenStack-Chef always asserted itself as an OpenStack
project first, up to and including joining the Big Tent, whatever it takes.
To beat that drum, we are OpenStack.

There is no *cool* factor from deploying and managing OpenStack using Chef,
unless you've been running Chef, because insert Xzibit meme here and jokes
about turtles. Unless you break something with automation, then it's
applause or facepalm. Usually both. At the same time.

As with any kitchen, it must be stocked and well maintained, and
OpenStack-Chef is no exception. Starting out, there was a vibrant community
producing organic, free-range code. Automation is invisible, assumed to be
there in the background. Once it's in place, it isn't touched again unless
it breaks. Upgrades in complex deployments can be fraught with error, even
in an automated fashion.

As has been seen in previous surveys[2], once an OpenStack release has been chosen
by an operator, some tend to not upgrade for the next cycle or three, to get
the immediate bugs worked out. Though there are now multinode and upgrade
scenarios supported with the Puppet OpenStack and TripleO projects, they do
not use Chef, so Chef deployers do not directly benefit from any of this
testing.

Being a deployment project, we are responsible for not one aspect of
the OpenStack project but as many as can be reasonably supported.

We were very fortunate in the beginning, having support from public cloud
providers, as well as large private cloud providers. Stackalytics shows a
vibrant history, a veritable who's-who of OpenStack contributors, too many to
name. They've all moved on, working on other things.

As a previous PTL for the project once joked, the Chef approach to OpenStack
was the "other deployment tool that nobody uses". As time has gone by, that has
become more of a true statement.

There are a few of us still cooking away, creating new recipes and cookbooks. 
The
pilot lights are still lit and there's usually something simmering away on the
back burner, but there is no shouting of orders, and not every dish gets tasted.
We think there might be rats, too, but we're too shorthanded to maintain the 
traps.

We have yet to see many (meaningful) contributions from the community, however.
We have some amazing deployers that file bugs, and if they can, push up a patch.
It delights me when someone other than a core weighs in on a review. They are
highly appreciated and incredibly valuable, but they are very tactical
contributions. A project cannot live on such contributions.

October 2015

  https://s.cassiba.com/images/oct-2015-deployment-decisions.png 


Where does that leave OpenStack-Chef? Let's take a look at the numbers:

+----------+---------+
| Cycle    | Commits |
+----------+---------+
| Havana   | 557     |
| Icehouse | 692     |
| Juno     | 424     |
| Kilo     | 474     |
| Liberty  | 259     |
| Mitaka   | 85      |
| Newton   | 112     |
| Ocata    | 78      |
+----------+---------+

As of the time of this writing, Newton has not yet branched. Yes, you read
correctly. This means the Ocata cycle has gone to ensuring that Newton *just

[openstack-dev] [keystone] PTL candidacy

2017-01-24 Thread Samuel de Medeiros Queiroz
Hello everyone!

I have been involved in OpenStack since late 2013 and now it is time
to put my name forward as a candidate for keystone PTL during the Pike
development cycle. See my openstack/election change in [1].


As your PTL, I would like to see our team's focuses classified into
three categories, as enumerated and detailed below:

1. Community: keeping keystone a great place to contribute

As a team, we need to ensure our project is always a welcoming place
to new contributors.

I would like to encourage community members who have more experience
in the project to take leadership roles, such as mentoring GSoC and
Outreachy programs.

Regarding those programs, it would be great to elaborate a list of
projects we consider to be interesting and that can be scoped in an
internship.

In addition, I would like to keep identifying and rewarding community
members who have been doing an outstanding job, as this helps keep
them motivated and aware of how great and important they are
to our community.

2. Features and testing: establishing a consistent roadmap

In terms of features, I would like to see our team keep hardening
some features that have landed in Ocata, such as PCI-DSS and federated
auto-provisioning.

Some others would be targeted to Pike, including continuing to work on
the solution for the issue with long-running operations and
token expiry, and improvements in the policy mechanisms.

In terms of testing, the team has been doing a great job. Some
functional tests have been added in Ocata. I would like to see our
team to continue improving our tests in order to make sure we continue
to deliver high quality code to our users.

3. Docs: revisiting and ensuring consistency and completeness

I would like to keep revisiting our developer docs in order to make
sure new contributors have a smooth experience when onboarding.

Furthermore, I would like to note that there is no point in having
code that behaves correctly if we do not teach our users how to use
our service.

With that said, in order to keep improving usability, I would like to
see the team making sure the docs are accurate and complete for our
API consumers and deployers.

There are a few ideas on how to improve docs out there, such as
api-guide docs, which main goal is to conceptually explain the service.

--

I will happily discuss those goals with the team during our first PTG
in Atlanta. I am looking forward to seeing you there!

Thank you,
Samuel de Medeiros Queiroz - samueldmq


[1] https://review.openstack.org/#/c/424239/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging liberty as EOL

2016-12-19 Thread Samuel Cassiba
> 
> On Dec 19, 2016, at 14:31, Tony Breeds <t...@bakeyournoodle.com> wrote:
> 
> On Mon, Dec 19, 2016 at 09:18:20AM -0800, Samuel Cassiba wrote:
> 
>> The Chef OpenStack cookbooks team is way late to the party. The cookbooks
>> (openstack/cookbook-openstack-*, openstack/openstack-chef-repo ) should have
>> had their liberty branches EOL'd. I have checked and no open reviews exist
>> against liberty.
> 
> Thanks.
> 
> While I have your attention what about your older branches.  Is there any
> mertit keeping the kilo branches in openstack/cookbook-openstack-* or the
> stable/{grizzly,havana,icehouse,juno} branches in 
> openstack/openstack-chef-repo
> 

No, no merit in keeping older branches around. They can be rightfully EOL'd as 
well.

Many thanks!

> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging liberty as EOL

2016-12-19 Thread Samuel Cassiba
On Nov 21, 2016, at 18:35, Tony Breeds <t...@bakeyournoodle.com> wrote:
> 
> Hi all,
>I'm late in sending this announement, but I'm glad to see several projects
> have already requested EOL releases to make it trivial and obvious where to
> apply the tag.
> 
> I'm proposing to EOL all projects that meet one or more of the following
> criteria:
> 
> - The project is openstack-dev/devstack, openstack-dev/grenade or
>  openstack/requirements
> - The project has the 'check-requirements' job listed as a template in
>  project-config:zuul/layout.yaml
> - The project gates with either devstack or grenade jobs
> - The project is listed in governance:reference/projects.yaml and is tagged
>  with 'stable:follows-policy'.
> 
> 
> Some statistics:
> All Repos  : 1493 (covered in zuul/layout.yaml)
> Checked Repos  :  406 (match one or more of the above 
> criteria)
> Repos with liberty branches:  305
> EOL Repos  :  171 (repos that match the criteria *and* 
> have
>   a liberty branch) [1]
> NOT EOL Repos  :  134 (repos with a liberty branch but
>   otherwise do not match) [2]
> DSVM Repos (staying)   :   68 (repos that use dsvm but don't have
>   liberty branches)
> Open Reviews   :   94 (reviews to close)
> 
> 
> Please look over both lists by 2016-11-27 00:00 UTC and let me know if:
> - A project is in list 1 and *really* *really* wants to opt *OUT* of EOLing 
> and
>  why.  Note doing this will amost certainly reduce the testing coverage you
>  have in the gate.
> - A project is in list 2 that would like to opt *IN* to tagging/EOLing
> 
> Any projects that will be EOL'd will need all open reviews abandoned before it
> can be processed.  I'm very happy to do this, or if I don't have permissios to
> do it ask a gerrit admin to do it.
> 
> I'll batch the removal of the stable/liberty branches between Nov 28th and Dec
> 3rd (UTC+1100).  Then during Decemeber I'll attempt to cleanup 
> zuul/layout.yaml
> to remove liberty exclusions and jobs.
> 
> Yours Tony.
> 
> [1] 
> https://gist.github.com/tbreeds/93cd346c37aa46269456f56649f0a4ac#file-liberty_eol_data-txt-L1
> [2] 
> https://gist.github.com/tbreeds/93cd346c37aa46269456f56649f0a4ac#file-liberty_eol_data-txt-L181
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The Chef OpenStack cookbooks team is way late to the party. The cookbooks 
(openstack/cookbook-openstack-*, openstack/openstack-chef-repo ) should have 
had their liberty branches EOL'd. I have checked and no open reviews exist 
against liberty.

--
Best,

Samuel Cassiba


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] Video Meetings - input requested

2016-12-13 Thread Samuel Cassiba

> On Dec 13, 2016, at 19:43, Ed Leafe  wrote:
> 
> On Dec 12, 2016, at 10:30 PM, Steven Dake (stdake)  wrote:
> 
>> The issue raised is they violate the 4 opens.
> 
> Not necessarily. If you have regular planning meetings and discussions in an 
> open manner as prescribed, an occasional conference to discuss a particular 
> matter is not a "violation". What if someone in your office is working on 
> OpenStack too, and you meet in the hallway and discuss something technical? 
> Does that violate the 4 Opens?
> 
> I think we have to balance realism with idealism.

It wouldn't be the first time video chats were shot down. As I recall, one of 
the conditions for the OpenStack Chef cookbooks to become an official OpenStack 
project was that we gave up our weekly Hangouts meetings in favor of weekly IRC 
meetings. As it was, when the cookbooks were still considered StackForge, links 
were sent out to the mailing list and channel prior to the meeting starting, to 
give people a time to get coffee, comb their hair and put on a shirt (pants 
optional).

Today, we do not hold weekly meetings as the cores are either west coast US or 
Europe, so pretty much every time is bad, as we have minimal overlap. It used 
to be pretty easy to point at a video call and say "I'm doing that right 
there". Not so much to get an hour dedicated to IRC, because of the very nature 
of IRC, so we lost folks to the winds of change. At some point in the Newton 
cycle, we did not see much value in holding weekly IRC meetings, as we were 
just echoing what we said in our dedicated channel, so we gave up our scheduled 
slot. From the founding team, only two members remain. To date, one core has 
joined, bringing us up to three, down from eight, spread across two continents. 
The picture I paint is not good eats.

As PTL and direct consumer of the output of the cookbooks, I feel that 
eliminating the option to hold our meetings via video chat was a detrimental 
blow to the project's trajectory, as a result of becoming an OpenStack project. 
Given the cookbooks' complexity and the ability to get shit done that came from 
having that virtual face-to-face time, it made sense to sit down and "uhm" and 
"hrm" about things with a like-minded individual, obligatory link in the 
channel for those playing along on IRC.

Since giving up Hangouts, we have had minimal auditory/visual interaction in 
the effort of "transparency" and being "open" on IRC. I recall that we had 
exactly one video chat since becoming an official project, and it was immensely 
useful for the few minutes we talked, and got more across than a day's worth of 
IRC meetings. Beyond that, our face time has involved meeting up at a given 
Summit that we all happen to attend, which is entirely too long to go between 
seeing teammates IMHO. The PTG isn't of much benefit to the cookbooks, either, 
as it's a non-trivial distance and expense for all of the cores for not much 
gain, when one of us can just shift hours for a video call.

-sc

> 
> 
> -- Ed Leafe
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pike PTL

2016-12-02 Thread Samuel de Medeiros Queiroz
Hey Steve,

Thanks for all your dedication, you've been a great leader!
It's been a pleasure to serve keystone with you as PTL.

Samuel

On Tue, Nov 29, 2016 at 12:19 PM, Brad Topol <bto...@us.ibm.com> wrote:

> +1! Great job Steve
>
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
>
> From: Henry Nash <henryna...@mac.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 11/23/2016 11:08 AM
> Subject: Re: [openstack-dev] [keystone] Pike PTL
> --
>
>
>
> Steve,
>
> It's been a pleasure working with you as PTL - an excellent tenure. Enjoy
> taking some time back!
>
> Henry
>
>On 21 Nov 2016, at 19:38, Steve Martinelli <*s.martine...@gmail.com*
>   <s.martine...@gmail.com>> wrote:
>
>   one of these days i'll learn how to spell :)
>
>   On Mon, Nov 21, 2016 at 12:52 PM, Steve Martinelli <
>   *s.martine...@gmail.com* <s.martine...@gmail.com>> wrote:
>  Keystoners,
>
>  I do not intend to run for the PTL position of the Pike
>  development cycle. I'm sending this out early so I can work with 
> folks
>  interested in the role, If you intend to run for PTL in Pike and are
>  interested in learning the ropes (or just want to hear more about 
> what the
>  role means) then shoot me an email.
>
>  It's been an unforgettable ride. Being PTL a is very rewarding
>  experience, I encourage anyone interested to put your name forward. 
> I'm not
>  going away from OpenStack, I just think three terms as PTL has been 
> enough.
>  It'll be nice to have my evenings back :)
>
>  To *all* the keystone contributors (cores and non-cores), thank
>  you for all your time and commitment. More importantly thank you for
>  putting up with my many questions, pings, pokes and -1s. Each of you 
> are
>  amazing and together you make an awesome team. It has been an 
> absolute
>  pleasure to serve as PTL, thank you for letting me do so.
>
>  stevemar
>
>
>  
>
>  Thanks for the idea Lana [1]
>  [1]
>  
> *http://lists.openstack.org/pipermail/openstack-docs/2016-November/009357.html*
>  
> <http://lists.openstack.org/pipermail/openstack-docs/2016-November/009357.html>
>
>   
>   __
>   OpenStack Development Mailing List (not for usage questions)
>   Unsubscribe: *openstack-dev-requ...@lists.openstack.org*
>   <openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
>   <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] draft logo & sneak peek

2016-10-24 Thread Samuel Cassiba
Ohai Chefs,

Here is the draft of our project logo. As you may remember, we requested a 
kangaroo some time ago to resemble the Xzibit meme-like way of how we deploy 
OpenStack, and this is the draft of that decision. I have enclosed the original 
message, which includes a link for feedback.

Since neither myself nor the other cores will be at the PTG, we'll have to work 
out some way to get the final product distributed if interested.

Have a great Summit!

Best,
Samuel Cassiba
Project Team Lead, OpenStack Chef

> Begin forwarded message:
> 
> From: Heidi Joy Tretheway <heidi...@openstack.org>
> Subject: Your draft logo & sneak peek
> Date: October 21, 2016 at 10:12:54 PDT
> To: Samuel Cassiba <s...@cassiba.com>
> 
> Hi,
> 
> We're excited to show you the draft version of your project logo, attached. 
> We want to give you and your team a chance to see the mascot illustrations 
> before we make them official, so we decided to make Barcelona the draft 
> target, with final logos ready by the Project Team Gathering in Atlanta in 
> February. 
> 
> Our illustrators worked as fast as possible to draft nearly 60 logos, and 
> we're thrilled to see how they work as a family. Here's a 50-second "sneak 
> peek" at how they came together: https://youtu.be/JmMTCWyY8Y4 
> <https://youtu.be/JmMTCWyY8Y4>
> 
> We welcome you to share this logo with your team and discuss it in Barcelona. 
> We're very happy to take feedback on it if we've missed the mark. The style 
> of the logos is consistent across projects, and we did our best to 
> incorporate any special requests, such as an element of an animal that is 
> especially important, or a reference to an old logo.
> 
> We ask that you don't start using this logo now since it's a draft. Here's 
> what you can expect for the final product:
> A horizontal version of the logo, including your mascot, project name and the 
> words "An OpenStack Community project"
> A square(ish) version of the logo, including all of the above
> A mascot-only version of the logo
> Stickers for all project teams distributed at the PTG
> One piece of swag that incorporates all project mascots, such as a deck of 
> playing cards, distributed at the PTG
> All digital files will be available through the website
> 
> We know this is a busy time for you, so to take some of the burden of 
> coordinating feedback off you, we made a feedback form: 
> http://tinyurl.com/OSmascot <http://tinyurl.com/OSmascot>  You are also 
> welcome to reach out to Heidi Joy directly with questions or concerns. Please 
> provide feedback by Friday, Nov. 11, so that we can request revisions from 
> the illustrators if needed. Or, if this logo looks great, just reply to this 
> email and you don't need to take any further action.
> 
> Thank you!
> Heidi Joy Tretheway - project lead
> Todd Morey - creative lead
> 
> P.S. Here's an email that you can copy/paste to send to your team (remember 
> to attach your logo from my email):
> 
> Hi team, 
> I just received a draft version of our project logo, using the mascot we 
> selected together. A final version (and some cool swag) will be ready for us 
> before the Project Team Gathering in February. Before they make our logo 
> final, they want to be sure we're happy with our mascot. 
> 
> We can discuss any concerns in Barcelona and you can also provide direct 
> feedback to the designers: http://tinyurl.com/OSmascot 
> <http://tinyurl.com/OSmascot>  Logo feedback is due Friday, Nov. 11. To get a 
> sense of how ours stacks up to others, check out this sneak preview of 
> several dozen draft logos from our community: https://youtu.be/JmMTCWyY8Y4 
> <https://youtu.be/JmMTCWyY8Y4>
> 
>   
> Heidi Joy Tretheway
> Senior Marketing Manager, OpenStack Foundation
> 503 816 9769  | Skype: heidi.tretheway 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rpm-packaging][chef][puppet][salt][openstack-ansible][HA][tripleo][kolla][fuel] Schema proposal for config file handling for services

2016-10-11 Thread Samuel Cassiba
>>> this can lead to more issues than it's solving.
>>
>> Most services print the complete config when running in debug mode. So
>> getting th used config is not complicated. Also adding theses switches
>> makes it more explicit when doing e.g. "ps awxu|grep nova-api" because
>> you see then what Also knowing which files/dirs are used is just "ps
>> awxu|grep $service".
>> And afaik oslo.config already loads implicit config files if they are
>> present.
>>
>
> I think this assumes knowledge of how these services run. You also
> have to consider the junior sysadmin who has no idea this is how
> openstack services work and they go into /etc/$service to find a pile
> of configurations and are not quite sure how they are all loaded.  Or
> maybe the only view into the system is their configuration management
> tool so they aren't seeing this stuff.   Like i said i'm not against
> the main service.conf and a config directory.  I think those are
> beneficial, but I'm a fan of "simple is better".  It reduces the risk
> of misconfiguration and simplifies the troubleshooting.
>
> Since RDO already uses $service-dist.conf, if you start with that it'd
> be ok but I'd really prefer some very strict policy around what goes
> in there.  Nothing more than necessary to be $flavor specific.
> Probably only paths or package name related items.  Ideally leverage
> something that allows the $service.conf to include the $flavor
> specific configuration items in place so you don't have to manage your
> own templates.  Anything else should be handled by the end user and
> not managed in packaging.
>
> Thanks,
> -Alex
>

I concur. There shouldn't be any flavor-specific items in
$service-dist.conf. I would go one farther by saying this should align
as closely to upstream python as possible, platform nuances addressed,
of course.

In Chef, we lay down a very spartan $service.conf from node attributes
from their respective cookbooks. As I alluded to above, we used to
follow a path of managing $service.conf derived from an upstream
python $service.conf.sample. Each release was spent carefully combing
through the configs, making sure each value was as it should. Today,
each $service.conf is driven from a set of node attributes that wind
up giving a 15-20 line $service.conf that can be consumed and
understood by even novice deployers whether it be a deb or rpm
packaging system under the hood. It would be unfortunate to sacrifice
that flexibility just to wind up having to manage effectively two
lines of code to achieve the same end.
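
(For reference, a hedged sketch of the consumer side being discussed -- the
'demo' project name, the option, and the paths are invented for illustration:
an oslo.config-based service merges whatever --config-file and --config-dir
arguments it is started with, in order, with later values winning, and falls
back to conventional locations such as /etc/demo/demo.conf when none are
given.)

    import sys
    from oslo_config import cfg

    # Hypothetical option; real services register many more.
    opts = [cfg.StrOpt('state_path', default='/var/lib/demo',
                       help='Top-level directory for state files')]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(opts)

    if __name__ == '__main__':
        # Typical invocation:
        #   demo-api --config-file /etc/demo/demo.conf \
        #            --config-dir /etc/demo/demo.conf.d
        CONF(sys.argv[1:], project='demo')
        print(CONF.state_path)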

Best,

Samuel

>>> Personally unless this structure is also followed by the deb
>>> packaging, I'd prefer not to switch to this as it may lead to even
>>> more fragmentation when it comes to trying to configure OpenStack
>>> deployed via packages.  Has this been requested by an end user to
>>> solve a specific problem?  What exactly is the problem that's trying
>>> to be solved other than trying to allow for two of the same project's
>>> services being configured in (currently) conflicting fashions?
>>
>> See my anwers above. It's not only about different configs for different
>> service.
>>
>> Thanks for the feedback!
>>
>> Cheers,
>>
>> Tom
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Adding calbers to openstack-chef-core

2016-10-10 Thread Samuel Cassiba
Hi,

Given as there have been no objections, I have added calbers to
openstack-chef-core. With that, we reinstate a 2x +2 rule for changes.
Congrats Christoph!

Best,

-sc

On Wed, Oct 5, 2016 at 1:02 AM, j.kl...@cloudbau.de <j.kl...@cloudbau.de> wrote:
> Hi,
>
> sounds good, +1 from me.
>
> Cheers,
> Jan
>
>> On 05 Oct 2016, at 05:44, Samuel Cassiba <s...@cassiba.com> wrote:
>>
>> Ohai Chefs!
>>
>> I would like to nominate Christoph Albers (irc: calbers) for
>> openstack-chef-core.
>>
>> Christoph has consistently provided great quality reviews over the
>> Newton cycle. He has been instrumental in getting the cookbooks up to
>> speed with Identity v3 and openstackclient. During Mitaka, his reviews
>> were crucial to the refactor work that took place during that cycle.
>> From the quality of his reviews, he has a solid understanding of the
>> codebase and I think he is qualified to be a core reviewer.
>>
>> This will bring us back up to three dedicated core reviewers, and I
>> would like to reimplement a 2x +2 policy for changes.
>>
>> If there are no objections, I will put in a change at the end of the
>> week. Consider this a +1 vote from me.
>>
>> Thanks,
>>
>> -sc
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Adding calbers to openstack-chef-core

2016-10-04 Thread Samuel Cassiba
Ohai Chefs!

I would like to nominate Christoph Albers (irc: calbers) for
openstack-chef-core.

Christoph has consistently provided great quality reviews over the
Newton cycle. He has been instrumental in getting the cookbooks up to
speed with Identity v3 and openstackclient. During Mitaka, his reviews
were crucial to the refactor work that took place during that cycle.
From the quality of his reviews, he has a solid understanding of the
codebase and I think he is qualified to be a core reviewer.

This will bring us back up to three dedicated core reviewers, and I
would like to reimplement a 2x +2 policy for changes.

If there are no objections, I will put in a change at the end of the
week. Consider this a +1 vote from me.

Thanks,

-sc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Chef PTL candidacy

2016-09-12 Thread Samuel Cassiba
Hello everyone,

I would like to announce my candidacy to continue as OpenStack Chef
PTL for the Ocata release.[0]

= State of the Kitchen =

Over the Newton cycle, we said good bye to some contributors, and
hello to others. It slowed our overall output, but Stackalytics shows
that we still get through about 2 reviews per day. Not impressive
numbers, I know, but like LA traffic, as long as it doesn't come to a
stop, it's a good thing. Though we're down to just two cores, we're
still iterating, and have even gained some new contributors in the
process.

In Newton, we had some pretty big deliverables:

- Ubuntu 16.04
- python-openstackclient
- Identity v3
- newer ChefDKs (0.17.17 at the time of this writing)
- client cookbook based on fog-openstack
- refactor the telemetry cookbook to introduce Gnocchi
- as always, better integration

As of today, several of those things are still in implementation and
review. We're slowly gaining momentum again with the close of the
Newton cycle upon us. With the Ocata cycle being shorter, we have even
less time to get things in shape, but we'll get there.

= The Future =

My goals for the Ocata cycle are to:
- continue getting integration to an unbroken state so that it can be
relied upon for upstream and downstream build health
- furthering the documentation efforts that took place during Mitaka
- get more people familiar with the project to the point where we can
promote some of them to core

As far as process goes, I don't anticipate any big sweeping process
changes for the project over the next cycle. The processes that we
have agreed to seem to be working thus far.

With the Ocata cycle, I want to focus more on engaging new developers
and understanding their obstacles, to ensure that people can get
started deploying OpenStack and hacking on cookbooks.

I look forward to continuing to steer the OpenStack Chef cookbooks as
well as collaborating with other projects within the greater OpenStack
project.

Yours,

Samuel Cassiba

[0] https://review.openstack.org/369027

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] PTL candidacy

2016-09-12 Thread Samuel de Medeiros Queiroz
Hello, everyone!

First of all, I would like to thank all the previous Keystone PTLs,
core reviewers and contributors who have made OpenStack and Keystone
what they are today. It has been a pleasure for me to work in this team
and learn with you all. I would be honored to be your PTL during the
next development cycle.

That said, I would like to put my name forward as a candidate for PTL
during the Ocata release [1].

I have been involved in OpenStack since the Icehouse release and I have
been a core reviewer on Keystone since the Mitaka release. During the
Newton cycle, I also served as the Keystone cross-project liaison and
participated as a mentor for the Outreachy program.

My main focuses have been helping new developers to contribute to the
project and improving documentation. I have been doing my best to review
elected priorities, helping the team to deliver what was scoped to the
release cycle.

Given that the Ocata development window is shorter, my main goal for
Ocata is to focus on the stability and usability of the project. My
primary subgoals include a) tackling the issue with long-running
operations and token expiry, b) keeping up the good work on improving
api-ref docs and creating the api-guide docs, and c) ensuring Keystone
is a friendly place for new contributors to get started. Other subgoals
that are nice to have but still important are d) broadening our tests
environments to include functional tests for LDAP backends, auditing,
and federated identity workflows, and e) making the default RBAC
authorization more granular.

Thank you,
Samuel de Medeiros Queiroz

[1] https://review.openstack.org/#/c/369002/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] new core reviewer (rderose)

2016-09-01 Thread Samuel de Medeiros Queiroz
Ronald,

congrats, well deserved! Welcome aboard!

On Thu, Sep 1, 2016 at 11:56 AM, David Stanek  wrote:

> On Thu, Sep 01 at 10:44 -0400, Steve Martinelli wrote:
> >
> > Thanks for all your hard work Ron, we sincerely appreciate it.
> >
>
> Contrats! Well deserved for sure!
>
> --
> David Stanek
> web: http://dstanek.com
> blog: http://traceback.org
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread Samuel Merritt

On 6/7/16 12:00 PM, Monty Taylor wrote:

[snip]

>

I'd rather see us focus energy on Python3, asyncio and its pluggable
event loops. The work in:

http://magic.io/blog/uvloop-blazing-fast-python-networking/

is a great indication in an actual apples-to-apples comparison of what
can be accomplished in python doing IO-bound activities by using modern
Python techniques. I think that comparing python2+eventlet to a fresh
rewrite in Go isn't 100% of the story. A TON of work has gone in to
Python that we're not taking advantage of because we're still supporting
Python2. So what I've love to see in the realm of comparative
experimentation is to see if the existing Python we already have can be
leveraged as we adopt newer and more modern things.


Asyncio, eventlet, and other similar libraries are all very good for 
performing asynchronous IO on sockets and pipes. However, none of them 
help for filesystem IO. That's why Swift needs a golang object server: 
the go runtime will keep some goroutines running even though some other 
goroutines are performing filesystem IO, whereas filesystem IO in Python 
blocks the entire process, asyncio or no asyncio.
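
(As a concrete illustration of that limitation -- a minimal sketch only, with
made-up helper names, not Swift code -- the best an asyncio program can do
with a regular file is hand the blocking read() to a worker thread via
run_in_executor(); there is no awaitable read for plain files:)

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical example: the event loop cannot await a plain-file read(),
    # so the call is pushed onto a native thread; otherwise the loop itself
    # would stall for the full duration of the disk seek.
    _io_pool = ThreadPoolExecutor(max_workers=4)

    def _blocking_read(path, offset, length):
        with open(path, 'rb') as f:
            f.seek(offset)
            return f.read(length)   # blocks the worker thread, not the loop

    async def read_chunk(path, offset=0, length=65536):
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(_io_pool, _blocking_read,
                                          path, offset, length)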


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Samuel Merritt

On 5/11/16 7:09 AM, Thomas Goirand wrote:

On 05/10/2016 09:56 PM, Samuel Merritt wrote:

On 5/9/16 5:21 PM, Robert Collins wrote:

On 10 May 2016 at 10:54, John Dickinson <m...@not.mn> wrote:

On 9 May 2016, at 13:16, Gregory Haynes wrote:


This is a bit of an aside but I am sure others are wondering the same
thing - Is there some info (specs/etherpad/ML thread/etc) that has more
details on the bottleneck you're running in to? Given that the only
clients of your service are the public facing DNS servers I am now even
more surprised that you're hitting a python-inherent bottleneck.


In Swift's case, the summary is that it's hard[0] to write a network
service in Python that shuffles data between the network and a block
device (hard drive) and effectively utilizes all of the hardware
available. So far, we've done very well by fork()'ing child processes,

...

Initial results from a golang reimplementation of the object server in
Python are very positive[1]. We're not proposing to rewrite Swift
entirely in Golang. Specifically, we're looking at improving object
replication time in Swift. This service must discover what data is on
a drive, talk to other servers in the cluster about what they have,
and coordinate any data sync process that's needed.

[0] Hard, not impossible. Of course, given enough time, we can do
 anything in a Turing-complete language, right? But we're not talking
 about possible, we're talking about efficient tools for the job at
 hand.

...

I'm glad you're finding you can get good results in (presumably)
clean, understandable code.

Given go's historically poor perfornance with multiple cores
(https://golang.org/doc/faq#Why_GOMAXPROCS) I'm going to presume the
major advantage is in the CSP programming model - something that
Twisted does very well: and frustratingly we've had numerous
discussions from folk in the Twisted world who see the pain we have
and want to help, but as a community we've consistently stayed with
eventlet, which has a threaded programming model - and threaded models
are poorly suited for the case here.


At its core, the problem is that filesystem IO can take a surprisingly
long time, during which the calling thread/process is blocked, and
there's no good asynchronous alternative.

Some background:

With Eventlet, when your greenthread tries to read from a socket and the
socket is not readable, then recvfrom() returns -1/EWOULDBLOCK; then,
the Eventlet hub steps in, unschedules your greenthread, finds an
unblocked one, and lets it proceed. It's pretty good at servicing a
bunch of concurrent connections and keeping the CPU busy.

On the other hand, when the socket is readable, then recvfrom() returns
quickly (a few microseconds). The calling process was technically
blocked, but the syscall is so fast that it hardly matters.

Now, when your greenthread tries to read from a file, that read() call
doesn't return until the data is in your process's memory. This can take
a surprisingly long time. If the data isn't in buffer cache and the
kernel has to go fetch it from a spinning disk, then you're looking at a
seek time of ~7 ms, and that's assuming there are no other pending
requests for the disk.

There's no EWOULDBLOCK when reading from a plain file, either. If the
file pointer isn't at EOF, then the calling process blocks until the
kernel fetches data for it.

Back to Swift:

The Swift object server basically does two things: it either reads from
a disk and writes to a socket or vice versa. There's a little HTTP
parsing in there, but the vast majority of the work is shuffling bytes
between network and disk. One Swift object server can service many
clients simultaneously.

The problem is those pauses due to read(). If your process is servicing
hundreds of clients reading from and writing to dozens of disks (in,
say, a 48-disk 4U server), then all those little 7 ms waits are pretty
bad for throughput. Now, a lot of the time, the kernel does some
readahead so your read() calls can quickly return data from buffer
cache, but there are still lots of little hitches.

But wait: it gets worse. Sometimes a disk gets slow. Maybe it's got a
lot of pending IO requests, maybe its filesystem is getting close to
full, or maybe the disk hardware is just starting to get flaky. For
whatever reason, IO to this disk starts taking a lot longer than 7 ms on
average; think dozens or hundreds of milliseconds. Now, every time your
process tries to read from this disk, all other work stops for quite a
long time. The net effect is that the object server's throughput
plummets while it spends most of its time blocked on IO from that one
slow disk.

Now, of course there's things we can do. The obvious one is to use a
couple of IO threads per disk and push the blocking syscalls out
there... and, in fact, Swift did that. In commit b491549, the object
server gained a small threadpool for each disk[1] and started doing its
IO there.

This worked pretty well for avoiding the slow-disk problem. Re

Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Samuel Merritt

On 5/9/16 5:21 PM, Robert Collins wrote:

On 10 May 2016 at 10:54, John Dickinson  wrote:

On 9 May 2016, at 13:16, Gregory Haynes wrote:


This is a bit of an aside but I am sure others are wondering the same
thing - Is there some info (specs/etherpad/ML thread/etc) that has more
details on the bottleneck you're running in to? Given that the only
clients of your service are the public facing DNS servers I am now even
more surprised that you're hitting a python-inherent bottleneck.


In Swift's case, the summary is that it's hard[0] to write a network
service in Python that shuffles data between the network and a block
device (hard drive) and effectively utilizes all of the hardware
available. So far, we've done very well by fork()'ing child processes,

...

Initial results from a golang reimplementation of the object server in
Python are very positive[1]. We're not proposing to rewrite Swift
entirely in Golang. Specifically, we're looking at improving object
replication time in Swift. This service must discover what data is on
a drive, talk to other servers in the cluster about what they have,
and coordinate any data sync process that's needed.

[0] Hard, not impossible. Of course, given enough time, we can do
 anything in a Turing-complete language, right? But we're not talking
 about possible, we're talking about efficient tools for the job at
 hand.

...

I'm glad you're finding you can get good results in (presumably)
clean, understandable code.

Given go's historically poor perfornance with multiple cores
(https://golang.org/doc/faq#Why_GOMAXPROCS) I'm going to presume the
major advantage is in the CSP programming model - something that
Twisted does very well: and frustratingly we've had numerous
discussions from folk in the Twisted world who see the pain we have
and want to help, but as a community we've consistently stayed with
eventlet, which has a threaded programming model - and threaded models
are poorly suited for the case here.


At its core, the problem is that filesystem IO can take a surprisingly 
long time, during which the calling thread/process is blocked, and 
there's no good asynchronous alternative.


Some background:

With Eventlet, when your greenthread tries to read from a socket and the 
socket is not readable, then recvfrom() returns -1/EWOULDBLOCK; then, 
the Eventlet hub steps in, unschedules your greenthread, finds an 
unblocked one, and lets it proceed. It's pretty good at servicing a 
bunch of concurrent connections and keeping the CPU busy.


On the other hand, when the socket is readable, then recvfrom() returns 
quickly (a few microseconds). The calling process was technically 
blocked, but the syscall is so fast that it hardly matters.


Now, when your greenthread tries to read from a file, that read() call 
doesn't return until the data is in your process's memory. This can take 
a surprisingly long time. If the data isn't in buffer cache and the 
kernel has to go fetch it from a spinning disk, then you're looking at a 
seek time of ~7 ms, and that's assuming there are no other pending 
requests for the disk.


There's no EWOULDBLOCK when reading from a plain file, either. If the 
file pointer isn't at EOF, then the calling process blocks until the 
kernel fetches data for it.
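
(A toy demonstration of that, hypothetical and hedged -- the path is made up
and the timings depend entirely on the machine: under eventlet, while one
greenthread sits inside read() on a cold file, every other greenthread stops
running, because the read() syscall never yields back to the hub.)

    import time
    import eventlet

    def heartbeat():
        # Should tick every 10 ms -- but it goes silent for the whole read().
        while True:
            print('tick %.3f' % time.time())
            eventlet.sleep(0.01)

    def big_read(path):
        with open(path, 'rb') as f:
            return f.read()   # blocks the entire OS thread, hub included

    eventlet.spawn(heartbeat)
    eventlet.sleep(0.1)   # let a few ticks through first
    big_read('/srv/node/sdb1/objects/some-large-object')   # hypothetical path
    print('read finished, heartbeat resumes')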


Back to Swift:

The Swift object server basically does two things: it either reads from 
a disk and writes to a socket or vice versa. There's a little HTTP 
parsing in there, but the vast majority of the work is shuffling bytes 
between network and disk. One Swift object server can service many 
clients simultaneously.


The problem is those pauses due to read(). If your process is servicing 
hundreds of clients reading from and writing to dozens of disks (in, 
say, a 48-disk 4U server), then all those little 7 ms waits are pretty 
bad for throughput. Now, a lot of the time, the kernel does some 
readahead so your read() calls can quickly return data from buffer 
cache, but there are still lots of little hitches.


But wait: it gets worse. Sometimes a disk gets slow. Maybe it's got a 
lot of pending IO requests, maybe its filesystem is getting close to 
full, or maybe the disk hardware is just starting to get flaky. For 
whatever reason, IO to this disk starts taking a lot longer than 7 ms on 
average; think dozens or hundreds of milliseconds. Now, every time your 
process tries to read from this disk, all other work stops for quite a 
long time. The net effect is that the object server's throughput 
plummets while it spends most of its time blocked on IO from that one 
slow disk.


Now, of course there's things we can do. The obvious one is to use a 
couple of IO threads per disk and push the blocking syscalls out 
there... and, in fact, Swift did that. In commit b491549, the object 
server gained a small threadpool for each disk[1] and started doing its 
IO there.
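
(For anyone curious what that offload looks like, here is a rough sketch of
the idea only -- not the code from commit b491549, which keeps a small pool
per disk; this shows the simpler shared-pool variant using eventlet's tpool
module, with invented function names and arguments.)

    from eventlet import tpool

    def read_chunk(path, offset, length):
        def _blocking_read():
            with open(path, 'rb') as f:
                f.seek(offset)
                return f.read(length)
        # tpool.execute() runs the read in a native OS thread and only this
        # greenthread waits for it; the hub keeps scheduling everyone else.
        return tpool.execute(_blocking_read)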


This worked pretty well for avoiding the slow-disk problem. Requests 
that touched the slow disk would back up, 

Re: [openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Samuel Cassiba
On Thu, Mar 31, 2016 at 2:40 PM, Emilien Macchi  wrote:

> Hi,
>
> OpenStack big tent has different projects for deployments: Puppet,
> Chef, Ansible, Kolla, TripleO, (...) but we still have some topics  in
> common.
> I propose we use the Cross-project day to meet and talk about our
> common topics: CI, doc, release, backward compatibility management,
> etc.
>
> Feel free to look at the proposed session and comment:
> https://etherpad.openstack.org/p/newton-cross-project-sessions
>
>
+1 on this. Looking forward to meeting people and discussing our common
pain points.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] ec2-api cookbook

2016-03-24 Thread Samuel Cassiba
On Thu, Mar 24, 2016 at 11:25 AM, Ronald Bradford <m...@ronaldbradford.com>
wrote:

> Samuel,
>
> Could you detail when your IRC meeting is, [1] indicates it's on Tuesdays.
>
> [1] https://wiki.openstack.org/wiki/Meetings/EC2API
>
> Regards
>
> Ronald
>

> On Thu, Mar 24, 2016 at 1:36 PM, Samuel Cassiba <s...@cassiba.com> wrote:
>
>> On Thu, Mar 24, 2016 at 3:33 AM, Anastasia Kravets <rtikit...@gmail.com>
>> wrote:
>>
>>> Hi, team!
>>>
>>> If you remember, we've created a cookbook for ec2-api service. After
>>> last discussion I’ve refactored it, have added specs.
>>> The final version is located on cloudscaling github:
>>> https://github.com/cloudscaling/cookbook-openstack-ec2.
>>> How do we proceed to integrate our cookbook to your project?
>>>
>>> Regards,
>>> Anastasia
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> Hi Anastasia,
>>
>> That's great news! We'll have to go through the process of getting a new
>> repo added under our project. Would you be able to attend Monday's meeting
>> to discuss it further?
>>
>> Thanks,
>>
>> Samuel
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Hi Ronald,

Pardon the confusion. I was referring to the Chef OpenStack meeting[1],
which takes place on Mondays.

[1] https://wiki.openstack.org/wiki/Meetings/ChefCookbook

Thanks,

Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] ec2-api cookbook

2016-03-24 Thread Samuel Cassiba
On Thu, Mar 24, 2016 at 3:33 AM, Anastasia Kravets <rtikit...@gmail.com>
wrote:

> Hi, team!
>
> If you remember, we've created a cookbook for ec2-api service. After last
> discussion I’ve refactored it, have added specs.
> The final version is located on cloudscaling github:
> https://github.com/cloudscaling/cookbook-openstack-ec2.
> How do we proceed to integrate our cookbook to your project?
>
> Regards,
> Anastasia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Hi Anastasia,

That's great news! We'll have to go through the process of getting a new
repo added under our project. Would you be able to attend Monday's meeting
to discuss it further?

Thanks,

Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Can swift identify user agent come from chrome browser?

2016-03-19 Thread Samuel Merritt

On 3/17/16 1:53 AM, Linpeimin wrote:

Hello, everyone.

I have configured a web server (tengine) as a proxy server for swift, and
sent a GET request via a Chrome browser in order to access a swift
container. From the log files, it can be seen that the web server has passed
the request to swift, but swift returns an unauthorized error. The log files
record the following:

Access logs of *tengine:*

10.74.167.183 - - [17/Mar/2016:16:30:03 +] "GET /auth/v1.0 HTTP/1.1"
401 131 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36" "-"

10.74.167.183 - - [17/Mar/2016:16:30:03 +] "GET /favicon.ico
HTTP/1.1" 401 649 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.72
Safari/537.36" "-"

Proxy logs of *swift*:

Mar 17 15:12:27 localhost journal: proxy-logging 10.74.167.183
192.168.1.5 17/Mar/2016/15/12/27 GET /auth/v1.0 HTTP/1.0 401 -
Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36
- - 131 - tx21863381504d47098a73846d621fcbd0 - 0.0003 -

Mar 17 15:12:27 localhost journal: tempauth 10.74.167.183 192.168.1.5
17/Mar/2016/15/12/27 GET /auth/v1.0 HTTP/1.0 401 -
Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36
- - - - tx21863381504d47098a73846d621fcbd0 - 0.0013


It's the same value, just URL-encoded. Swift's access log is formatted 
as one record per line, with fields delimited by spaces. Since the 
user-agent string may contain spaces, it's escaped before logging so 
that the log formatting isn't broken.
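
Roughly what's happening, as an illustration (not Swift's exact code path;
Swift runs on Python 2 here):

    from urllib import quote, unquote   # Python 2 stdlib

    ua = ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
          '(KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36')

    logged = quote(ua)
    # -> Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/...
    # Spaces, parentheses, semicolons and commas get escaped so the
    # space-delimited log line stays parseable.
    assert unquote(logged) == ua   # nothing is lost, it's only escaped

So if you need the original string back when analyzing the logs, just unquote
that field.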



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] PTL Candidacy

2016-03-11 Thread Samuel Cassiba
Howdy,

I am announcing my candidacy as Chef OpenStack PTL for the Newton release
cycle.

In the Mitaka cycle, we've accomplished quite a bit, with incredible velocity.
The core cookbooks underwent a significant refactor, removing some 18k lines
of code and refactoring another 4k lines. We also saw the introduction of
integration tests in our gates, as well as some third-party vendor support
in Liberty. With the initial round of refactoring completed for Mitaka, we
have a more streamlined set of cookbooks upon which to build new features.

Being under the big tent is not easy, and we have great challenges in front
of us. My main goal for the Newton cycle is to further reduce the barrier to
entry to the cookbooks, so that we can more easily welcome new contributors
and, hopefully, gain more adoption.

- Documentation
I'd like to place an emphasis on more documentation. I believe this will
enable downstream developers and operators to adopt newer releases more
easily, as well as contribute back to the project. Our users need better
guidance on best practices when using the cookbooks.

- Continuous Integration
We've done a lot on CI, with the integration jobs, but we must also test
different deployment scenarios as proof of concept that a downstream user
can
take these same cookbooks and deploy a production-ready OpenStack cluster.

- Community engagement
I would like to further our collaboration with other projects, notably
OpenStack Infra and upstream packagers (Ubuntu Cloud Archive and RDO), but
also
other OpenStack projects like Heat, TripleO, Magnum, Tempest and
Documentation.
This collaboration makes the OpenStack ecosystem better by providing more
freedom of choice for users when deploying OpenStack. We must continue that
with coordination and maintaining strong lines of communication.

I would greatly appreciate your support for PTL, if you'll have me, and I'm
looking forward to growing the Chef OpenStack team.

The official elections review is here:
https://review.openstack.org/#/c/291967/

Samuel Cassiba (sc`)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg: Configuration Management tool for Openstack.

2016-03-10 Thread Samuel Cassiba
You can also use Chef for either a small-scale or production level
deployment.

Check out https://wiki.openstack.org/wiki/Chef/GettingStarted for a bit
more context, or if you just want to jump in head-first
https://github.com/openstack/openstack-chef-repo/blob/master/README.md

On Thu, Mar 10, 2016 at 1:58 AM, Ferhat Ozkasgarli 
wrote:

> If you are new to openstack, go with packstack. You will learn the basics
> quickly  with packstack.
>
> For production scale installations, you must consider Fuel Infra from
> Mirantis.
> On Mar 8, 2016 1:06 PM, "Shinobu Kinjo"  wrote:
>
>> Good post!
>>
>>
>> http://docs.openstack.org/developer/openstack-ansible/install-guide/index.html
>>
>> So ansible?, I'm puppeter though...
>>
>> Cheers,
>> S
>>
>> On Tue, Mar 8, 2016 at 7:49 PM, Gyorgy Szombathelyi
>>  wrote:
>> > Hi,
>> >
>> > Since I think Ansible is the best config management tool, you should
>> try the openstack-ansible installer, or our Ansible based one:
>> > https://github.com/DoclerLabs/openstack
>> >
>> > Br,
>> > György
>> >
>> >> -Original Message-
>> >> From: cool dharma06 [mailto:cooldharm...@gmail.com]
>> >> Sent: 2016 március 7, hétfő 8:17
>> >> To: openstack-dev@lists.openstack.org
>> >> Subject: [openstack-dev] Reg: Configuration Management tool for
>> >> Openstack.
>> >>
>> >> Hi all,
>> >>
>> >> i have the following questions in Openstack deployment.
>> >>
>> >> 1. i need some configuration management tool for OpenStack. After some
>> >> searches i got some tools like Puppet, Ansible, Chef, Fuel and
>> tripleO. i am
>> >> confused to which one to go.
>> >>
>> >> 2. And also i checked the Github repository for openstack-puppet, i
>> think its
>> >> not active.
>> >>
>> >> Suggest some tool for openstack. i am newbie for this large scale
>> deploment
>> >> and also for configuration management.
>> >>
>> >>
>> >> thanks & regards,
>> >> cooldharma06  .. :)
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Email:
>> shin...@linux.com
>> GitHub:
>> shinobu-x
>> Blog:
>> Life with Distributed Computational System based on OpenSource
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-08 Thread Samuel Bercovici
So this looks like only a database migration, right?

-Original Message-
From: Eichberger, German [mailto:german.eichber...@hpe.com] 
Sent: Tuesday, March 08, 2016 12:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Ok, for what it’s worth we have contributed our migration script: 
https://review.openstack.org/#/c/289595/ — please look at this as a starting 
point and feel free to fix potential problems…

Thanks,
German




On 3/7/16, 11:00 AM, "Samuel Bercovici" <samu...@radware.com> wrote:

>As far as I recall, you can specify the VIP in creating the LB so you will end 
>up with same IPs.
>
>-Original Message-
>From: Eichberger, German [mailto:german.eichber...@hpe.com]
>Sent: Monday, March 07, 2016 8:30 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Hi Sam,
>
>So if you have some 3rd party hardware you only need to change the 
>database (your steps 1-5) since the 3rd party hardware will just keep 
>load balancing…
>
>Now for Kevin’s case with the namespace driver:
>You would need a 6th step to reschedule the loadbalancers with the V2 
>namespace driver — which can be done.
>
>If we want to migrate to Octavia or (from one LB provider to another) it might 
>be better to use the following steps:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format 
>file into some scripts which recreate the load balancers with your 
>provider of choice —
>
>6. Run those scripts
>
>The problem I see is that we will probably end up with different VIPs 
>so the end user would need to change their IPs…
>
>Thanks,
>German
>
>
>
>On 3/6/16, 5:35 AM, "Samuel Bercovici" <samu...@radware.com> wrote:
>
>>As for a migration tool.
>>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>>am in favor for the following process:
>>
>>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, 
>>Health Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
>>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back 
>>over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to 
>>make room to some custom modification for mapping between v1 and v2
>>models)
>>
>>What do you think?
>>
>>-Sam.
>>
>>
>>
>>
>>-Original Message-
>>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>>Sent: Friday, March 04, 2016 2:06 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Ok. Thanks for the info.
>>
>>Kevin
>>
>>From: Brandon Logan [brandon.lo...@rackspace.com]
>>Sent: Thursday, March 03, 2016 2:42 PM
>>To: openstack-dev@lists.openstack.org
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Just for clarity, V2 did not reuse tables, all the tables it uses are only 
>>for it.  The main problem is that v1 and v2 both have a pools resource, but 
>>v1 and v2's pool resource have different attributes.  With the way neutron 
>>wsgi works, if both v1 and v2 are enabled, it will combine both sets of 
>>attributes into the same validation schema.
>>
>>The other problem with v1 and v2 running together was only occurring when the 
>>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>>may actually have been fixed with some agent updates in neutron, since that 
>>is common code.  It needs to be tested out though.
>>
>>Thanks,
>>Brandon
>>
>>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>>> Just because you had thought no one was using it outside of a PoC doesn't 
>>> mean folks aren''t using it in production.
>>>
>>> We would be happy to migrate to Octavia. We were planning on doing just 
>>> that by running both v1 with haproxy namespace, and v2 with Octavia and 
>>> then pick off upgrading lb's one at a time, but the reuse of the v1 tables 
>>> really was an unfortunate decision that blocked that activity.
>>>
>>> We're still trying to figure out a path forward.
>>>
>>> We have an outage window next month. after that, it could be about 6 
>>> months before

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-08 Thread Samuel Bercovici
Same with Radware. Hence my proposal.

-Original Message-
From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com] 
Sent: Tuesday, March 08, 2016 3:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Hi German,

>> So if you have some 3rd party hardware you only need to change the 
>> database (your steps 1-5) since the 3rd party hardware will just keep 
>> load balancing…

This is not the case with NetScaler; it has to go through a Delete of V1 
followed by a Create in V2 if a smooth migration is required. 

Thanks,
Vijay V.
-Original Message-
From: Eichberger, German [mailto:german.eichber...@hpe.com]
Sent: 08 March 2016 00:00
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Hi Sam,

So if you have some 3rd party hardware you only need to change the database 
(your steps 1-5) since the 3rd party hardware will just keep load balancing…

Now for Kevin’s case with the namespace driver:
You would need a 6th step to reschedule the loadbalancers with the V2 namespace 
driver — which can be done.

If we want to migrate to Octavia or (from one LB provider to another) it might 
be better to use the following steps:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format file into 
some scripts which recreate the load balancers with your provider of choice — 

6. Run those scripts

The problem I see is that we will probably end up with different VIPs so the 
end user would need to change their IPs… 

Thanks,
German



On 3/6/16, 5:35 AM, "Samuel Bercovici" <samu...@radware.com> wrote:

>As for a migration tool.
>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>am in favor for the following process:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back 
>over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to 
>make room to some custom modification for mapping between v1 and v2
>models)
>
>What do you think?
>
>-Sam.
>
>
>
>
>-Original Message-
>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>Sent: Friday, March 04, 2016 2:06 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Ok. Thanks for the info.
>
>Kevin
>
>From: Brandon Logan [brandon.lo...@rackspace.com]
>Sent: Thursday, March 03, 2016 2:42 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Just for clarity, V2 did not reuse tables, all the tables it uses are only for 
>it.  The main problem is that v1 and v2 both have a pools resource, but v1 and 
>v2's pool resource have different attributes.  With the way neutron wsgi 
>works, if both v1 and v2 are enabled, it will combine both sets of attributes 
>into the same validation schema.
>
>The other problem with v1 and v2 running together was only occurring when the 
>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>may actually have been fixed with some agent updates in neutron, since that is 
>common code.  It needs to be tested out though.
>
>Thanks,
>Brandon
>
>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>> Just because you had thought no one was using it outside of a PoC doesn't 
>> mean folks aren''t using it in production.
>>
>> We would be happy to migrate to Octavia. We were planning on doing just that 
>> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
>> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
>> an unfortunate decision that blocked that activity.
>>
>> We're still trying to figure out a path forward.
>>
>> We have an outage window next month. after that, it could be about 6 
>> months before we could try a migration due to production load picking 
>> up for a while. I may just have to burn out all the lb's switch to 
>> v2, then rebuild them by hand in a marathon outage :/
>>
>> And then there's this thingy that also critically needs fixing:
>> https://bugs.launchpad.net/neutron/+bug/1457556
>>
>> Thanks,
>> Kevin
>> ___

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-07 Thread Samuel Bercovici
As far as I recall, you can specify the VIP when creating the LB, so you will 
end up with the same IPs.

-Original Message-
From: Eichberger, German [mailto:german.eichber...@hpe.com] 
Sent: Monday, March 07, 2016 8:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Hi Sam,

So if you have some 3rd party hardware you only need to change the database 
(your steps 1-5) since the 3rd party hardware will just keep load balancing…

Now for Kevin’s case with the namespace driver:
You would need a 6th step to reschedule the loadbalancers with the V2 namespace 
driver — which can be done.

If we want to migrate to Octavia or (from one LB provider to another) it might 
be better to use the following steps:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format file into 
some scripts which recreate the load balancers with your provider of choice — 

6. Run those scripts

The problem I see is that we will probably end up with different VIPs so the 
end user would need to change their IPs… 

Thanks,
German



On 3/6/16, 5:35 AM, "Samuel Bercovici" <samu...@radware.com> wrote:

>As for a migration tool.
>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>am in favor for the following process:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back 
>over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to 
>make room to some custom modification for mapping between v1 and v2 
>models)
>
>What do you think?
>
>-Sam.
>
>
>
>
>-Original Message-
>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>Sent: Friday, March 04, 2016 2:06 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Ok. Thanks for the info.
>
>Kevin
>
>From: Brandon Logan [brandon.lo...@rackspace.com]
>Sent: Thursday, March 03, 2016 2:42 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Just for clarity, V2 did not reuse tables, all the tables it uses are only for 
>it.  The main problem is that v1 and v2 both have a pools resource, but v1 and 
>v2's pool resource have different attributes.  With the way neutron wsgi 
>works, if both v1 and v2 are enabled, it will combine both sets of attributes 
>into the same validation schema.
>
>The other problem with v1 and v2 running together was only occurring when the 
>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>may actually have been fixed with some agent updates in neutron, since that is 
>common code.  It needs to be tested out though.
>
>Thanks,
>Brandon
>
>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>> Just because you had thought no one was using it outside of a PoC doesn't 
>> mean folks aren''t using it in production.
>>
>> We would be happy to migrate to Octavia. We were planning on doing just that 
>> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
>> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
>> an unfortunate decision that blocked that activity.
>>
>> We're still trying to figure out a path forward.
>>
>> We have an outage window next month. after that, it could be about 6 
>> months before we could try a migration due to production load picking 
>> up for a while. I may just have to burn out all the lb's switch to 
>> v2, then rebuild them by hand in a marathon outage :/
>>
>> And then there's this thingy that also critically needs fixing:
>> https://bugs.launchpad.net/neutron/+bug/1457556
>>
>> Thanks,
>> Kevin
>> 
>> From: Eichberger, German [german.eichber...@hpe.com]
>> Sent: Thursday, March 03, 2016 12:47 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>> Kevin,
>>
>>  If we are offering a migration tool it would be namespace -> 
>> namespace (or maybe Octavia since [1]) - given the limitations nobody 
>> should be using the namespace driver outside a PoC so I am a bit 
>> confused why customers can't self migrate. With 3

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-06 Thread Samuel Bercovici
As for a migration tool:
Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I am 
in favor of the following process:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
Monitors, Members) into some JSON format file(s)
2. Delete LBaaS v1
3. Uninstall LBaaS v1
4. Install LBaaS v2
5. Import the data from 1 back over LBaaS v2 (need to allow moving from 
flavor1-->flavor2, and need to leave room for some custom modifications for 
mapping between v1 and v2 models)
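
To illustrate step 1, a rough sketch (hypothetical and untested; the
credentials/endpoint are placeholders, and it ignores the flavor and tenant
mapping mentioned in step 5) could look like:

    import json
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Dump the LBaaS v1 objects to a JSON file that a later v2 import
    # script can map onto loadbalancers/listeners/pools.
    dump = {
        'vips': neutron.list_vips()['vips'],
        'pools': neutron.list_pools()['pools'],
        'members': neutron.list_members()['members'],
        'health_monitors': neutron.list_health_monitors()['health_monitors'],
    }

    with open('lbaas_v1_dump.json', 'w') as f:
        json.dump(dump, f, indent=2)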

What do you think?

-Sam.




-Original Message-
From: Fox, Kevin M [mailto:kevin@pnnl.gov] 
Sent: Friday, March 04, 2016 2:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Ok. Thanks for the info.

Kevin

From: Brandon Logan [brandon.lo...@rackspace.com]
Sent: Thursday, March 03, 2016 2:42 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Just for clarity, V2 did not reuse tables, all the tables it uses are only for 
it.  The main problem is that v1 and v2 both have a pools resource, but v1 and 
v2's pool resource have different attributes.  With the way neutron wsgi works, 
if both v1 and v2 are enabled, it will combine both sets of attributes into the 
same validation schema.

The other problem with v1 and v2 running together was only occurring when the 
v1 agent driver and v2 agent driver were both in use at the same time.  This 
may actually have been fixed with some agent updates in neutron, since that is 
common code.  It needs to be tested out though.

Thanks,
Brandon

On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
> Just because you had thought no one was using it outside of a PoC doesn't 
> mean folks aren''t using it in production.
>
> We would be happy to migrate to Octavia. We were planning on doing just that 
> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
> an unfortunate decision that blocked that activity.
>
> We're still trying to figure out a path forward.
>
> We have an outage window next month. after that, it could be about 6 
> months before we could try a migration due to production load picking 
> up for a while. I may just have to burn out all the lb's switch to v2, 
> then rebuild them by hand in a marathon outage :/
>
> And then there's this thingy that also critically needs fixing:
> https://bugs.launchpad.net/neutron/+bug/1457556
>
> Thanks,
> Kevin
> 
> From: Eichberger, German [german.eichber...@hpe.com]
> Sent: Thursday, March 03, 2016 12:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
> Kevin,
>
>  If we are offering a migration tool it would be namespace -> 
> namespace (or maybe Octavia since [1]) - given the limitations nobody 
> should be using the namespace driver outside a PoC so I am a bit 
> confused why customers can't self migrate. With 3rd party Lbs I would 
> assume vendors proving those scripts to make sure their particular 
> hardware works with those. If you indeed need a migration from LBaaS 
> V1 namespace -> LBaaS V2 namespace/Octavia please file an RfE with 
> your use case so we can discuss it further...
>
> Thanks,
> German
>
> [1] https://review.openstack.org/#/c/286380
>
> From: "Fox, Kevin M" <kevin@pnnl.gov<mailto:kevin@pnnl.gov>>
> Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)" 
> <openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstac
> k.org>>
> Date: Wednesday, March 2, 2016 at 5:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstac
> k.org>>
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
> no removal without an upgrade path. I've got v1 LB's and there still isn't a 
> migration script to go from v1 to v2.
>
> Thanks,
> Kevin
>
>
> 
> From: Stephen Balukoff 
> [sbaluk...@bluebox.net<mailto:sbaluk...@bluebox.net>]
> Sent: Wednesday, March 02, 2016 4:49 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
> I am also on-board with removing LBaaS v1 as early as possible in the Newton 
> cycle.
>
> On Wed, Mar 2, 2016 at 9:44 AM, Samuel Bercovici 
> <samu...@radware.com

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-02 Thread Samuel Bercovici
Thank you all for your response.

In my opinion, given that UI/HEAT will make Mitaka and will have one cycle to 
mature, it makes sense to remove LBaaS v1 in Newton.
Do we want to discuss an upgrade process at the summit?

-Sam.


From: Bryan Jones [mailto:jone...@us.ibm.com]
Sent: Wednesday, March 02, 2016 5:54 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

And as for the Heat support, the resources have made Mitaka, with additional 
functional tests on the way soon.

blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
BRYAN M. JONES
Software Engineer - OpenStack Development
Phone: 1-507-253-2620
E-mail: jone...@us.ibm.com<mailto:jone...@us.ibm.com>


- Original message -
From: Justin Pomeroy 
<jpom...@linux.vnet.ibm.com<mailto:jpom...@linux.vnet.ibm.com>>
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Cc:
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?
Date: Wed, Mar 2, 2016 9:36 AM

As for the horizon support, much of it will make Mitaka.  See the blueprint and 
gerrit topic:

https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z

- Justin

On 3/2/16 9:22 AM, Doug Wiegley wrote:
Hi,

A few things:

- It’s not proposed for removal in Mitaka. That patch is for Newton.
- HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard for the 
latter.)
- I don’t view this as a “keep or delete” question. If sufficient folks are 
interested in maintaining it, there is a third option, which is that the code 
can be maintained in a separate repo, by a separate team (with or without the 
current core team’s blessing.)

No decisions have been made yet, but we are on the cusp of some major 
maintenance changes, and two deprecation cycles have passed. Which path forward 
is being discussed at today’s Octavia meeting, or feedback is of course 
welcomed here, in IRC, or anywhere.

Thanks,
doug

On Mar 2, 2016, at 7:06 AM, Samuel Bercovici 
<samu...@radware.com<mailto:samu...@radware.com>> wrote:

Hi,

I have just notices the following change: 
https://review.openstack.org/#/c/286381 which aims to remove LBaaS v1.
Is this planned for Mitaka or for Newton?

While LBaaS v2 is becoming the default, I think that we should have the 
following before we replace LBaaS v1:
1.  Horizon Support – was not able to find any real activity on it
2.  HEAT Support – will it be ready in Mitaka?

Do you have any other items that are needed before we get rid of LBaaS v1?

-Sam.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-02 Thread Samuel Bercovici
Hi,

I have just noticed the following change: 
https://review.openstack.org/#/c/286381, which aims to remove LBaaS v1.
Is this planned for Mitaka or for Newton?

While LBaaS v2 is becoming the default, I think that we should have the 
following before we replace LBaaS v1:

1.  Horizon Support - was not able to find any real activity on it

2.  HEAT Support - will it be ready in Mitaka?

Do you have any other items that are needed before we get rid of LBaaS v1?

-Sam.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

