Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread James E. Blair
Monty Taylor  writes:

> On 08/01/2018 12:45 AM, Ian Wienand wrote:
>> Hello,
>> I'd suggest to start, people with an interest in a channel can request
>> +r from an IRC admin in #openstack-infra and we track it at [2]
>
> To mitigate the pain caused by +r - we have created a channel called
> #openstack-unregistered and have configured the channels with the +r
> flag to forward people to it. We have also set an entrymsg on
> #openstack-unregistered to:
>
> "Due to a prolonged SPAM attack on freenode, we had to configure
> OpenStack channels to require users to be registered. If you are here,
> you tried to join a channel without being logged in. Please see
> https://freenode.net/kb/answer/registration for instructions on
> registration with NickServ, and make sure you are logged in."
>
> So anyone attempting to join a channel with +r should get that message.

It turns out this was a very popular option, so we've gone ahead and
performed this for all channels registered with accessbot.  If you're in
a channel that still needs this, please add it to the accessbot channel
list[1] and let us know in #openstack-infra.

Also, if folks would be willing to lurk in #openstack-unregistered to
help anyone who ends up there by surprise and is unfamiliar with how to
register with nickserv, that would be great.

-Jim

[1] 
https://git.openstack.org/cgit/openstack-infra/project-config/tree/accessbot/channels.yaml



Re: [openstack-dev] [python3][tc][infra][docs] changing the documentation build PTI to use tox

2018-07-09 Thread James E. Blair
Doug Hellmann  writes:

> Excerpts from Zane Bitter's message of 2018-07-09 11:04:28 -0400:
>> On 05/07/18 16:46, Doug Hellmann wrote:
>> > I have a governance patch up [1] to change the project-testing-interface
>> > (PTI) for building documentation to restore the use of tox.
>> > 
>> > We originally changed away from tox because we wanted to have a
>> > single standard command that anyone could use to build the documentation
>> > for a project. It turns out that is more complicated than just
>> > running sphinx-build in a lot of cases anyway, because of course
>> > you have a bunch of dependencies to install before sphinx-build
>> > will work.
>> 
>> Is this the main reason? If we think we made the wrong call (i.e. 
>> everyone has to set up a virtualenv and install doc/requirements.txt 
>> anyway so we should just make them use tox even if they are not Python 
>> projects), then I agree it makes sense to fix it even though we only 
>> _just_ finished telling people it would be the opposite way.
>
> Yes, we made the wrong call when we set the PTI to not use tox for these
> cases.
>
>> > Updating the job that uses sphinx directly to run under python 3,
>> > while allowing the transition to be self-testing, was going to
>> > require writing some extra complexity to look at something in the
>> > repository to decide what version of python to use.  Since tox
>> > handles that for us by letting us set basepython in the virtualenv
>> > configuration, it seemed more straightforward to go back to using
>> > tox.
>> 
>> Wouldn't another option be to have separate Zuul jobs for Python 3 and 
>> Python 2-based sphinx builds? Then the switchover would still be 
>> self-testing.
>> 
>> I'd rather do that if this is the main problem we're trying to solve, 
>> rather than reverse course.
>
> These jobs run on tag events, which are not "branch aware" (tags
> can be on 0 or more branches at the same time). That means we cannot
> have different versions of the job running for different branches.
>
> Instead we need 1 job, which uses data inside the repository to
> decide exactly what to do. Instead of writing a new, more complicated,
> job to look at a flag file or other settings to decide whether to
> run sphinx under python 2 or 3, it will be simpler to go back to
> using the old existing tox-based job and to use the tox configuration
> to control the version of python. Using the tox job also has the
> benefit of fixing the tox-siblings issue for projects like neutron
> plugins that need neutron installed in order to generate their
> documentation. So we fix 2 problems with 1 change.
>
> We actually have a similar problem for the release job, but in that
> case we don't need tox because we don't need to install any
> dependencies in order to build the artifacts.  I have tested building
> sdists and wheels from every repo with a setup.py and did not find
> any failures related to using python 3, so we can just switch
> everyone over to use the new job.

Indeed, this is a situation where in many cases our intuition collides
with git's implementation.  We've always had this restriction with Zuul
(we can cause different jobs to run for different tags, but we can only
do so by matching the name of the tag, not the name of the branch that
people associate with the tag).  If we were very consistent about
release version numbers and branches across projects, we could write
some configuration which ran python2 jobs on some releases and python3
jobs on others.  But we aren't in that position, and doing so would
require a jumble of regexes, different for each project.

In Zuul v3, since much of the configuration is in-repo, the desire to
alter tag/release jobs based on the content in-repo is even closer to
the surface.  So the desire to handle this situation better is growing,
and I think stands on its own merit.  To that end, we've started
exploring some changes to Zuul in that direction.  One of them is here:
https://review.openstack.org/578557

But, even if we do land that change, I think the PTI change that Doug is
proposing is the best thing for us to do in this situation.  We made the
PTI so that we have a really simple interface and line of demarcation
where we say that, collectively, we want all projects to be able to
build docs, and we're going to build a bunch of automation around that,
but the PTI is the boundary between that automation and the in-repo
content.  It has served us very well through a number of changes to how
we run unit tests.  The fact that we've gone through far fewer changes
to how docs are built has perhaps led us to think that we didn't need
the layer of abstraction that tox provided us.  However, as soon as we
removed it, we encountered a situation where, in fact, it would have
insulated us.
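
To make the mechanics concrete, here is a minimal sketch of the tox-based
arrangement under discussion (the job name and details are illustrative
assumptions, not taken from this thread): the job simply invokes a tox
environment, and the repository's own tox.ini (via basepython in the docs
testenv) decides which Python interpreter runs the build.

- job:
    name: project-docs          # hypothetical job name
    parent: tox                 # assumes the generic tox job from zuul-jobs
    vars:
      tox_envlist: docs         # run "tox -e docs"; tox.ini selects python2 or python3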

Put another way, I think the spirit of the PTI is about finding the
right place where the automation that we build for all the projects
stops, and the project-specific implementation begins.  Facilitating a

[openstack-dev] [infra] Behavior change in Zuul post pipeline

2018-06-26 Thread James E. Blair
Hi,

We recently changed the behavior* of the post pipeline in Zuul to only
run jobs for the most recently merged changes on each project's
branches.  If you were relying on the old behavior where jobs ran on
every merged change, let us know, we can make a new pipeline for that.
But for the typical case, this should result in some improvements:

1) We waste fewer build resources building intermediate build artifacts
(e.g., documentation for a version which is already obsoleted by the
change which landed after it).

2) Races in artifact build jobs will no longer result in old versions of
documentation being published because they ran on a slightly faster node
than the newer version.

If you observe any unexpected behavior as the result of this change,
please let us know in #openstack-infra.

-Jim

* The thing which implements this behavior in Zuul is the
  "supercedent"** pipeline manager[1].  Zuul has had, since the initial
  commit six years ago, a pluggable system for controlling the behavior
  in its pipelines.  To date, we have only had two pipeline managers:
  "dependent" which controls the gate, and "independent" which controls
  everything else.

[1] 
https://zuul-ci.org/docs/zuul/user/config.html#value-pipeline.manager.supercedent

** It may or may not be named after anyone you know.
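
As a rough illustration (the trigger details below are simplified
assumptions, not copied from our production configuration), a post
pipeline using this manager is declared along these lines:

- pipeline:
    name: post
    manager: supercedent        # only the most recent queued ref per project/branch is run
    trigger:
      gerrit:
        - event: ref-updated
          ref: ^refs/heads/.*$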



Re: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs

2018-06-15 Thread James E. Blair
Doug Hellmann  writes:

> Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900:

>> Yes, It will not be set on LIBS_FROM_GIT as we did not set it
>> explicitly. But gate running on any repo does run job on current
>> change set of that repo which is nothing but "master + current patch
>> changes" . For example, any job running on oslo.config patch will
>> take oslo.config source code from that patch which is "master +
>> current change". You can see the results in this patch -
>> https://review.openstack.org/#/c/575324/ . Where I deleted a module
>> and gate jobs (including tempest-full-py3) fails as they run on
>> current change set of neutron-lib code not on pypi version(which
>> would pass the tests).
>
> The tempest-full-py3 job passed for that patch, though. Which seems to
> indicate that the neutron-lib repository was not used in the test job,
> even though it was checked out.

The automatic generation of LIBS_FROM_GIT only includes projects which
appear in required-projects.  So in this case neutron-lib does not
appear in LIBS_FROM_GIT[1], so the change is not actually tested by that
job.
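
As a sketch (the job name here is hypothetical), the fix on the job side
is simply to list the library under required-projects, so that it ends up
in the generated LIBS_FROM_GIT:

- job:
    name: neutron-lib-forward-testing-py3   # hypothetical job name
    parent: tempest-full-py3
    required-projects:
      - openstack/neutron-lib               # now included in LIBS_FROM_GIT automatically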

Doug's approach of adding {{zuul.project}} to LIBS_FROM_GIT would work,
but anytime LIBS_FROM_GIT is set explicitly, it turns off the automatic
generation, so more complex jobs (which may want to inherit from that
job but need multiple libraries) would also have to override
LIBS_FROM_GIT and add the full set of projects.

The code that automatically sets LIBS_FROM_GIT is fairly simple and
could be modified to automatically add the project of the change under
test.  We could do that for all jobs, or we could add a flag which
toggles the behavior.  The question to answer here is: is there ever a
case where a devstack job should not install the change under test from
source?  I think the answer is no, and even though under Zuul v2
devstack-gate didn't automatically add the project under test to
LIBS_FROM_GIT, we probably had that behavior anyway due to some JJB
templating which did.

A further thing to consider is what the desired behavior is for a series
of changes.  If a change to neutron-lib depends on a change to
oslo.messaging, when the forward testing job runs on neutron-lib, should
it also add oslo.messaging to LIBS_FROM_GIT?  That's equally easy to
implement (but would certainly need a flag, as it would essentially add
everything in the change series to LIBS_FROM_GIT, defeating the
purpose of the restriction for the jobs that need it), but I honestly
am not certain what's desired.

For each type of project (service, lib, lib-group (e.g., oslo.messaging)),
what do we want to test from git vs pypi?  How many jobs are needed to
accomplish that?  What should happen with a change series with other
projects in it?

[1] 
http://logs.openstack.org/24/575324/3/check/tempest-full-py3/d183788/controller/logs/_.localrc_auto.txt

-Jim



[openstack-dev] [infra][all] Upcoming Zuul behavior change for files and irrelevant-files

2018-06-07 Thread James E. Blair
Hi,

Earlier[1][2], we discussed proposals to make files and irrelevant-files
easier to use -- particularly ways to make them overridable.  We settled
on an approach, and it is now implemented.  We plan on upgrading
OpenStack's Zuul to the new behavior on Monday, June 11, 2018.

To summarize the change:

  Files and irrelevant-files are treated as overwriteable attributes and
  evaluated after branch-matching variants are combined.
  
  * Files and irrelevant-files are overwritten, so the last value
encountered when combining all the matching variants (looking only at
branches) wins.
  * It's possible to both reduce and expand the scope of jobs, but the
user may need to manually copy values from a parent or other variant
in order to do so.
  * It will no longer be possible to alter a job attribute by adding a
variant with only a files matcher -- in all cases files and
irrelevant-files are used solely to determine whether the job is run,
not to determine whether to apply a variant.
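
As an illustrative sketch (the template, job, and file names are made up),
a project can now narrow a job supplied by a template simply by restating
the matcher; the project-local value below replaces whatever
irrelevant-files the template set for the same branch:

- project:
    templates:
      - integration-template      # hypothetical template that also adds my-job
    check:
      jobs:
        - my-job:
            irrelevant-files:     # overwrites the template's value; not combined with it
              - ^docs/.*$
              - ^releasenotes/.*$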

This is a behavior change to Zuul that is not possible[3] to support in
a backwards compatible way.  That means that on Monday, there may be
sudden alterations to the set of jobs which run on changes.  Considering
that many of us can barely predict what happens at all when multiple
irrelevant-files stanzas enter the picture, it's not possible[4] to say
in advance exactly what the changes will be.

Suffice it to say that, on Monday, if some jobs you were expecting to
run on a change don't, or some jobs you were not expecting to run do,
then you will need to alter the files or irrelevant-files matchers on
those jobs.  Hopefully the new approach is sufficiently intuitive that
corrective changes will be simple to make.  Jobs which have no more than
one files or irrelevant-files attribute involved in their construction
(likely the bulk of the jobs out there) are unlikely to need any
immediate changes.

Please let us know in #openstack-infra if you encounter any problems and
we'll be happy to help.  Hopefully after we cross this speedbump we'll
find the files and irrelevant-files matchers much more useful.

-Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130074.html
[2] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-May/000397.html
[3] At least, not possible with a reasonable amount of effort.
[4] Of course it's possible but only with an unhealthy amount of beer.



Re: [openstack-dev] Winterscale: a proposal regarding the project infrastructure

2018-05-31 Thread James E. Blair
Joshua Hesketh  writes:

> So the "winterscale infrastructure council"'s purview is quite limited in
> scope to just govern the services provided?
>
> If so, would you foresee a need to maintain some kind of "Infrastructure
> council" as it exists at the moment to be the technical design body?

For the foreseeable future, I think the "winterscale infrastructure
team" can probably handle that.  If it starts to sprawl again, we can
make a new body.

> Specifically, wouldn't we still want somewhere for the "winterscale
> infrastructure team" to be represented and would that expand to any
> infrastructure-related core teams?

Can you elaborate on this?  I'm not following.

-Jim



Re: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure

2018-05-30 Thread James E. Blair
Doug Hellmann  writes:

>> >> * Establish a "winterscale infrastructure council" (to be renamed) which
>> >>   will govern the services that the team provides by vote.  The council
>> >>   will consist of the PTL of the winterscale infrastructure team and one
>> >>   member from each official OpenStack Foundation project.  Currently, as
>> >>   I understand it, there's only one: OpenStack.  But we expect kata,
>> >>   zuul, and others to be declared official in the not too distant
>> >>   future.  The winterscale representative (the PTL) will have
>> >>   tiebreaking and veto power over council decisions.
>> >
>> > That structure seems sound, although it means the council is going
>> > to be rather small (at least in the near term).  What sorts of
>> > decisions do you anticipate needing to be addressed by this council?
>> 
>> Yes, very small.  Perhaps we need an interim structure until it gets
>> larger?  Or perhaps just discipline and agreement that the two people on
>> it will consult with the necessary constituencies and represent them
>> well?
>
> I don't want to make too much out of it, but it does feel a bit odd
> to have a 2 person body where 1 person has the final decision power. :-)
>
> Having 2 people per official team (including winterscale) would
> give us more depth of coverage overall (allowing for quorum when
> someone is on vacation, for example).  In the short term, it also
> has the benefit of having twice as many people involved.

That's a good idea, and we can scale it down later if needed.

>> I expect the council not to have to vote very often.  Perhaps only on
>> substantial changes to services (bringing a new offering online,
>> retiring a disused offering, establishing parameters of a service).  As
>> an example, the recent thread on "terms of service" would be a good
>> topic for the council to settle.
>
> OK, so not on every change but on the significant ones that might affect
> more than one project. Ideally any sort of conflict would be worked out
> in advance, but it's good to have the process in place to resolve
> problems before they come up.

Yes, and like most things, I think the biggest value will be in having
the forum to propose changes, discuss them, and collect feedback from
all members of participating projects (not just voting members).
Hopefully in most decisions, the votes are just a formality which
confirms the consensus (but if there isn't consensus, we still need to
be able to make a decision).

-Jim



Re: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure

2018-05-30 Thread James E. Blair
Doug Hellmann  writes:

>> * Move many of the git repos currently under the OpenStack project
>>   infrastructure team's governance to this new team.
>
> I'm curious about the "many" in that sentence. Which do you anticipate
> not moving, and if this new team replaces the existing team then who
> would end up owning the ones that do not move?

There are a lot.  Generally speaking, I think most of the custom
software, deployment tooling, and configuration would move.

An example of something that probably shouldn't move is
"openstack-zuul-jobs".  We still need people that are concerned with how
OpenStack uses the winterscale service.  I'm not sure whether that
should be its own team or whether those functions should be folded into
other teams.

>> * Establish a "winterscale infrastructure council" (to be renamed) which
>>   will govern the services that the team provides by vote.  The council
>>   will consist of the PTL of the winterscale infrastructure team and one
>>   member from each official OpenStack Foundation project.  Currently, as
>>   I understand it, there's only one: OpenStack.  But we expect kata,
>>   zuul, and others to be declared official in the not too distant
>>   future.  The winterscale representative (the PTL) will have
>>   tiebreaking and veto power over council decisions.
>
> That structure seems sound, although it means the council is going
> to be rather small (at least in the near term).  What sorts of
> decisions do you anticipate needing to be addressed by this council?

Yes, very small.  Perhaps we need an interim structure until it gets
larger?  Or perhaps just discipline and agreement that the two people on
it will consult with the necessary constituencies and represent them
well?

I expect the council not to have to vote very often.  Perhaps only on
substantial changes to services (bringing a new offering online,
retiring a disused offering, establishing parameters of a service).  As
an example, the recent thread on "terms of service" would be a good
topic for the council to settle.

>>   (This is structured loosely based on the current Infrastructure
>>   Council used by the OpenStack Project Infrastructure Team.)
>> 
>> None of this is obviously final.  My goal here is to give this effort a
>> name and a starting point so that we can discuss it and make progress.
>> 
>> -Jim
>> 
>
> Thanks for starting this thread! I've replied to both mailing lists
> because I wasn't sure which was more appropriate. Please let me
> know if I should focus future replies on one list.

Indeed, perhaps we should steer this toward openstack-dev now.  I'll
drop openstack-infra from future replies.

-Jim



[openstack-dev] Winterscale: a proposal regarding the project infrastructure

2018-05-30 Thread James E. Blair
Hi,

With recent changes implemented by the OpenStack Foundation to include
projects other than "OpenStack" under its umbrella, it has become clear
that the "Project Infrastructure Team" needs to change.

The infrastructure that is run for the OpenStack project is valued by
other OpenStack Foundation projects (and beyond).  Our community has not
only produced an amazing cloud infrastructure system, but it has also
pioneered new tools and techniques for software development and
collaboration.

For some time it's been apparent that we need to alter the way we run
services in order to accommodate other Foundation projects.  We've been
talking about this informally for at least the last several months.  One
of the biggest sticking points has been a name for the effort.  It seems
very likely that we will want a new top-level domain for hosting
multiple projects in a neutral environment (so that people don't have to
say "hosted on OpenStack's infrastructure").  But finding such a name is
difficult, and even before we do, we need to talk about it.

I propose we call the overall effort "winterscale".  In the best
tradition of code names, it means nothing; look for no hidden meaning
here.  We won't use it for any actual services we provide.  We'll use it
to refer to the overall effort of restructuring our team and
infrastructure to provide services to projects beyond OpenStack itself.
And we'll stop using it when the restructuring effort is concluded.

This is my first proposal: that we acknowledge this effort is underway
and name it as such.

My second proposal is an organizational structure for this effort.
First, some goals:

* The infrastructure should be collaboratively run as it is now, and
  the operational decisions should be made by the core reviewers as
  they are now.

* Issues of service definition (i.e., what services we offer and how
  they are used) should be made via a collaborative process including
  the infrastructure operators and the projects which use it.

To that end, I propose that we:

* Work with the Foundation to create a new effort independent of the
  OpenStack project with the goal of operating infrastructure for the
  wider OpenStack Foundation community.

* Work with the Foundation marketing team to help us with the branding
  and marketing of this effort.

* Establish a "winterscale infrastructure team" (to be renamed)
  consisting of the current infra-core team members to operate this
  effort.

* Move many of the git repos currently under the OpenStack project
  infrastructure team's governance to this new team.

* Establish a "winterscale infrastructure council" (to be renamed) which
  will govern the services that the team provides by vote.  The council
  will consist of the PTL of the winterscale infrastructure team and one
  member from each official OpenStack Foundation project.  Currently, as
  I understand it, there's only one: OpenStack.  But we expect kata,
  zuul, and others to be declared official in the not too distant
  future.  The winterscale representative (the PTL) will have
  tiebreaking and veto power over council decisions.

  (This is structured loosely based on the current Infrastructure
  Council used by the OpenStack Project Infrastructure Team.)

None of this is obviously final.  My goal here is to give this effort a
name and a starting point so that we can discuss it and make progress.

-Jim



Re: [openstack-dev] [Zun] Relocate jobs from openstack/zun to openstack/zun-tempest-plugin

2018-05-16 Thread James E. Blair
Hongbin Lu  writes:

> The goal of those patches is to move the job definitions and playbooks from
> openstack/zun to openstack/zun-tempest-plugin. The advantages of such
> change are as following:
>
> * Make job definitions closer to tempest test cases so that it is optimal
> for development and code reviews workflow. For example, sometime, we can
> avoid to split a patch into two repos in order to add a simple tempest test
> case.
> * openstack/zun is branched and openstack/zun-tempest-plugin is branchless.
> Zuul job definitions seem to fit better into branchless context.
> * It saves us the overhead to backport job definitions to stable branch.
> Sometime, missing a backport might lead to gate breakage and blocking
> development workflow.

Just a minor clarification: it's not always the case that branchless is
better.

Jobs which operate on repos that are branched are likely to be easier to
work with in the long run, as whatever configuration is specific to the
branch appears on that branch, instead of somewhere else.

Further, there shouldn't be a need to backport changes once the initial
jobs are set up.  In the future, when you branch master to stable/foo,
you'll automatically get a copy of the job that's appropriate for that
point in time, and you only need to update it if you're already updating
the software on that branch.  Older versions of jobs on stable branches
can continue to use their old configuration.

For jobs which should perform the same function on all branches, it is
easier to have those defined in branchless repos.  But in either case,
you can accomplish the same thing without moving jobs.  In a branched
repo, you can add a "branches: .*" matcher, and in a branchless repo,
you can add multiple variants for each branch.
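
As a hedged sketch (the job name, branch, and variable are illustrative),
the two techniques look roughly like this:

# In a branched repo, a single variant can be made to apply to every branch:
- job:
    name: zun-tempest-docker      # hypothetical job name
    branches: .*

# In a branchless repo, per-branch variants can be stacked instead:
- job:
    name: zun-tempest-docker
    branches: stable/queens
    vars:
      branch_specific_setting: queens   # hypothetical branch-specific override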

The new v3-native devstack jobs are branched, and are defined in the
devstack repo.  They define how to set up devstack for each branch.  But
the tempest jobs (which build on top of the devstack jobs), are not
branched (just like tempest), since they are designed to run the same
way on all branches.

I don't know enough about the situation to recommend one way or the
other for Zun.  But I do want to emphasize that the best answer depends
on the circumstances.

-Jim



Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread James E. Blair
Jeremy Stanley <fu...@yuggoth.org> writes:

> On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
> [...]
>> We're also talking about making a new kind of job which can continue to
>> run after it's "finished" so that you could use it to do something like
>> host a container registry that's used by other jobs running on the
>> change.  We don't have that feature yet, but if we did, would you prefer
>> to use that instead of the intermediate swift storage?
>
> If the subsequent jobs depending on that one get nodes allocated
> from the same provider, that could solve a lot of the potential
> network performance risks as well.

That's... tricky.  We're *also* looking at affinity for buildsets, and
I'm optimistic we'll end up with something there eventually, but that's
likely to be a more substantive change and probably won't happen as
soon.  I do agree it will be nice, especially for use cases like this.

-Jim



Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread James E. Blair
Bogdan Dobrelya  writes:

> * check out testing depends-on things,

(Zuul should have done this for you, but yes.)

> * build repos and all tripleo docker images from these repos,
> * upload into a swift container, with an automatic expiration set, the
> de-duplicated and compressed tarball created with something like:
>   # docker save $(docker images -q) | gzip -1 > all.tar.xz
> (I expect it will be something like a 2G file)
> * something similar for DLRN repos prolly, I'm not an expert for this part.
>
> Then those stored artifacts to be picked up by the next step in the
> graph, deploying undercloud and overcloud in the single step, like:
> * fetch the swift containers with repos and container images
> * docker load -i all.tar.xz
> * populate images into a local registry, as usual
> * something similar for the repos. Includes an offline yum update (we
> already have a compressed repo, right? profit!)
> * deploy UC
> * deploy OC, if a job wants it
>
> And if OC deployment brought into a separate step, we do not need
> local registries, just 'docker load -i all.tar.xz' issued for
> overcloud nodes should replace image prep workflows and registries,
> AFAICT. Not sure with the repos for that case.
>
> I wish to assist with the upstream infra swift setup for tripleo, and
> that plan, just need a blessing and more hands from tripleo CI squad
> ;)

That sounds about right (at least the Zuul parts :).

We're also talking about making a new kind of job which can continue to
run after it's "finished" so that you could use it to do something like
host a container registry that's used by other jobs running on the
change.  We don't have that feature yet, but if we did, would you prefer
to use that instead of the intermediate swift storage?

-Jim



Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread James E. Blair
Bogdan Dobrelya  writes:

> Added a few more patches [0], [1] by the discussion results. PTAL folks.
> Wrt remaining in the topic, I'd propose to give it a try and revert
> it, if it proved to be worse than better.
> Thank you for feedback!
>
> The next step could be reusing artifacts, like DLRN repos and
> containers built for patches and hosted undercloud, in the consequent
> pipelined jobs. But I'm not sure how to even approach that.
>
> [0] https://review.openstack.org/#/c/568536/
> [1] https://review.openstack.org/#/c/568543/

In order to use an artifact in a dependent job, you need to store it
somewhere and retrieve it.

In the parent job, I'd recommend storing the artifact on the log server
(in an "artifacts/" directory) next to the job's logs.  The log server
is essentially a time-limited artifact repository keyed on the zuul
build UUID.

Pass the URL to the child job using the zuul_return Ansible module.

Have the child job fetch it from the log server using the URL it gets.
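
A minimal sketch of that hand-off, with illustrative variable names rather
than an established interface: a task in the parent job's playbook returns
the location, and dependent jobs receive it as a variable.

- hosts: localhost
  tasks:
    - name: Tell child jobs where the artifact was published
      zuul_return:
        data:
          artifact_url: "{{ built_artifact_url }}"  # hypothetical fact set by an earlier upload task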

However, don't do that if the artifacts are very large -- more than a
few MB -- we'll end up running out of space quickly.

In that case, please volunteer some time to help the infra team set up a
swift container to store these artifacts.  We don't need to *run*
swift -- we have clouds with swift already.  We just need some help
setting up accounts, secrets, and Ansible roles to use it from Zuul.

-Jim



Re: [openstack-dev] Overriding project-templates in Zuul

2018-05-02 Thread James E. Blair
Joshua Hesketh  writes:

>>
>> I think in actuality, both operations would end up as intersections:
>>
>> ================  ========  =======  ======
>> Matcher           Template  Project  Result
>> ================  ========  =======  ======
>> files             AB        BC       B
>> irrelevant-files  AB        BC       B
>> ================  ========  =======  ======
>>
>> So with the "combine" method, it's always possible to further restrict
>> where the job runs, but never to expand it.
>
> Ignoring the 'files' above, in the example of 'irrelevant-files' haven't
> you just combined the results to expand when it runs? ie, A and C are /not/
> excluded and therefore the job will run when there are changes to A or C?
>
> I would expect the table to be something like:
> ================  ========  =======  ======
> Matcher           Template  Project  Result
> ================  ========  =======  ======
> files             AB        BC       B
> irrelevant-files  AB        BC       ABC
> ================  ========  =======  ======

Sure, we'll go with that.  :)

>> > So a job with "files: tests/" and "irrelevant-files: docs/" would do
>> > whatever it is that happens when you specify both.
>>
>> In this case, I'm pretty sure that would mean it reduces to just "files:
>> tests/", but I've never claimed to understand irrelevant-files and I
>> won't start now.
>
> Yes, I think you are right that this would reduce to that. However, what
> about the use case of:
>   files: tests/*
>   irrelevant-files: tests/docs/*
>
> I could see a use case where both of those would be helpful. Yes you could
> describe that as one regex but to the end user the above may be expected to
> work. Unless we make the two options mutually exclusive I feel like this is
> a feature we should support. (That said, it's likely a separate
> feature/functionality than what is being described now).

Today, that means: run the job if a file in tests/ is changed AND any
file outside of tests/docs/* is changed.  A change to tests/foo matches
the irrelevant-files matcher, and also the files matcher, so it runs.  A
change to tests/docs/foo matches the files matcher but not the
irrelevant-files matcher, so it doesn't run.  I really hope I got that
right.  Anyway, that is an example of something that's possible to
express with both.
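
In configuration terms, the job under discussion looks roughly like this
(the job name is assumed for illustration):

- job:
    name: my-test-job
    files:
      - ^tests/.*$
    irrelevant-files:
      - ^tests/docs/.*$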

I lumped the idea of pairing files/irrelevant-files in with Proposal 2
because I thought that being able to override them is key, and switching
from one to the other was part of that.  To be honest, I don't think
people should ever combine them, because it's hard enough to deal with
one.  But maybe that's too much of an implicit behavior change, and
instead we should separate that out and consider it as its own change
later.  I believe a user could still neutralize the matchers by saying
"files: .*" and "irrelevant-files: ^$" in the project-local variant.

Let's revise Proposal #2 to omit that:

Proposal 2: Files and irrelevant-files are treated as overwriteable
attributes and evaluated after branch-matching variants are combined.

* Files and irrelevant-files are overwritten, so the last value
  encountered when combining all the matching variants (looking only at
  branches) wins.
* It's possible to both reduce and expand the scope of jobs, but the
  user may need to manually copy values from a parent or other variant
  in order to do so.
* It will no longer be possible to alter a job attribute by adding a
  variant with only a files matcher -- in all cases files and
  irrelevant-files are used solely to determine whether the job is run,
  not to determine whether to apply a variant.

> Anyway, I feel like Proposal #2 is more how I would expect the system to
> behave.
>
> I can see an argument for combining the results (and feel like you could
> evaulate that at the end after combining the branch-matching variants) to
> give something like:
> ================  ========  =======  ======
> Matcher           Template  Project  Result
> ================  ========  =======  ======
> files             AB        BC       ABC
> irrelevant-files  AB        BC       ABC
> ================  ========  =======  ======
>
> However, that gives the user no way to remove a previously listed option.
> Thus overwriting may be the better solution (ie proposal #2 as written)
> unless we want to explore the option of allowing a syntax that says
> "extend" or "overwrite".
>
> Yours in hoping that made sense,
> Josh

As much as anything with irrelevant-files does, yes.  :)

-Jim



Re: [openstack-dev] Overriding project-templates in Zuul

2018-05-01 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> So a job with "files: tests/" and "irrelevant-files: docs/" would
> never run because it's impossible to satisfy both.

Jeremy pointed out in IRC that that's not what would happen.  So... let
me rephrase that:

> So a job with "files: tests/" and "irrelevant-files: docs/" would do 
> whatever it is that happens when you specify both.

In this case, I'm pretty sure that would mean it reduces to just "files:
tests/", but I've never claimed to understand irrelevant-files and I
won't start now.

Anyway, the main point is that Proposal 1 doesn't change the current
behavior, which is "everything must match", while Proposal 2 does, meaning
you only get one or the other.

-Jim



Re: [openstack-dev] Overriding project-templates in Zuul

2018-05-01 Thread James E. Blair
Joshua Hesketh  writes:

> I might be misunderstanding at which point a job is chosen to be ran and
> therefore when it's too late to dissuade it. However, if possible, would it
> make more sense for the project-local copy of a job to overwrite the
> supplied files and irrelevant-files? This would allow a project to run a
> job when it otherwise doesn't match.

Imagine that a project with branches has a job added via a template.

project-config/zuul.yaml@master:
- job:
    name: my-job
    vars: {jobvar: true}

- project-template:
    name: myjobs
    check:
      jobs:
        - my-job:
            vars: {templatevar: true}

project/zuul.yaml@master:
- project:
    templates:
      - myjobs
    check:
      jobs:
        - my-job:
            vars: {projectvar: true}

project/zuul.yaml@stable:
- project:
    templates:
      - myjobs
    check:
      jobs:
        - my-job:
            vars: {projectvar: true}

The resulting project config is:

- project:
    jobs:
      - my-job (branches: master; project-local job)
      - my-job (branches: master; project-template job)
      - my-job (branches: stable; project-local job)
      - my-job (branches: stable; project-template job)

When Zuul decides what to run, it goes through each of those in order,
evaluates their matchers, and pulls in parents and their variants for
each that matches.  So a change on the master branch would collect the
following variants to apply:

  my-job (branch: master; project-local job)
    my-job (job)
      base (job)
  my-job (branch: master; project-template job)
    my-job (job)
      base (job)

It would then apply them in this order:

  base (job)
  my-job (job)
  my-job (branch: master; project-template job)
  my-job (branch: master; project-local job)

Further restricting a project-local job with a "files:" matcher would
cause the project-local job not to match, but the project-template job
would still match, so the job gets run.

That's the situation we have today, which is what I meant by "it's too
late to dissuade it".

Regarding the suggestion to overwrite it, we would need to decide which
of the possible variants to overwrite.  Keep in mind that there are 3
independent matchers operating on all the variants (branches, files,
irrelevant-files).  Does a project-local job with a "files:" matcher
overwrite all of the variants?  Just the ones which match the same
branch?  That would probably be the most reasonable thing to do.

In my mind, that effectively splits the matchers into two categories:
branch matchers, and file matchers.  And they would behave differently.

Zuul could collect the variants as above, considering only the branch
matchers.  It could then apply all of the variants in the normal manner,
treating files and irrelevant-files as normal attributes which can be
overwritten.  Then, once it has composed the job to run based on all the
matching variants, it would only *then* evaluate the files matchers.  If
they don't match, then it would not run the job after all.

I think that's a very reasonable way to solve the issue as well, and I
believe it would match people's expectations.  Ultimately, the outcome
will be very similar to the proposal I made except that rather than
being combined, the matchers will be overwritten.  That means that if
you want to expand the set of irrelevant-files for a job, you would have
to copy the set from the parent.

There's one other aspect to consider -- it's possible to create a job
like this:

- job:
    name: doc-job

- job:
    name: doc-job
    files: docs/index.rst
    vars: {rebuild_index: true}

Which means: there's a normal docs job with no variables, but if
docs/index.rst is changed, set the rebuild_index variable to true.
Either approach (combine vs overwrite) eliminates the ability to do this
within a project or project-template stanza.  But the "combine" approach
still lets us do this at the job level.  We could still support this in
the overwrite approach, however, I think it might be simpler to work
with if we eliminated it as well and just always treated files and
irrelevant-files matchers as overwriteable attributes.  It would no
longer be possible to implement the above example, but I'm not sure it's
that useful anyway?

> What happens when something is in both files and irrelevant-files? If the
> project-template is trying to say A is in 'files', but the project-local
> says A is in 'irrelevant-files', should that overwrite it?

I think my statement and table below was erroneous:

>> This effectively causes the "files" and "irrelevant-files" attributes on
>> all of the project-local job definitions matching a given branch to be
>> combined.  The combination of multiple files matchers behaves as a
>> union, and irrelevant-files matchers as an intersection.
>>
>> ================  ========  =======  ======
>> Matcher           Template  Project  Result
>> ================  ========  =======  ======
>> files             AB        BC       ABC

[openstack-dev] Overriding project-templates in Zuul

2018-04-30 Thread James E. Blair
Hi,

If you've had difficulty overriding jobs in project-templates, please
read and provide feedback on this proposed change.

We tried to make the Zuul v3 configuration language as intuitive as
possible, and incorporated a lot that we learned from our years running
Zuul v2.  One thing that we didn't anticipate was how folks would end up
wanting to use a job in both project-templates *and* local project
stanzas.

Essentially, we had assumed that if you wanted to control how a job was
run, you would add it to a project stanza directly rather than use a
project-template.  It's easy to do so if you use one or the other.
However, it turns out there are lots of good reasons to use both.  For
example, in a project-template we may want to establish a recommended
way to run a job, or that a job should always be run with a set of
related jobs.  Yet a project may still want to indicate that job should
only run on certain changes in that specific repo.

To be very specific -- a very commonly expressed frustration is that a
project can't specify a "files" or "irrelevant-files" matcher to
override a job that appears in a project-template.

Reconciling those is difficult, largely because once Zuul decides to run
a job (for example, by a declaration in a project-template) it is
impossible to dissuade it from running that job by adding any extra
configuration to a project.  We need to tread carefully when fixing
this, because quite a number of related concepts could be affected.  For
instance, we need to preserve branch independence (a change to stop
running a job in one branch shouldn't affect others).  And we need to
preserve the ability for job variants to layer on to each other (a
project-local variant should still be able to alter a variant in a
project-template).

I propose that we remedy this by making a small change to how Zuul
determines that a job should run:

When a job appears multiple times on a project (for instance if it
appears in a project-template and also on the project itself), all of
the project-local variants which match the item's branch must also match
the item in order for the job to run.  In other words, if a job appears
in a project-template used by a project and on the project, then both
must match.

This effectively causes the "files" and "irrelevant-files" attributes on
all of the project-local job definitions matching a given branch to be
combined.  The combination of multiple files matchers behaves as a
union, and irrelevant-files matchers as an intersection.

================  ========  =======  ======
Matcher           Template  Project  Result
================  ========  =======  ======
files             AB        BC       ABC
irrelevant-files  AB        BC       B
================  ========  =======  ======
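
Sketched as configuration (all names and patterns are illustrative), the
first row of that table corresponds to something like the following, with
the template contributing patterns A and B and the project contributing B
and C:

- project-template:
    name: my-template
    check:
      jobs:
        - my-job:
            files:
              - ^docs/.*$          # "A"
              - ^api-ref/.*$       # "B"

- project:
    templates:
      - my-template
    check:
      jobs:
        - my-job:
            files:
              - ^api-ref/.*$       # "B"
              - ^releasenotes/.*$  # "C"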

I believe this will address the shortcoming identified above, but before
we get too far in implementing it, I'd like to ask folks to take a
moment and evaluate whether it will address the issues you've seen, or
if you foresee any problems which I haven't anticipated.

Thanks,

Jim



[openstack-dev] Zuul memory improvements

2018-04-30 Thread James E. Blair
Hi,

We recently made some changes to Zuul which you may want to know about
if you interact with a large number of projects.

Previously, each change to Zuul which updated Zuul's configuration
(e.g., a change to a project's zuul.yaml file) would consume a
significant amount of memory.  If we had too many of these in the queue
at a time, the server would run out of RAM.  To mitigate this, we asked
folks who regularly submit large numbers of configuration changes to
only submit a few at a time.

We have updated Zuul so it now caches much more of its configuration,
and the cost in memory of an additional configuration change is very
small.  An added bonus: they are computed more quickly as well.

Of course, there's still a cost to every change pushed up to Gerrit --
each one uses test nodes, for instance, so if you need to make a large
number of changes, please do consider the impact to the whole system and
other users.  However, there's no longer a need to severely restrict
configuration changes as a class -- consider them as any other change.

-Jim



Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-26 Thread James E. Blair
Clark Boylan  writes:

...

> I've since worked out a change that passes tempest using a global
> virtualenv installed devstack at
> https://review.openstack.org/#/c/558930/. This needs to be cleaned up
> so that we only check for and install the virtualenv(s) once and we
> need to handle mixed python2 and python3 environments better (so that
> you can run a python2 swift and python3 everything else).
>
> The other major issue we've run into is that nova file injection
> (which is tested by tempest) seems to require either libguestfs or
> nbd. libguestfs bindings for python aren't available on pypi and
> instead we get them from system packaging. This means if we want
> libguestfs support we have to enable system site packages when using
> virtualenvs. The alternative is to use nbd which apparently isn't
> preferred by nova and doesn't work under current devstack anyways.
>
> Why is this a problem? Well the new pip10 behavior that breaks
> devstack is pip10's refusal to remove distutils installed
> packages. Distro packages by and large are distutils packaged which
> means if you mix system packages and pip installed packages there is a
> good chance something will break (and it does break for current
> devstack). I'm not sure that using a virtualenv with system site
> packages enabled will sufficiently protect us from this case (but we
> should test it further). Also it feels wrong to enable system packages
> in a virtualenv if the entire point is avoiding system python
> packages.
>
> I'm not sure what the best option is here but if we can show that
> system site packages with virtualenvs is viable with pip10 and people
> want to move forward with devstack using a global virtualenv we can
> work to clean up this change and make it mergeable.

Now that pip 10 is here and we've got things relatively stable, it's
probably time to revisit this.

I think we should continue to explore the route that Clark has opened
up.  This isn't an emergency because all of the current devstack/pip10
conflicts have been resolved; however, there's no guarantee that
we won't add a new package with a conflict (which may be even more
difficult to resolve) or even that a future pip won't take an even
harder line.

I believe that installing all in one virtualenv has the advantage of
behaving more like what is expected of a project in the current python
ecosystem, while still retaining the co-installability testing that we
get with devstack.

What I'm a bit fuzzy on is how this impacts devstack plugins or related
applications.  However, it seems to me that we ought to be able to
essentially define the global venv as part of the API and then plugins
can participate in it.  Perhaps that won't be able to be automatic?
Maybe we'll need to set this up and then all devstack plugins will need
to change in order to use it?  If so, hopefully we'll be able to export
the functions needed to make that easy.

-Jim



Re: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native

2018-04-19 Thread James E. Blair
Andrea Frittoli  writes:

> Dear all,
>
> a quick update on the current status.
>
> Zuul has been fixed to use the correct branch for roles coming from
> different repositories [1].
> The backport of the devstack patches to support multinode jobs is almost
> complete. All stable/queens patches are merged, stable/pike patches are
> almost all approved and going through the gate [2].
>
> The two facts above mean that now the "devstack-tempest" base job defined
> in Tempest can be switched to use the "orchestrate-devstack" role and thus
> function as a base for multinode jobs [3].
> It also means that work on writing grenade jobs in zuulv3 native format can
> now be resumed [4].
>
> Kind regards
>
> Andrea Frittoli
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129217.html
> [2]
> https://review.openstack.org/#/q/topic:multinode_zuulv3+(status:open+OR+status:merged
> )
> [3] https://review.openstack.org/#/c/545724/
> [4]
> https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3

Also, shortly after this update, we made a change to make it slightly
easier for folks with devstack plugin jobs.  You should no longer need
to set the LIBS_FROM_GIT variable manually; instead, just specify the
project in `required-projects`, and the devstack job will set it
automatically.

See https://review.openstack.org/548331 for an example.
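
For a devstack plugin job, that ends up looking roughly like the sketch
below; the names are hypothetical, and the devstack_plugins variable is
assumed from the new devstack base jobs:

- job:
    name: my-plugin-tempest-job          # hypothetical job name
    parent: devstack-tempest
    required-projects:
      - openstack/my-devstack-plugin     # LIBS_FROM_GIT is now derived from this list
    vars:
      devstack_plugins:
        my-devstack-plugin: https://git.openstack.org/openstack/my-devstack-plugin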

-Jim



[openstack-dev] [all] Changes to Zuul role checkouts

2018-04-09 Thread James E. Blair
Hi,

We recently fixed a subtle but important bug related to how Zuul checks
out repositories it uses to find Ansible roles for jobs.

This may result in a behavior change, or even an error, for jobs which
use roles defined in projects with multiple branches.

Previously, Zuul would (with some exceptions) generally check out the
'master' branch of any repository which appeared in the 'roles:' stanza
in the job definition.  Now Zuul will follow its usual procedure of
trying to find the most appropriate branch to check out.  That means it
tries the project override-checkout branch first, then the job
override-checkout branch, then the branch of the change, and finally the
default branch of the project.

This should produce more predictable behavior which matches the
checkouts of all other projects involved in a job.

If you find that the wrong branch of a role is being checked out,
depending on circumstances, you may need to set a job or project
override-checkout value to force the correct one, or you may need to
backport a role to an older branch.
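
As a hedged example (the project and branch names are illustrative), a
per-project override-checkout in required-projects pins a roles repository
to a particular branch:

- job:
    name: my-job
    roles:
      - zuul: openstack/example-roles-repo   # hypothetical repo providing the roles
    required-projects:
      - name: openstack/example-roles-repo
        override-checkout: stable/queens     # force this checkout if the automatic choice is wrong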

If you encounter any problems related to this, please chat with us in
#openstack-infra.

Thanks,

Jim



Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread James E. Blair
Sean McGinnis  writes:

> On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
>> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
>> > Hi,
>> > 
>> > I've proposed a change to devstack which slightly alters the
>> > LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
>> > those using legacy devstack jobs (but you may want to be aware of it).
>> > It is more significant for new-style devstack jobs.
>> > 
>> > -snip-
>> > 
>> 
>> How does this apply to uses of devstack outside of zuul, such as in a
>> local development environment?
>> 
>> Doug
>> 
>
> This is my question too. I know in Cinder there are a lot of third party CI
> systems that do not use zuul. If they are impacted in any way by changes to
> devstack, we will need to make sure they are all aware of those changes (and
> have an alternative method for them to get the same functionality).

Neither local nor third-party CI use should be affected.  There's no
change in behavior based on current usage patterns.  Only the caveat
that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
non-existent package name), it will not automatically be caught.

-Jim



[openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-28 Thread James E. Blair
Hi,

I've proposed a change to devstack which slightly alters the
LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
those using legacy devstack jobs (but you may want to be aware of it).
It is more significant for new-style devstack jobs.

The change is at https://review.openstack.org/549252

In summary, when this change lands, new-style devstack jobs should no
longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
should be unaffected (but there is a change to the verification process
performed by devstack).


Currently devstack expects the contents of LIBS_FROM_GIT to be
exclusively a list of python packages which, obviously, should be
installed from git and not pypi.  It is used for two purposes:
determining whether an individual package should be installed from git,
and verifying that a package was installed from git.

In the old devstack-gate system, we prepared many of the common git
repos, whether they were used or not.  So LIBS_FROM_GIT was created to
indicate that in some cases devstack should ignore those repos and
install from pypi instead.  In other words, its original purpose was
purely as a method of selecting whether a devstack-gate prepared repo
should be used or ignored.

In Zuul v3, we have a good way to indicate whether a job is going to use
a repo or not -- add it to "required-projects".  Considering that, the
LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
automatically generated based on the contents of required-projects.
This means that job authors don't need to list every required repository
twice.
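
As a rough sketch (job and project names invented, and assuming the
shared devstack base job), a new-style job now only needs something
like:

  - job:
      name: example-functional-devstack
      parent: devstack
      required-projects:
        # prepared by Zuul and, with this change, automatically added
        # to LIBS_FROM_GIT by the devstack roles
        - openstack/barbican
        - openstack/python-barbicanclient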

However, a naïve implementation of that runs afoul of the second use of
LIBS_FROM_GIT -- verifying that python packages are installed from git.

This usage was added later, after a typographical error ("-" vs "_" in a
python package name) in a constraints file caused us not to install a
package from git.  Now devstack verifies that every package in
LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
tempest, and other required repos aren't handled that way.  So adding
them automatically to LIBS_FROM_GIT will cause devstack to fail.

My change modifies this verification to only check that packages
mentioned in LIBS_FROM_GIT that devstack tried to install were actually
installed.  I realize that, stated as such, this sounds tautological;
however, the check is still valid -- it would have caught the original
error that prompted the check in the first place.

What the revised check will no longer handle is a typo in a legacy job.
If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
However, I think the risk is worthwhile -- particularly since it is in
service of a system which eliminates the opportunity to introduce such
an error in the first place.

To see the result in action, take a look at this change which, in only a
few lines, implements what was a significantly more complex undertaking
in Zuul v2:

https://review.openstack.org/548331

Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
some reason, you require a new-style devstack job to manually set
LIBS_FROM_GIT, that will still work.  Simply define the variable as
normal, and the module which generates the devstack config will bypass
automatic generation if the variable is already set.
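
For example (job name invented, and assuming the devstack_localrc
variable that the new-style jobs use to populate local.conf), an
explicit setting looks like:

  - job:
      name: example-devstack-custom
      parent: devstack
      vars:
        devstack_localrc:
          # set explicitly, so automatic generation from
          # required-projects is bypassed for this job
          LIBS_FROM_GIT: oslo.messaging,python-openstackclient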

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Zuul project evolution

2018-03-15 Thread James E. Blair
Hi,

To date, Zuul has (perhaps rightly) often been seen as an
OpenStack-specific tool.  That's only natural since we created it
explicitly to solve problems we were having in scaling the testing of
OpenStack.  Nevertheless, it is useful far beyond OpenStack, and even
before v3, it has found adopters elsewhere.  As we talk to more people
about adopting it, though, it is becoming clear that the less experience
they have with OpenStack, the more likely they are to perceive that Zuul
isn't made for them.

At the same time, the OpenStack Foundation has identified a number of
strategic focus areas related to open infrastructure in which to invest.
CI/CD is one of these.  The OpenStack project infrastructure team, the
Zuul team, and the Foundation staff recently discussed these issues and
we feel that establishing Zuul as its own top-level project with the
support of the Foundation would benefit everyone.

It's too early in the process for me to say what all the implications
are, but here are some things I feel confident about:

* The folks supporting the Zuul running for OpenStack will continue to
  do so.  We love OpenStack and it's just way too fun running the
  world's most amazing public CI system to do anything else.

* Zuul will be independently promoted as a CI/CD tool.  We are
  establishing our own website and mailing lists to facilitate
  interacting with folks who aren't otherwise interested in OpenStack.
  You can expect to hear more about this over the coming months.

* We will remain just as open as we have been -- the "four opens" are
  intrinsic to what we do.

As a first step in this process, I have proposed a change[1] to remove
Zuul from the list of official OpenStack projects.  If you have any
questions, please don't hesitate to discuss them here, or privately
contact me or the Foundation staff.

-Jim

[1] https://review.openstack.org/552637

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Zuul flaw in json logging

2018-03-14 Thread James E. Blair
Hi,

If your project is using secrets in Zuul v3, please see the attached
message to determine whether they may have been disclosed.

OpenStack's Zuul is now running with the referenced fix in place, and we
have verified that the secrets used in the project-config repo (eg, to
upload logs and artifacts) were not subject to disclosure.

-Jim

--- Begin Message ---
Dear zuul operators

Simon Westphahl discovered a flaw in Zuul's JSON logging where no_log is
ignored for Ansible loops. Tasks within a loop may print decrypted
secrets in job-output.json despite setting no_log.
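
As a rough illustration (task and variable names invented), an affected
task would have this general shape:

  - name: Write credential files
    copy:
      content: "{{ item.content }}"
      dest: "{{ item.dest }}"
    # no_log hides the overall task result, but the per-item results of
    # the loop could still appear decrypted in job-output.json
    no_log: true
    loop: "{{ example_secret.credentials }}"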

This is fixed in https://review.openstack.org/552799 by Simon.

All operators are encouraged to take the following actions:

* Update your Zuul
* Check whether any jobs dealing with secrets also handle them in loops
using no_log. If not, you're safe
* If yes, check whether job-output.json contains the secrets
* If yes, change your secrets

Sorry for any inconvenience

Tobias


--
BMW Car IT GmbH
Tobias Henkel
Spezialist Entwicklung
Moosacher Straße 86
80809 München

Tel.:  ­+49 89 189311-48
Fax:  +49 89 189311-20
Mail: tobias.hen...@bmw.de
Web: http://www.bmw-carit.de
-
BMW Car IT GmbH
Geschäftsführer: Kai-Uwe Balszuweit
und Christian Salzmann
Sitz und Registergericht: München HRB 134810
-


___
Zuul-announce mailing list
zuul-annou...@lists.zuul-ci.org
http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-announce
--- End Message ---
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-03-02 Thread James E. Blair
Bogdan Dobrelya  writes:

> Hello.
> As Zuul documentation [0] explains, the names "check", "gate", and
> "post"  may be altered for more advanced pipelines. Is it doable to
> introduce, for particular openstack projects, multiple check
> stages/steps as check-1, check-2 and so on? And is it possible to make
> the consequent steps reusing environments from the previous steps
> finished with?
>
> Narrowing down to tripleo CI scope, the problem I'd want we to solve
> with this "virtual RFE", and using such multi-staged check pipelines,
> is reducing (ideally, de-duplicating) some of the common steps for
> existing CI jobs.

What you're describing sounds more like a job graph within a pipeline.
See: 
https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
for how to configure a job to run only after another job has completed.
There is also a facility to pass data between such jobs.
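
A rough sketch of such a graph in a project's pipeline definition (job
names invented):

  - project:
      check:
        jobs:
          - example-undercloud-install
          - example-overcloud-deploy:
              # only runs if example-undercloud-install succeeds; data
              # can be passed to it via zuul_return
              dependencies:
                - example-undercloud-install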

> For example, we may want to omit running all of the OVB or multinode
> (non-upgrade) jobs deploying overclouds, when the *undercloud* fails
> to install. This case makes even more sense, when undercloud is
> deployed from the same heat templates (aka Containerized Undercloud)
> and uses the same packages and containers images, as overcloud would
> do! Or, maybe, just stop the world, when tox failed at the step1 and
> do nothing more, as it makes very little sense to run anything else
> (IMO), if the patch can never be gated with a failed tox check
> anyway...
>
> What I propose here, is to think and discuss, and come up with an RFE,
> either for tripleo, or zuul, or both, of the following scenarios
> (examples are tripleo/RDO CI specific, though you can think of other
> use cases ofc!):
>
> case A. No deduplication, simple multi-staged check pipeline:
>
> * check-1: syntax only, lint/tox
> * check-2 : undercloud install with heat and containers
> * check-3 : undercloud install with heat and containers, build
> overcloud images (if not multinode job type), deploy
> overcloud... (repeats OVB jobs as is, basically)
>
> case B. Full de-duplication scenario (consequent steps re-use the
> previous steps results, building "on-top"):
>
> * check-1: syntax only, lint/tox
>> * check-2 : undercloud install, reuses nothing from the step1 prolly
> * check-3 : build overcloud images, if not multinode job type, extends
> stage 2
> * check-4:  deploy overcloud, extends stages 2/3
> * check-5: upgrade undercloud, extends stage 2
> * check-6: upgrade overcloud, extends stage 4
> (looking into future...)
> * check-7: deploy openshift/k8s on openstack and do e2e/conformance et
> al, extends either stage 4 or 6
>
> I believe even the simplest 'case A' would reduce the zuul queues for
> tripleo CI dramatically. What do you think folks? See also PTG tripleo
> CI notes [1].
>
> [0] https://docs.openstack.org/infra/zuul/user/concepts.html
> [1] https://etherpad.openstack.org/p/tripleo-ptg-ci

Creating a job graph to have one job use the results of the previous job
can make sense in a lot of cases.  It doesn't always save *time*
however.

It's worth noting that in OpenStack's Zuul, we have made an explicit
choice not to have long-running integration jobs depend on shorter pep8
or tox jobs, and that's because we value developer time more than CPU
time.  We would rather run all of the tests and return all of the
results so a developer can fix all of the errors as quickly as possible,
rather than forcing an iterative workflow where they have to fix all the
whitespace issues before the CI system will tell them which actual tests
broke.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Some new Zuul features

2018-02-20 Thread James E. Blair
Hi,

We've rolled out a few new Zuul features you may find useful.

Added a post-timeout job attribute
==================================

We refined the way timeouts are handled.  The "timeout" attribute of a
job (which defaults to 30 minutes but can be changed by any job) now
covers the time used in the pre-run and run phases of the job.  There is
now a separate "post-timeout" attribute, which also defaults to 30
minutes, that covers the "post-run" phase of the job.

This means you can adjust the timeout setting for a long running job,
and maintain a lower post-timeout setting so that if the job encounters
a problem in the post-run phase, we aren't waiting 3 hours for it to
time out.

You generally shouldn't need to adjust this value, unless you have a job
which performs a long artifact upload in its post-run phase.
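
For example (job name invented; values are in seconds), a job with a
long artifact upload might look like:

  - job:
      name: example-publish-artifacts
      # pre-run and run phases may take up to two hours
      timeout: 7200
      # but give the post-run (upload) phase only 40 minutes
      post-timeout: 2400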

Docs: 
https://docs.openstack.org/infra/zuul/user/config.html#attr-job.post-timeout

Added host and group vars
=========================

We added two new job attributes, "host-vars" and "group-vars" which
behave just like "vars" in that they define variables for use by
Ansible, but they apply to specific hosts or host groups respectively,
whereas "vars" applies to all hosts.

Docs: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host-vars

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native

2018-02-15 Thread James E. Blair
Andrea Frittoli  writes:

> Dear all,
>
> this is the first or a series of ~regular updates on the migration of
> Tempest / Grenade jobs to  Zuul v3 native.
>
> The QA team together with the infra team are working on providing the
> OpenStack community with a set of base Tempest / Grenade jobs that can be
> used as a basis to write new CI jobs / migrate existing legacy ones with a
> minimal effort and very little or no Ansible knowledge as a precondition.
>
> The effort is tracked in an etherpad [0]; I'm trying to keep the
> etherpad up to date but it may not always be a source of truth.

Thanks!

One other issue we noticed when using the new job is related to devstack
plugin ordering.  We're trying to design an interface to the job that
makes it easy to take the standard devstack and/or tempest job and add
in a plugin for a project.  This should greatly reduce the boilerplate
needed for new devstack jobs compared to Zuul v2.  However, our
interface for enabling plugins in Zuul is not ordered, but sometimes
ordering is important.

To address this, we've added a feature to devstack plugins which allows
them to express a dependency on other plugins.  Nothing but Zuul uses
this right now, though we may expand support for it in devstack in the
future.

If you maintain a devstack plugin which depends on another devstack
plugin, you can go ahead and indicate that with "plugin_requires" in the
settings file.  See [1] for more details.
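
As a sketch (plugin names illustrative, and assuming the
define_plugin/plugin_requires helpers described in [1]), the plugin's
devstack/settings file would gain lines such as:

  define_plugin example-plugin
  plugin_requires example-plugin barbican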

We also need to land a change to the role that writes the devstack
config in order to use this new feature; it's ready for review in [2].

-Jim

[1] https://docs.openstack.org/devstack/latest/plugins.html#plugin-interface
[2] https://review.openstack.org/522054

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute

2018-02-14 Thread James E. Blair
Andrea Frittoli  writes:

>> That has no irrelevant-files matches, and so matches everything.  If you
>> drop the use of that template, it will work as expected.  Or, if you can
>> say with some certainty that nova's irrelevant-files set is not
>> over-broad, you could move the irrelevant-files from nova's invocation
>> into the template, or even the job, and drop nova's individual
>> invocation.
>>
> I don't think projects in the integrated gate should remove themselves
> from the
> template, it really helps keeping consistency.
>
> The pattern I've seen is that most projects repeat the same list of
> irrelevant files
> over and over again in all of their integration tests, It would be handy in
> future to
> be able to set irrelevant-files on a template when it's consumed.
> So we could have shared irrelevant files defined in the template, and
> custom ones
> added by each project when consuming the template. I don't this is is
> possible today.
> Does it sound like a reasonable feature request?

A template may specify many jobs, so if we added something like that
feature, what would the project-pipeline template application's
irrelevant files apply to?  All of the jobs in the template?  We could
do that.  But it only takes one exception for this approach to fall
short, and while a lot of irrelevant-files stanzas for a project are
similar, I don't think having exceptions will be unusual.  The only way
to handle exceptions like that is to specify them with jobs, and we're
back in the same situation.

Also, combining irrelevant-files is very difficult to think about.
Because it's two inverse matches, combining them ends up being the
intersection, not the union.  So if we implemented this, I don't think
we should have any irrelevant-files in the template, only on template
application.

I still tend to think that irrelevant-files are almost invariably
project-specific, so we should avoid using them in templates and job
definitions (unless absolutely certain they are universally applicable),
and we should only define them in one place -- in the project-pipeline
definition for individual jobs.
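
In other words, something like the following shape, with the file
patterns here purely illustrative:

  - project:
      check:
        jobs:
          - openstack-tox-functional:
              # the project-specific skip list lives on the
              # project-pipeline invocation, not on the job or template
              irrelevant-files:
                - ^doc/source/.*$
                - ^releasenotes/.*$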

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-02-05 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> The reason is that, contrary to earlier replies in this thread, the
> /#/c/ version of the change URL does not work.

The /#/c/ form of Gerrit URLs should work now; if it doesn't, please let
me know.

I would still recommend (and personally plan to use) the other form --
it's very easy to end up with a URL in Gerrit which includes the
patchset, or even a set of patchset diffs.  Zuul will ignore this
information and select the latest patchset of the change as its
dependency.  If a user clicks on a URL with an embedded patchset though,
they may end up looking at an old version, and not the version that Zuul
will use.

At any rate, the /#/c/ form should work.  I'd recommend trying to trim
off anything past the change number, if you do use it, to avoid
ambiguity.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-02-01 Thread James E. Blair
Zane Bitter  writes:

> Yeah, it's definitely nice to have that flexibility. e.g. here is a
> patch that wouldn't merge for 3 months because the thing it was
> dependent on also got proposed as a backport:
>
> https://review.openstack.org/#/c/514761/1
>
> From an OpenStack perspective, it would be nice if a Gerrit ID implied
> a change from the same Gerrit instance as the current repo and the
> same branch as the current patch if it exists (otherwise any branch),
> and we could optionally use a URL instead to select a particular
> change.

Yeah, that's reasonable, and it is similar to things Zuul does in other
areas, but I think one of the thing we want to do with Depends-On is
consider that Zuul isn't the only audience.  It's there just as much for
the reviewers, and other folks.  So when it comes to Gerrit change ids,
I feel we had to constrain it to Gerrit's own behavior.  When you click
on one of those in Gerrit, it shows you all of the changes across all of
the repos and branches with that change-id.  So that result list is what
Zuul should work with.  Otherwise there's a discontinuity between what a
user sees when they click the hyperlink under the change-id and what
Zuul does.

Similarly, in the new system, you click the URL and you see what Zuul is
going to use.

And that leads into the reason we want to drop the old syntax: to make
it seamless for a GitHub user to know how to Depends-On a Gerrit change,
and vice versa, with neither requiring domain-specific knowledge about
the system.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Automatically generated Zuul changes (topic: zuulv3-projects)

2018-01-31 Thread James E. Blair
Hi,

Occasionally we will make changes to the Zuul configuration language.
Usually these changes will be backwards compatible, but whether they are
or not, we still want to move things forward.

Because Zuul's configuration is now spread across many repositories, it
may take many changes to do this.  I'm in the process of making one such
change now.

Zuul no longer requires the project name in the "project:" stanza for
in-repo configuration.  Removing it makes it easier to fork or rename a
project.
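
As a sketch (project and job names invented), an in-repo stanza that
used to look like the first form can now be written as the second:

  # old form
  - project:
      name: openstack/example-project
      check:
        jobs:
          - openstack-tox-pep8

  # new form -- the name is inferred from the repo the file lives in
  - project:
      check:
        jobs:
          - openstack-tox-pep8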

I am using a script to create and upload these changes.  Because changes
to Zuul's configuration use more resources, I, and the rest of the infra
team, are carefully monitoring this and pacing changes so as not to
overwhelm the system.  This is a limitation we'd like to address in the
future, but we have to live with now.

So if you see such a change to your project (the topic will be
"zuulv3-projects"), please observe the following:

* Go ahead and approve it as soon as possible.

* Don't be strict about backported change ids.  These changes are only
  to Zuul config files; the stable backport policy was not intended to
  apply to things like this.

* Don't create your own versions of these changes.  My script will
  eventually upload changes to all affected project-branches.  It's
  intentionally a slow process, and attempting to speed it up won't
  help.  But if there's something wrong with the change I propose, feel
  free to push an update to correct it.

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-01-27 Thread James E. Blair
Eric Fried  writes:

> For my part, I tried it [1] and it doesn't seem to have worked.  (The
> functional test failure is what the dep is supposed to have fixed.)  Did
> I do something wrong?
>
> [1] https://review.openstack.org/#/c/533821/12

If you examine the "items:" section in this file:

  
http://logs.openstack.org/21/533821/12/check/openstack-tox-functional/9066bb2/zuul-info/inventory.yaml

You will see that Zuul collected the following changes to test together:

526541,19
533808,6
521098,29
521187,29
535463,3
536624,3
536625,4
537648,5
533821,12

All on the master branch of nova.  The change you specified,
"https://review.openstack.org/#/c/536545/; is not present.

The reason is that, contrary to earlier replies in this thread, the
/#/c/ version of the change URL does not work.

I'm sure we can fix that, but for the moment, we'll need to use the
permalink form.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute

2018-01-26 Thread James E. Blair
Balázs Gibizer  writes:

> Hi,
>
> I'm getting more and more confused how the zuul job hierarchy works or
> is supposed to work.

Hi!

First, you (or others) may or may not have seen this already -- some of
it didn't exist when we first rolled out v3, and some of it has changed
-- but here are the relevant bits of the documentation that should help
explain what's going on.  It helps to understand freezing:

  https://docs.openstack.org/infra/zuul/user/config.html#job

and matching:

  https://docs.openstack.org/infra/zuul/user/config.html#matchers

> First there was a bug in nova that some functional tests are not
> triggered although the job (re-)definition in the nova part of the
> project-config should not prevent it to run [1].
>
> There we figured out that irrelevant-files parameter of the jobs are
> not something that can be overriden during re-definition or through
> parent-child relationship. The base job openstack-tox-functional has
> an irrelevant-files attribute that lists '^doc/.*$' as a path to be
> ignored [2]. In the other hand the nova part of the project-config
> tries to make this ignore less broad by adding only '^doc/source/.*$'
> . This does not work as we expected and the job did not run on changes
> that only affected ./doc/notification_samples path. We are fixing it
> by defining our own functional job in nova tree [4].
>
> [1] https://bugs.launchpad.net/nova/+bug/1742962
> [2]
> https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380
> [3]
> https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509
> [4] https://review.openstack.org/#/c/533210/

This is correct.  The issue here is that the irrelevant-files definition
on openstack-tox-functional is too broad.  We need to be *extremely*
careful applying matchers to jobs like that.  Generally I think that
irrelevant-files should be reserved for the project-pipeline invocations
only.  That's how they were effectively used in Zuul v2, after all.

Essentially, when someone puts an irrelevant-files section on a job like
that, they are saying "this job will never apply to these files, ever."
That's clearly not correct in this case.

So our solutions are to acknowledge that it's over-broad, and reduce or
eliminate the list in [2] and expand it elsewhere (as in [3]).  Or we
can say "we were generally correct, but nova is extra special so it
needs its own job".  If that's the choice, then I think [4] is a fine
solution.

> Then I started looking into other jobs to see if we made similar
> mistakes. I found two other examples in the nova related jobs where
> redefining the irrelevant-files of a job caused problems. In these
> examples nova tried to ignore more paths during the override than what
> was originally ignored in the job definition but that did not work
> [5][6].
>
> [5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full)

As noted in that bug, the tempest-full job is invoked on nova via this
stanza:

https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10674-L10688

As expected, that did not match.  There is a second invocation of
tempest-full on nova here:

http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n126

That has no irrelevant-files matches, and so matches everything.  If you
drop the use of that template, it will work as expected.  Or, if you can
say with some certainty that nova's irrelevant-files set is not
over-broad, you could move the irrelevant-files from nova's invocation
into the template, or even the job, and drop nova's individual
invocation.

> [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade)

The same template invokes this job as well.

> So far the problem seemed to be consistent (i.e. override does not
> work). But then I looked into neutron-grenade-multinode. That job is
> defined in neutron tree (like neutron-grenade) but nova also refers to
> it in nova section of the project-config with different
> irrelevant-files than their original definition. So I assumed that
> this will lead to similar problem than in case of neutron-grenade, but
> it doesn't.
>
> The neutron-grenade-multinode original definition [7] does not try to
> ignore the 'nova/tests' path but the nova side of the definition in
> the project config does try to ignore that path [8]. Interestingly a
> patch in nova that only changes under the path: nova/tests/ does not
> trigger the job [9]. So in this case overriding the irrelevant-files
> of a job works. (It seems that overriding neutron-tempest-linuxbridge
> irrelevant-files works too).
>
> [7]
> https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159
> [8]
> 

Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-01-26 Thread James E. Blair
Mathieu Gagné <mga...@calavera.ca> writes:

> On Thu, Jan 25, 2018 at 7:08 PM, James E. Blair <cor...@inaugust.com> wrote:
>> Mathieu Gagné <mga...@calavera.ca> writes:
>>
>>> On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec <openst...@nemebean.com> wrote:
>>>>
>>>>
>>>> I'm curious what this means as far as best practices for inter-patch
>>>> references.  In the past my understanding was that the change id was
>>>> preferred, both because if gerrit changed its URL format the change id 
>>>> links
>>>> would be updated appropriately, and also because change ids can be looked 
>>>> up
>>>> offline in git commit messages.  Would that still be the case for 
>>>> everything
>>>> except depends-on now?
>>
>> Yes, that's a down-side of URLs.  I personally think it's fine to keep
>> using change-ids for anything other than Depends-On, though in many of
>> those cases the commit sha may work as well.
>>
>>> That's my concern too. Also AFAIK, Change-Id is branch agnostic. This
>>> means you can more easily cherry-pick between branches without having
>>> to change the URL to match the new branch for your dependencies.
>>
>> Yes, there is a positive and negative aspect to this issue.
>>
>> On the one hand, for those times where it was convenient to say "depend
>> on this change in all its forms across all branches of all projects",
>> one must now add a URL for each.
>>
>> On the other hand, with URLs, it is now possible to indicate that a
>> change specifically depends on another change targeted to one branch, or
>> targeted to several branches.  Simply list each URL (or don't) as
>> appropriate.  That wasn't possible before -- it was all or none.
>>
>> -Jim
>>
>
>> The old syntax will continue to work for a while
>
> I still believe Change-Id should be supported and not removed as
> suggested. The use of URL assumes you have access to Gerrit to fetch
> more information about the change.
> This might not always be true or possible, especially when Gerrit is
> kept private and only the git repository is replicated publicly and
> you which to cherry-pick something (and its dependencies) from it.

Perhaps a method of automatically noting the dependencies in git notes
could help with that case?  Or maybe use a different way of
communicating that information -- even with change-ids, there's still a
lot of missing information in that scenario (for instance, which changes
still haven't merged).

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-01-25 Thread James E. Blair
Mathieu Gagné  writes:

> On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec  wrote:
>>
>>
>> I'm curious what this means as far as best practices for inter-patch
>> references.  In the past my understanding was that the change id was
>> preferred, both because if gerrit changed its URL format the change id links
>> would be updated appropriately, and also because change ids can be looked up
>> offline in git commit messages.  Would that still be the case for everything
>> except depends-on now?

Yes, that's a down-side of URLs.  I personally think it's fine to keep
using change-ids for anything other than Depends-On, though in many of
those cases the commit sha may work as well.

> That's my concern too. Also AFAIK, Change-Id is branch agnostic. This
> means you can more easily cherry-pick between branches without having
> to change the URL to match the new branch for your dependencies.

Yes, there is a positive and negative aspect to this issue.

On the one hand, for those times where it was convenient to say "depend
on this change in all its forms across all branches of all projects",
one must now add a URL for each.

On the other hand, with URLs, it is now possible to indicate that a
change specifically depends on another change targeted to one branch, or
targeted to several branches.  Simply list each URL (or don't) as
appropriate.  That wasn't possible before -- it was all or none.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-01-24 Thread James E. Blair
Hi,

We recently introduced a new URL-based syntax for Depends-On: footers
in commit messages:

  Depends-On: https://review.openstack.org/535851

The old syntax will continue to work for a while, but please begin using
the new syntax on new changes.

Why are we changing this?  Zuul has grown the ability to interact with
multiple backend systems (Gerrit, GitHub, and plain Git so far), and we
have extended the cross-repo-dependency feature to support multiple
systems.  But Gerrit is the only one that uses the change-id syntax.
URLs, on the other hand, are universal.

That means you can write, as in https://review.openstack.org/535541, a
commit message such as:

  Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17

Or in a Github pull request like
https://github.com/ansible/ansible/pull/20974, you can write:

  Depends-On: https://review.openstack.org/536159

But we're getting a bit ahead of ourselves here -- we're just getting
started with Gerrit <-> GitHub dependencies and we haven't worked
everything out yet.  While you can Depends-On any GitHub URL, you can't
add any project to required-projects yet, and we need to establish a
process to actually report on GitHub projects.  But cool things are
coming.

We will continue to support the Gerrit-specific syntax for a while,
probably for several months at least, so you don't need to update the
commit messages of changes that have accumulated precious +2s.  But do
please start using the new syntax now, so that we can age the old syntax
out.

There are a few differences in using the new syntax:

* Rather than copying the change-id from a commit message, you'll need
  to get the URL from Gerrit.  That means the dependent change already
  needs to be uploaded.  In some complex situations, this may mean that
  you need to amend an existing commit message to add in the URL later.

  If you're uploading both changes, Gerrit will output the URL when you
  run git-review, and you can copy it from there.  If you are looking at
  an existing change in Gerrit, you can copy the URL from the permalink
  at the top left of the page.  Where it says "Change 535855 - Needs
  ..." the change number itself is the permalink of the change.

* The new syntax points to a specific change on a specific branch.  This
  means if you depend on a change to multiple branches, or changes to
  multiple projects, you need to list each URL.  The old syntax looks
  for all changes with that ID, and depends on all of them.  This may
  mean some changes need multiple Depends-On footers; however, it also
  means that we can express dependencies in a more fine-grained manner.
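
  For instance, a change that depends on both the master and stable
  versions of another change would carry one footer per URL (the URLs
  here are only illustrative):

    Depends-On: https://review.openstack.org/535851
    Depends-On: https://review.openstack.org/535852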

Please start using the new syntax, and let us know in #openstack-infra
if you have any problems.  As new features related to GitHub support
become available, we'll announce them here.

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Merging feature/zuulv3 into master

2018-01-16 Thread James E. Blair
Hi,

On Thursday, January 18, 2018, we will merge the feature/zuulv3 branches
of both Zuul and Nodepool into master.

If you continuously deploy Zuul or Nodepool from master, you should make
sure you are prepared for this.

The current version of the single_node_ci pattern in puppet-openstackci
should, by default, install the latest released versions of Zuul and
Nodepool.  However, if you are running Zuul continuously deployed from a
version of puppet-openstackci which is not continuously deployed, or
using some other method, you may find that your system has automatically
been upgraded if you have not taken action before the branch is merged.

Regardless of how you deploy Zuul, if you find that your system has been
upgraded, simply re-install the most current releases of Zuul and
Nodepool, either from PyPI or from a git tag.  They are:

Nodepool: 0.5.0
Zuul: 2.6.0

Note that the final version of Zuul v3 has not been released yet.  We
hope to do so soon, but until we do, our recommendation is to continue
using the current releases.

Finally, if you find this message relevant, please subscribe to the new
zuul-annou...@lists.zuul-ci.org mailing list:

http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-announce

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread James E. Blair
Emmet Hikory  writes:

> Emilien Macchi wrote:
>
>> What we need to investigate:
>> - how do we deal milestones in stories and also how can we have a
>> dashboard with an overview per milestone (useful for PTL + TripleO
>> release managers).
>
>     While the storyboard API supports milestones, they don’t work very
> similarly to “milestones” in launchpad, so are probably confusing to
> adopt (and have no UI support).  Some folk use tags for this (perhaps
> with an automatic worklist that selects all the stories with the tag,
> for overview).

We're currently using tags like "zuulv3.0" and "zuulv3.1" to make this
automatic board:

https://storyboard.openstack.org/#!/board/53

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Zuul dashboard available

2017-12-11 Thread James E. Blair
Hi,

I'd like to draw your attention to a recently added feature in Zuul.

If you visit http://zuulv3.openstack.org/ you will note three tabs at
the top of the screen (you may need to shift-reload the page): Status,
Jobs, Builds.  The "Jobs" page shows you a list of all of the jobs in
the system, along with their descriptions.  And the "Builds" page lists
the most recent runs.  You can query by pipeline, job, and project.

This may be especially helpful in tracking down builds (and their logs)
for periodic or other post-merge jobs.

We have a lot of plans to expand on and enhance these pages in the
future, however, for the next short while, we probably won't be making
very substantial changes to them as we prepare for the actual Zuul v3.0
release.

We hope in the mean time, the additional functionality they provide will
prove useful.

Thanks very much to Tristan Cacqueray and Joshua Hesketh whose work has
made this possible.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 49

2017-12-06 Thread James E. Blair
Chris Dent  writes:

> The expansion of the Foundation was talked about at the summit in
> Sydney, but having something happen this quickly was a bit of a
> surprise, leading to some [questions in
> IRC](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-12-05.log.html#t2017-12-05T14:11:33)
> today. Jonathan Bryce showed up to help answer them.

I'd like to address a misconception in that IRC log:

2017-12-05T14:20:56   it does not take long to create a repo on our 
infrastructure
2017-12-05T14:21:14   though I guess without the name flattening, it 
would have been an "openstack" repository

While there's still some work to be done on flattening the namespace for
existing repos, I think it would be quite straightforward to create a
repository for a non-openstack project in gerrit with no prefix (or, of
course, a different prefix).  I don't think that would have been an
obstacle.

And regarding this:

2017-12-05T15:05:30   i'm not sure how much of infra's ci they could 
make use of given https://github.com/kata-containers/tests

I don't see an obstacle to using Zuul right now either -- even before
they have tests.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-11-22 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> "gong_ys2004" <gong_ys2...@aliyun.com> writes:
>
>> Hi, everyone
>> I am trying to migrate tacker's functional CI job into new zuul v3
>> framework, but it seems:
>> 1. the devstack plugin order is not the one I specified in the .zuull.yaml
>> https://review.openstack.org/#/c/516004/4/.zuul.yaml:I
>> have:  devstack_plugins:
>> heat: https://git.openstack.org/openstack/heat
>> networking-sfc:  https://git.openstack.org/openstack/networking-sfc
>> aodh: https://git.openstack.org/openstack/aodh
>> ceilometer: https://git.openstack.org/openstack/ceilometer
>> barbican: https://git.openstack.org/openstack/barbican
>> mistral: https://git.openstack.org/openstack/mistral
>> tacker: https://git.openstack.org/openstack/tacker
>> but the running order
>> seems: http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz:
>> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
>> I need barbican to start before tacker.
>
> [I changed the subject to replace the 'openstack' tag with 'devstack',
> which is what I assume was intended.]
>
>
> As Yatin Karel later notes, this is handled as a regular python
> dictionary which means we process the keys in an indeterminate order.
>
> I can think of a few ways we can address this:
>
...
> 3) Add dependency information to devstack plugins, but rather than
> having devstack resolve it, have the Ansible role which writes out the
> local.conf read that information and resolve the order.  This lets us
> keep the actual information in plugins so we don't have to continually
> update the role, but it lets us perform the processing in the role
> (which is in Python) when writing the config file.
...
> After considering all of those, I think I favor option 3, because we
> should be able to implement it without too much difficulty, it will
> improve things by providing a known and documented location for plugins
> to specify dependencies, and once it is in place, we can still implement
> option 1 later if we want, using the same declaration.

I discussed this with Dean and we agreed on something close to this
option, except that we would do it in such a way that devstack could
potentially make use of this in the future.  For starters, it will be
easy for devstack to error if someone adds plugins in the wrong order.
If someone feels like having a lot of fun, they could actually implement
a dependency resolver in devstack.

I have two patches which implement this idea:

https://review.openstack.org/521965
https://review.openstack.org/522054

Once those land, we'll need to add the appropriate lines to barbican and
tacker's devstack plugin settings files, then the job you're creating
should start those plugins in the right order automatically.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread James E. Blair
Monty Taylor  writes:

> * We use -W only if setup.cfg sets it
>
> * Installs dependencies via bindep for doc environment. Binary deps,
> such as graphviz, should be listed in bindep and marked with a 'doc'
> tag.
>
> * doc/requirements.txt is used for installation of python dependencies.
> Things like whereto or openstackdocstheme should go there.

Should we add this info to the infra manual?

Similar to this?

  https://docs.openstack.org/infra/manual/drivers.html#package-requirements

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removing internet access from unit test gates

2017-11-21 Thread James E. Blair
Jeremy Stanley  writes:

> On 2017-11-21 17:46:14 +0100 (+0100), Thomas Goirand wrote:
> [...]
>> Doing this kind of a patch at first on a few project's tox.ini,
>> absolutely! I might even start with Horizon and PBR (yes, there's a
>> problem there as well... which I haven't reported yet). Though
>> generalizing it to 300+ patches, I'm really not sure. Your thoughts?
>
> As Paul suggested we might be able to take advantage of the fact
> that we pull distro and Python packages from a mirror server which
> is identified in the build's Ansible variables, to disallow stateful
> egress except to that server but continue allowing stateful ingress
> from our control plane and whatever else gets access to the job
> nodes now.

If something like this is desirable, I think tox.ini may be the best
place for it, as it will cause local test runs to behave the same way as
in Zuul.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Planning for job execution outside the gate with Zuul v3

2017-11-20 Thread James E. Blair
David Moreau Simard  writes:

> The reason why I mention "outside the gate" is because one of the features
> of Zuul v3 is to dynamically construct its configuration by including
> different repositories.
> For example, the Zuul v3 from review.rdoproject.org can selectively include
> parts of git.openstack.org/openstack-infra/tripleo-ci [1] and it will load
> the configuration found there for jobs, nodesets, projects, etc.
>
> This opens a great deal of opportunities for sharing content or
> centralizing the different playbooks, roles and job parameters in one
> single repository rather than spread across different repositories across
> the production chain.
> If we do things right, this could give us the ability to run the same jobs
> (which can be customized with parameters depending on the environment,
> release, scenario, etc.) from the upstream gate down to
> review.rdoproject.org and the later productization steps.

Thanks for starting this thread!  I think it's a great idea.

I'd just like to mention here for folks who may not be aware --
re-usability of this kind is an explicit design goal of Zuul v3.  We're
hoping that the zuul-jobs repo in particular can be used by any Zuul in
the world to run the same job content.  In fact, the core review team
on that repo has already grown to include contributors from outside the
OpenStack ecosystem altogether.

The openstack-zuul-jobs and other individual repos may also provide job
content that's usable by folks operating Zuuls related to OpenStack
(e.g., third-party CI operators and distributors, as is the case here).

And finally, at the very least, if jobs themselves aren't re-usable, we
hope that the Ansible roles they use will be re-usable.  It is for this
reason that we have focused heavily on role development with simplified
playbooks in the common Zuul v3 jobs so far.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-07 Thread James E. Blair
Erik McCormick <emccorm...@cirrusseven.com> writes:

> On Tue, Nov 7, 2017 at 6:45 PM, James E. Blair <cor...@inaugust.com> wrote:
>> Erik McCormick <emccorm...@cirrusseven.com> writes:
>>
>>> The concept, in general, is to create a new set of cores from these
>>> groups, and use 3rd party CI to validate patches. There are lots of
>>> details to be worked out yet, but our amazing UC (User Committee) will
>>> begin working out the details.
>>
>> I regret that due to a conflict I was unable to attend this session.
>> Can you elaborate on why third-party CI would be necessary for this,
>> considering that upstream CI already exists on all active branches?
>
> Lack of infra resources, people are already maintaining their own
> testing for old releases, and distribution of work across
> organizations I think were the chief reasons. Someone else feel free
> to chime in and expand on it.

Which resources are lacking?  I wasn't made aware of a shortage of
upstream CI resources affecting stable branch work, but if there is, I'm
sure we can address it -- this is a very important effort.

The upstream CI system is also a collaboratively maintained system with
folks from many organizations participating in it.  Indeed we're now
distributing its maintenance and operation into projects themselves.
It seems like an ideal place for folks from different organizations to
collaborate.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-07 Thread James E. Blair
Erik McCormick  writes:

> The concept, in general, is to create a new set of cores from these
> groups, and use 3rd party CI to validate patches. There are lots of
> details to be worked out yet, but our amazing UC (User Committee) will
> begin working out the details.

I regret that due to a conflict I was unable to attend this session.
Can you elaborate on why third-party CI would be necessary for this,
considering that upstream CI already exists on all active branches?

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all][stable] Zuul v3 changes and stable branches

2017-10-30 Thread James E. Blair
Boden Russell <boden...@gmail.com> writes:

> On 10/27/17 6:35 PM, James E. Blair wrote:
>> 
>> We're rolling out a new version of Zuul that corrects the issues, and
>> the migration doc has been updated.  The main things to know are:
>> 
>> * If your project has stable branches, we recommend backporting the Zuul
>>   config along with all the playbooks and roles that are in your repo to
>>   the stable branches.
>
> Does this apply to projects that don't have an in-repo config in master
> and only use shared artifacts?
>
> For example, our project's (master) pipeline is in project-config's
> projects.yaml and only uses shared templates/jobs/playbooks. Is the
> expectation that we copy this pipeline to an in-repo zuul.yaml for each
> stable branch as well as the "shared" playbooks?

No it doesn't apply -- if your project's Zuul config is entirely in
project-config, then this doesn't apply to you.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-10-30 Thread James E. Blair
"gong_ys2004"  writes:

> Hi, everyone
> I am trying to migrate tacker's functional CI job into new zuul v3 framework, 
> but it seems:
> 1. the devstack plugin order is not the one I specified in the .zuull.yaml
> https://review.openstack.org/#/c/516004/4/.zuul.yaml:I have:  
> devstack_plugins:
> heat: https://git.openstack.org/openstack/heat
> networking-sfc:  https://git.openstack.org/openstack/networking-sfc
> aodh: https://git.openstack.org/openstack/aodh
> ceilometer: https://git.openstack.org/openstack/ceilometer
> barbican: https://git.openstack.org/openstack/barbican
> mistral: https://git.openstack.org/openstack/mistral
> tacker: https://git.openstack.org/openstack/tacker
> but the running order 
> seems: http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz:
> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
> I need barbican to start before tacker.

[I changed the subject to replace the 'openstack' tag with 'devstack',
which is what I assume was intended.]


As Yatin Karel later notes, this is handled as a regular python
dictionary which means we process the keys in an indeterminate order.

I can think of a few ways we can address this:

1) Add dependency information to devstack plugins so that devstack
itself is able to work out the correct order.  This is perhaps the ideal
solution from a user experience perspective, but perhaps the most
difficult.

2) Add dependency information to the Ansible role so that it resolves
the order on its own.  This is attractive because it solves a problem
that is unique to this Ansible role entirely within the role.  However,
it means that new plugins would need to also update this role which is
in devstack itself, which partially defeats the purpose of plugins.

3) Add dependency information to devstack plugins, but rather than
having devstack resolve it, have the Ansible role which writes out the
local.conf read that information and resolve the order.  This lets us
keep the actual information in plugins so we don't have to continually
update the role, but it lets us perform the processing in the role
(which is in Python) when writing the config file.

4) Alter Zuul's handling of this to an ordered dictionary.  Then when
you specify a series of plugins, they would be processed in that order.
However, I'm not sure this works very well with Zuul job inheritance.
Imagine that a parent job enabled the barbican plugin, and a child job
enabled ceilometer but needed ceilometer to start before barbican.  There
would be no way to express that.

5) Change the definition of the dictionary to encode ordering
information.  Currently the dictionary schema is simply the name of the
plugin as the key, and either the contents of the "enable_plugin" line,
or "null" if the plugin should be disabled.  We could alter it to be:

  devstack_plugins:
barbican:
  enabled: true
  url: https://git.openstack.org/openstack/barbican
  branch: testing
tacker:
  enabled: true
  url: https://git.openstack.org/openstack/tacker
  requires:
barbican: true

This option is very flexible, but makes using the jobs somewhat more
difficult because of the complexity of the data structure.

After considering all of those, I think I favor option 3, because we
should be able to implement it without too much difficulty, it will
improve things by providing a known and documented location for plugins
to specify dependencies, and once it is in place, we can still implement
option 1 later if we want, using the same declaration.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][all][stable] Zuul v3 changes and stable branches

2017-10-27 Thread James E. Blair
Hi,

I'd like to draw your attention to some things that we're changing in
Zuul v3 that affect stable branches.

We found a couple of interrelated issues where Zuul's behavior did not
match our expectations, and we also had some incorrect advice in the
migration doc.

We're rolling out a new version of Zuul that corrects the issues, and
the migration doc has been updated.  The main things to know are:

* If your project has stable branches, we recommend backporting the Zuul
  config along with all the playbooks and roles that are in your repo to
  the stable branches.

That's because:

* Generally speaking, jobs defined in a branch of your project should
  only apply to changes to that branch.  So the copy of a job defined in
  'master' should be used for changes to 'master'.  And the copy defined
  in 'stable/pike' should be used for changes to 'stable/pike'.

* Backporting this now is a bit of extra work that needs to happen as
  part of this initial transition.  But going forward, the workflow will
  be *much* simpler.  The next stable branch will begin its life with
  all the content from master already in place, and the two branches can
  simply naturally diverge as you would expect.

The Zuul v3 migration docs have been updated to reflect this:

  https://docs.openstack.org/infra/manual/zuulv3.html#stable-branches

If something about the new arrangement isn't working out for you, there
are other options.  Let us know and we can work through them.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gertty dashboards

2017-10-23 Thread James E. Blair
Sławek Kapłoński  writes:

>>> "(NOT owner:self) status:open label:Code-Review-0,self
>>> label:Workflow=0 (project:openstack/neutron OR
>>> project:openstack/neutron-lib OR project:openstack/shade)
>>> branch:master”
>> 
>> If you haven't already, make sure you are subscribed to those three
>> projects in Gertty.  That will cause it to keep up with all of the
>> changes in those projects.  It will also enable a per-project view
>> of open unreviewed changes for each project, very similar to your
>> dashboard.
>
> Yes, I am subscribed to all those projects and all is synced.
> I’m also using those „project views” there but I was thinking if it’s
> possible to have one view for all interesting projects for me :)

Oh, if you were already subscribed to those, then I don't know why
Gertty would be missing changes from that dashboard.  If you are able
to determine a commonality between the changes that are missing, that
may be useful.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gertty dashboards

2017-10-23 Thread James E. Blair
Jeremy Stanley  writes:

> It's software, so probably. That said, I expect Gertty would need
> some sort of first-class dashboard support where it knows (beyond
> simple keybindings for arbitrary queries, maybe similar to how it
> treats owner:self changes?) that you want all changes for dashboards
> pulled into the local DB and kept in sync so that it has them
> available for offline operation... not sure what impact that may
> have on performance either.
>
> And of course you'd need to convince its author this is a worthwhile
> behavior change, since there may be good reasons it was designed to
> work this way from the outset. I'll bring this thread to Jim's
> attention once he's around today; he will doubtless have more
> accurate details and concrete suggestions than I.

We could perhaps parse the dashboard queries for project names, and
either automatically subscribe to those projects, or silently add them
to the list of projects to sync.

Or we could emit a warning: "dashboard references unsubscribed project:
...".

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gertty dashboards

2017-10-23 Thread James E. Blair
SƂawek KapƂoƄski  writes:

> Hello,
>
> Recently I started using Gertty and I think it’s a good tool.
> I have one problem which I don’t know how to solve. On the web-based
> Gerrit page (review.openstack.org) I have defined a page with my own query:
>
> "(NOT owner:self) status:open label:Code-Review-0,self
> label:Workflow=0 (project:openstack/neutron OR
> project:openstack/neutron-lib OR project:openstack/shade)
> branch:master”

If you haven't already, make sure you are subscribed to those three
projects in Gertty.  That will cause it to keep up with all of the
changes in those projects.  It will also enable a per-project view
of open unreviewed changes for each project, very similar to your
dashboard.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][all] Decommissioning Zuul v2

2017-10-18 Thread James E. Blair
Hi,

At the infra meeting[1] yesterday, there was general agreement that we have
likely passed the point at which we would consider a rollback to Zuul
v2, and will iterate forward on any further issues (as we have been
doing since Sunday).

We plan on freezing the Zuul v2 config[2] now, and keep it and the Zuul
v2 servers around until next week, at which point we will tag
project-config (for easy historical reference), start deleting the Zuul
v2 content, and delete the servers.

If this sounds premature, please let us know.

Thanks,

Jim

[1] 
http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-10-17-19.00.html
[2] zuul/layout.yaml and jenkins/ in the project-config repository

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][all] Stable branch jobs in Zuul v3

2017-10-16 Thread James E. Blair
Hi,

If you have started moving Zuul configuration into your project repos,
please note the following:

  You will probably need to backport at least the "project" stanza to
  stable branches.

Zuul's configuration is global; that includes configuration loaded from
all branches of a project.  So you don't need to copy job definitions
from master to stable (but you can -- if you do, those become branch
variants and can be used to alter the behavior of that job on the stable
branch).

And when projects are defined in special repos we call "config
projects", such as the innovatively named "project-config", the jobs
added to those project-pipelines run on all branches (unless otherwise
specified).  That's why when we put the automatically converted project
definitions in project-config, those jobs generally run on all the
branches.

However, when a project definition appears in-repo, it is generally
assumed that those jobs should only run on that branch.  So the project
definition in master indicates which jobs should run on changes to
master, and the definition in stable/ocata says which jobs run on
changes to ocata.

This means a little more work up-front as you move project definitions
in-repo, but in the long run, it should be a very intuitive workflow.
Imagine when we branch stable/queens: the project definition that
currently appears in master will have a copy in stable/queens.  At that
point, further changes to the jobs which run on master will no longer
affect what jobs run on queens changes.  That's the workflow the system
is designed to make easy.

We have updated the migration documentation in the infra-manual to
mention this:

https://docs.openstack.org/infra/manual/zuulv3.html#stable-branches

Please let me know if you have any questions.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Zuul v3 rollout, the sequel

2017-10-10 Thread James E. Blair
Gary Kotton  writes:

> Hi,
> At the moment the neutron, neutron-lib and many of the decomposed projects 
> are still failing with the v3. Does this mean that we are broken from the 
> 11th?
> For the decomposed projects we have a work around to help address this in the 
> short term – need to increase timeout and need a flag from Zuul3 that is not 
> part of Jenkins - 
> https://github.com/openstack/vmware-nsx/blob/master/tools/tox_install_project.sh#L37
>  (can we have ZUUL3_CLONER?)
> Thanks
> Gary

In that script $ZUUL_CLONER points to /usr/zuul-env/bin/zuul-cloner
which will exist for auto-converted legacy jobs in v3.  If such a job is
not working, let's take a look at a log and dig into it.

That will *not* be present for non-legacy jobs.  I think once the dust
settles, we can work on using some much nicer facilities that v3
provides for doing that sort of thing in new native v3 jobs.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Zuul v3 migration update

2017-09-25 Thread James E. Blair
Hi,

We started to migrate to Zuul v3 today, but ran into some problems which
delayed us sufficiently that we're going to suspend activity until
tomorrow.

In the mean time, please consider the project-config repository frozen
to all changes except those which are necessary for the migration.  We
should be able to lift this freeze as soon as we finish the migration.

If you haven't yet, please see [1] for information about the transition.

[1] https://docs.openstack.org/infra/manual/zuulv3.html

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures][infra] Release of openstack-infra/zuul-sphinx failed

2017-09-25 Thread James E. Blair
Jeremy Stanley  writes:

> On 2017-09-25 10:51:42 -0400 (-0400), Doug Hellmann wrote:
>> I assume someone has already noticed that this release failed, but I'm
>> forwarding it to the dev list just in case.
>
> Thanks for the heads up! And yes, it's currently under discussion in
> #openstack-infra but we're still trying to nail down the root cause.

We recently removed several Zuul mergers to reallocate them to Zuul v3
in preparation for the cutover.  Unfortunately, the persistent
workspaces on the release worker have git repos which may have
originally been cloned from those mergers, and so when our release jobs
update those repos, they may consult those mergers as they are the
'origin' remotes.

This is increasingly likely to affect projects created more recently, as
older projects would have been cloned when we had fewer mergers, and the
mergers we removed were the ones added last.

This is something that, ideally, we would have addressed either in
zuul-cloner or in the release jobs.  However, we're about to replace
them all, so it's probably not worth worrying about.

In the mean time, I removed all of the workspaces on the release worker,
so all projects will clone anew from the existing set of mergers.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Gertty 1.5.0

2017-07-30 Thread James E. Blair
Announcing Gertty 1.5.0
=======================

Gertty is a console-based interface to the Gerrit Code Review system.

Gertty is designed to support a workflow similar to reading network
news or mail.  It syncs information from Gerrit to local storage to
support disconnected operation and easy manipulation of local git
repos.  It is fast and efficient at dealing with large numbers of
changes and projects.

The full README may be found here:

  https://git.openstack.org/cgit/openstack/gertty/tree/README.rst

Changes since 1.4.0:


* Added support for sorting dashboards and change lists by multiple
  columns

* Added a Unicode graphic indication of the size of changes in the
  change list

* Added the number of changes to the status bar in the change list

* Added a trailing whitespace indication (which can be customized or
  ignored in a custom palette)

* Several bug fixes related to:
  * Negative topic search results
  * Crashes on loading changes with long review messages
  * Avoiding spurious sync failures on conflict queries
  * Errors after encountering a deleted project
  * Better detection of some offline errors
  * Fetching missing refs
  * Gerrit projects created since Gertty started
  * Re-syncing individual changes after a sync failure

Thanks to the following people whose changes are included in this
release:

  Jim Rollenhagen
  Kevin Benton
  Masayuki Igawa
  Matthew Thode

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug triage experimentation

2017-07-11 Thread James E. Blair
Sean Dague  writes:

> On 07/05/2017 03:23 PM, Emilien Macchi wrote:
> 
>> 
>> I also believe that some of the scripts could be transformed into
>> native features of Storyboard where bugs could be auto-triaged
>> periodically without human intervention.
>> Maybe it would convince more OpenStack projects to leave Launchpad and
>> adopt Storyboard?
>> I would certainly one of those and propose such a change for TripleO &
>> related projects.
>
> Maybe... my concern there is that workflow encoded into trackers is
> pretty static, and it's hard to evolve, because it impacts all users of
> that platform. Where as a script that processes bugs externally can
> adapt really quickly based on what's working / not working with a
> particular team. There is no 1 right way to handle bugs, it's just about
> making every bug handling team the most effective that they can be.
> Which means I assume that different teams would find different parts of
> this useful, and other parts things they wouldn't want to use at all.
> That's why I tried to make every "processing unit" its own CLI.
>
> Ideally storyboard would just be a lot more receptive to these kinds of
> things, by emitting a more native event stream, and having really good
> tag support (preferably actually project scoped tags, so setting it on
> the nova task doesn't impact the neutron tasks on the same story, as a
> for instance) so the hack we need to do on LP isn't needed. But,
> actually, beyond that, keeping the processing logic team specific is a
> good thing. It's much like the fact that we've largely done gerrit
> review dashboards client side, because they are fast to iterate on, than
> server side.

I agree.  I think being able to add things to Storyboard is great, and
as we've been using it more, we've done some of that.  But we've also
run into places where we found that we needed Storyboard to do some
things that were ultimately project-specific workflows.  So I think long
term we're going to have both things -- adding features that make sense
globally as well as ones that facilitate local configuration and
workflows.

As an example, the "board" feature on storyboard can be really useful,
but we wanted to automate some of the movement between lanes.  Lanes are
arbitrary.  Rather than writing a new processing language to describe
that and incorporating that into Storyboard, we wrote a script to manage
one specific board using the Storyboard API.

The board is here: https://storyboard.openstack.org/#!/board/41

The script is here: 
http://git.openstack.org/cgit/openstack-infra/zuul/tree/tools/update-storyboard.py?h=feature/zuulv3

(Basically, that script automatically moves tasks between lanes based on
status according to the map defined on line 65, while still allowing
folks to manually move tasks between certain classes of lanes -- so a
task marked as 'todo' can be in either the 'New', 'Backlog', or 'Todo'
lanes.)
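
To make that behavior concrete, the lane-selection logic amounts to
roughly the following (a simplified sketch in Python; the status names
beyond 'todo' and the lane map are illustrative, not the actual map from
the script):

  # Map each task status to the lanes a task with that status may occupy.
  # The first lane in each list is the default destination.
  LANES_FOR_STATUS = {
      'todo': ['New', 'Backlog', 'Todo'],
      'inprogress': ['In Progress'],
      'review': ['In Review'],
      'merged': ['Done'],
  }

  def target_lane(status, current_lane):
      """Return the lane a task should move to, or None to leave it alone."""
      allowed = LANES_FOR_STATUS.get(status, [])
      if current_lane in allowed:
          # Manual placement within the allowed class of lanes is respected.
          return None
      # Otherwise move the task to the default lane for its status.
      return allowed[0] if allowed else None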

I'm imagining a future where we have lots of scripts like that (or maybe
a few framework scripts like Sean's, with configuration), and we run
those scripts in Infra but projects are responsible for their own
configuration.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-06-28 Thread James E. Blair
Thierry Carrez  writes:

> Removing the root cause would be a more radical move: stop offering
> hosting to non-OpenStack projects on OpenStack infrastructure
> altogether. We originally did that for a reason, though. The benefits of
> offering that service are:
>
> 1- it lets us set up code repositories and testing infrastructure before
> a project applies to be an official OpenStack project.
>
> 2- it lets us host things that are not openstack but which we work on
> (like abandoned Python libraries or GPL-licensed things) in a familiar
> environment
>
> 3- it spreads "the openstack way" (Gerrit, Zuul) beyond openstack itself

I think this omits what I consider the underlying reason for why we did
it:

It helps us build a community around OpenStack.

Early on we had so many people telling us that we needed to support
"ecosystem" projects better.  That was the word they used at the time.
Many of us said "hey, you're free to use github" and they told us that
wasn't enough.

We eventually got the message and invited them in, and it surpassed our
expectations and I think surprised even the most optimistic of us.  We
ended up in a place where anyone with an OpenStack related idea can try
it out and collaborate frictionlessly with everyone else in the
OpenStack community on it, and in doing so, become recognized in the
community for that.  The ability for someone to build something on top
of OpenStack as part of the OpenStack community has been empowering.

I confess to being a skeptic and a convert.  I wasn't thrilled about the
unbounded additional responsibility when we started this, but now that
we're here, I think it's one of the best things about the project and I
would hate to cleave our community by ending it.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Status of Zuul v3

2017-06-14 Thread James E. Blair
Greetings!

This periodic update is primarily intended as a way to keep
contributors to the OpenStack community apprised of Zuul v3 project
status, including future changes and milestones on our way to use in
production. Additionally, the numerous existing and future users of
Zuul outside of the OpenStack community may find this update useful as
a way to track Zuul v3 development status.

If "changes are coming in the land of Zuul" is new news to you, please
read the section "About Zuul and Zuul v3" towards the end of this
email.

== Zuul v3 project status and updates ==

The biggest recent development is that basic support for GitHub has
merged!  Thanks to Jan, Tobias, Jonathan, Jamie, Jesse, and everyone
else that helped with that years-long effort!  We're still working to
achieve feature parity (notably, cross-repo dependency support hasn't
been implemented yet), but basic operations work and we have a good base
to start from.

We've also landed support for bubblewrap, so that untrusted job content
can run in a restricted environment.  This is a big improvement for
executor security.  Thanks to Clint and others who helped with this!

We merged support for live-streaming interleaved ansible logs and
console logs from all of the hosts in a job.  The streaming protocol is
compatible with finger, so you can easily request the log for a job by
running "finger UUID@executor".  That's handy for using unix tools to
deal with the output (think grep, sed, awk, etc).  To make this
accessible over the web, we are working on a websocket based console
streamer, which uses the finger-compatible endpoints on the backend.
When we're done, we'll have a nice web frontend for easily viewing
console logs linked to from the status page, and finger URLs for users
who want to view or process their logs from a unix shell.  Thanks to
David and Monty for work on this!
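
To give a sense of how simple that protocol is, a minimal finger-style
client fits in a few lines of Python (a sketch only -- the host, port,
and build UUID below are placeholders, not Zuul-specific values):

  import socket

  def stream_console(host, port, build_uuid):
      # A finger query is just the request followed by CRLF; the server
      # then streams its response until it closes the connection.
      with socket.create_connection((host, port)) as sock:
          sock.sendall(build_uuid.encode('ascii') + b'\r\n')
          while True:
              data = sock.recv(4096)
              if not data:
                  break
              print(data.decode('utf-8', 'replace'), end='')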

We've created some new repositories to hold Zuul jobs and the Ansible
roles that they use.  We're going to try something new here -- we want
to create a standard library of jobs that any Zuul installation (not
just those related to OpenStack) can use.  Flexibility and local
customization of jobs is very important in Zuul v3, but with job
inheritance and Ansible roles, we have two very useful methods of
composition that we can use to share job content so that not everyone
has to reinvent the wheel.  These are the repos we've created and how we
expect to use them:

  openstack-infra/zuul-jobs

This is what we're calling the "standard library".  We're going to
put any jobs which we think are not inherently OpenStack-specific.
For example, jobs to run python unit tests, java builds, go tests,
autoconf/makefile based projects, etc.

  openstack-infra/openstack-zuul-jobs

This is where we will put OpenStack-specific jobs (or
OpenStack-specific variants of standard library jobs).

In the near term, we're going to start populating these repos with what
we need for OpenStack's Zuul, and will probably move things around quite
a bit as we figure out where they should go.  We are also working on a
Sphinx extension (in the openstack-infra/zuul-sphinx repo) to
automatically document all of the jobs and roles in these repos.  We
should have self-documenting jobs with published documentation right
from the start.  Thanks to Paul for his help on this!

Also thanks to Paul for setting up OpenStack's production instance of
Zuul v3 on the zuulv3.openstack.org server and our first executor at
ze01.openstack.org.  That's running now, and we're currently working
through some things that we deferred from setting up our dev instance,
notably log publishing.

With the approval of the nodepool drivers spec:

  
http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-drivers.html

Tristan has started work on an implementation supporting multiple
backend drivers for nodepool.  This will initially include a driver for
static nodes, and later we will use this to support multiple cloud
technologies:

  http://lists.openstack.org/pipermail/openstack-infra/2017-June/005387.html

Tristan has also proposed a proof-of-concept implementation of a
dashboard for Zuul, which has prompted a conversation about web
frameworks:

  http://lists.openstack.org/pipermail/openstack-infra/2017-June/005402.html

We're working to come to consensus on that so that we can ultimately
converge our webhooks, status page, websocket console streaming, and
dashboard onto one framework.

Upcoming tasks and focus:
* Re-enabling disabled tests: We're continuing to make our way through
the list of remaining tests that need enabling. See the list, which
includes an annotation as to complexity for each test, here:
https://etherpad.openstack.org/p/zuulv3skips
* Github parity
* Log streaming
* Standard jobs
* Set up production zuulv3.openstack.org server
* Full task list and plan is in the Zuul v3 storyboard:
https://storyboard.openstack.org/#!/board/41

Recent changes:
* Zuul v3:

Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-22 Thread James E. Blair
Darragh Bailey <daragh.bai...@gmail.com> writes:

> On 22 March 2017 at 15:02, James E. Blair <cor...@inaugust.com> wrote:
>
>> Ian Cordasco <sigmaviru...@gmail.com> writes:
>>
>> >
>> > I suppose Barbican doesn't meet those requirements either, then, yes?
>>
>> Right -- we don't want to require another service or tie Zuul to an
>> authn/authz system for a fundamental feature.  However, I do think we
>> can look at making integration with Barbican and similar systems an
>> option for folks who have such an installation and prefer to use it.
>>
>> -Jim
>>
>
> Sounds like you're going to make this pluggable; is that a hard requirement
> that will be added to the spec, or just a possibility?

More of a possibility at this point.  In general, I'd like to off-load
interaction with other systems to Ansible as much as possible, and then
add minimal backing support in Zuul itself if needed; that way the core
of Zuul doesn't become a choke point.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-22 Thread James E. Blair
Ian Cordasco <sigmaviru...@gmail.com> writes:

> On Tue, Mar 21, 2017 at 6:10 PM, James E. Blair <cor...@inaugust.com> wrote:
>> We did talk about some other options, though unfortunately it doesn't
>> look like a lot of that made it into the spec reviews.  Among them, it's
>> probably worth noting that there's nothing preventing a Zuul deployment
>> from relying on some third-party secret system -- if you can use it with
>> Ansible, you should be able to use it with Zuul.  But we also want Zuul
>> to have these features out of the box, and, wearing our sysadmin hats,
>> we're really keen on having source control and code review for the
>> system secrets for the OpenStack project.
>>
>> Vault alone doesn't meet our requirements here because it relies on
>> symmetric encryption, which means we need users to share a key with
>> Zuul, implying an extra service with out-of-band authn/authz.  However,
>> we *could* use our PKCS#1 style system to share a vault key with Zuul.
>> I don't think that has come up as a suggestion yet, but seems like it
>> would work.
>
> I suppose Barbican doesn't meet those requirements either, then, yes?

Right -- we don't want to require another service or tie Zuul to an
authn/authz system for a fundamental feature.  However, I do think we
can look at making integration with Barbican and similar systems an
option for folks who have such an installation and prefer to use it.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-21 Thread James E. Blair
David Moreau Simard  writes:

> I don't have a horse in this race or a strong opinion on the topic, in
> fact I'm admittedly not very knowledgeable when it comes to low-level
> encryption things.
>
> However, I did have a question, even if just to generate discussion.
> Did we ever consider simply leaving secrets out of Zuul and offloading
> that "burden" to something else ?
>
> For example, end-users could use something like git-crypt [1] to encrypt
> files in their git repos and Zuul could have a means to decrypt them at
> runtime.
> There is also ansible-vault [2] that could perhaps be leveraged.
>
> Just trying to make sure we're not re-inventing any wheels;
> implementing crypto is usually not straightforward.

We did talk about some other options, though unfortunately it doesn't
look like a lot of that made it into the spec reviews.  Among them, it's
probably worth noting that there's nothing preventing a Zuul deployment
from relying on some third-party secret system -- if you can use it with
Ansible, you should be able to use it with Zuul.  But we also want Zuul
to have these features out of the box, and, wearing our sysadmin hats,
we're really keen on having source control and code review for the
system secrets for the OpenStack project.

Vault alone doesn't meet our requirements here because it relies on
symmetric encryption, which means we need users to share a key with
Zuul, implying an extra service with out-of-band authn/authz.  However,
we *could* use our PKCS#1 style system to share a vault key with Zuul.
I don't think that has come up as a suggestion yet, but seems like it
would work.

Git-crypt in GPG mode, at first glance, looks like it could work fairly
well for this.  It encrypts entire files, so we would have to rework how
secrets are stored (we encrypt blobs within plaintext files) and add
another file to the list of zuul config files (e.g., .zuul.yaml.gpg).
But aside from that, I think it could work and may be worth further
exploration.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][security] Encryption in Zuul v3

2017-03-21 Thread James E. Blair
Clint Byrum  writes:

> Excerpts from Matthieu Huin's message of 2017-03-21 18:43:49 +0100:
>> Hello James,
>> 
>> Thanks for opening the discussion on this topic. I'd like to mention that a
>> very common type of secrets that are used in Continuous Deployments
>> scenarios are SSH keys. Correct me if I am wrong, but PKCS#1 wouldn't
>> qualify if standard keys were to be stored.
>
> You could store a key, just not a 4096 bit key.
>
> PKCS#1 has a header/padding of something like 12 bytes, and then you
> need a hash in there, so for SHA1 that's 160 bits or 20 bytes, SHA256
> is 256 bites so 32 bytes. So with a 4096 bit (512 bytes) Zuul key, you
> can encrypt 480 bytes of plaintext, or 468 with sha256. That's enough
> for a 3072 bit (384 bytes) SSH key. An uncommon size, but RSA says'
> they're good past 2030:
>
> https://www.emc.com/emc-plus/rsa-labs/historical/twirl-and-rsa-key-size.htm
>
> It's a little cramped, but hey, this is the age of tiny houses, maybe we
> should make do with what we have.

There is that option, the option of adding another encryption system
capable of storing larger keys, or this third option:

Because we wanted continuous deployment to be a first-class feature in
Zuul v3, we added this section of the spec which specifies that Zuul
should have a number of keys automatically available for use in a CD
system:

  
http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#continuous-deployment

We haven't started implementing that yet, and it probably needs a little
bit of updating before we do, but I think the fundamental idea is still
sound and could be accomplished.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][security] Encryption in Zuul v3

2017-03-21 Thread James E. Blair
Hi,

In working on the implementation of the encrypted secrets feature of
Zuul v3, I have found some things that warrant further discussion.  It's
important to be deliberate about this and I welcome any feedback.

For reference, here is the relevant portion of the Zuul v3 spec:

http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#secrets

And here is an implementation of that:

https://review.openstack.org/#/q/status:open+topic:secrets+project:openstack-infra/zuul

The short version is that we want to allow users to store private keys
in the public git repos which Zuul uses to run jobs.  To do this, we
propose to use asymmetric cryptography (RSA) to encrypt the data.  The
specification suggests implementing PKCS#1-OAEP, a standard for
implementing RSA encryption.

Note that RSA is not able to encrypt a message longer than the key, and
PKCS#1 includes some overhead which eats into that.  If we use 4096 bit
RSA keys in Zuul, we will be able to encrypt 3760 bits (or 470 bytes) of
information.

Further, note that value only holds if we use SHA-1.  It has been
suggested that we may want to consider using SHA-256 with PKCS#1.  If we
do, we will be able to encrypt slightly less data.  However, I'm not
sure that the Python cryptography library allows this (yet?).  Also, see
this answer for why it may not be necessary to use SHA-256 (and also,
why we may want to anyway):

https://security.stackexchange.com/questions/112029/should-sha-1-be-used-with-rsa-oaep

One thing to note is that the OpenSSL CLI utility uses SHA-1.  Right
now, I have a utility script which uses that to encrypt secrets so that
it's easy for anyone to encrypt a secret without installing many
dependencies.  Switching to another hash function would probably mean we
wouldn't be able to use that anymore.  But that's also true for other
systems (see below).

In short, PKCS#1 pros: Simple, nicely packaged asymmetric encryption,
hides plaintext message length (up to its limit).  Cons: limited to 470
bytes (or less).
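
For concreteness, encrypting a small secret with RSA/OAEP and SHA-1 via
the python-cryptography library looks roughly like this (a sketch only;
the key file name and plaintext are placeholders, and a reasonably
recent python-cryptography is assumed):

  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import padding

  with open('zuul-project-key.pub', 'rb') as f:
      public_key = serialization.load_pem_public_key(f.read())

  # With a 4096-bit key and SHA-1, the plaintext is limited to 470 bytes.
  ciphertext = public_key.encrypt(
      b'my-api-key',
      padding.OAEP(
          mgf=padding.MGF1(algorithm=hashes.SHA1()),
          algorithm=hashes.SHA1(),
          label=None))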

Generally, when faced with the prospect of encrypting longer messages,
the advice is to adopt a hybrid encryption scheme (as opposed to, say,
chaining RSA messages together, or increasing the RSA key size) which
uses symmetric encryption with a single-use key for the message and
asymmetric encryption to hide the key.  If we want Zuul to support the
encryption of longer secrets, we may want to adopt the hybrid approach.
A frequent hybrid approach is to encrypt the message with AES, and then
encrypt the AES key with RSA.
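
As a rough illustration of that hybrid approach in Python (using the
Fernet recipe discussed in option 2.1c below for the symmetric layer;
this is a sketch, not a proposal for the actual transport format):

  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding

  def hybrid_encrypt(public_key, plaintext):
      # Encrypt the message with a fresh single-use symmetric key...
      key = Fernet.generate_key()
      token = Fernet(key).encrypt(plaintext)
      # ...then hide that key with RSA/OAEP; the key easily fits within
      # the 470-byte limit discussed above.
      wrapped_key = public_key.encrypt(
          key,
          padding.OAEP(
              mgf=padding.MGF1(algorithm=hashes.SHA1()),
              algorithm=hashes.SHA1(),
              label=None))
      # Transport could be as simple as concatenating the two pieces.
      return wrapped_key + token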

The hiera-eyaml work which originally inspired some of this is based on
PKCS#7 with AES as the cipher -- ultimately a hybrid approach.  An
interesting aspect of that implementation is that the use of PKCS#7 as a
message passing format allows for multiple possible underlying ciphers
since the message is wrapped in ASN.1 and is self-descriptive.  We might
have simply chosen to go with that except that there don't seem to be
many good options for implementing this in Python, largely because of
the nightmare that is ASN.1 parsing.

The system we have devised for including encrypted content in our YAML
files involves a YAML tag which specifies the encryption scheme.  So we
can evolve our use to add or remove systems as needed in the future.

So to break this down into a series of actionable questions:

1) Do we want a system to support encrypting longer secrets?  Our PKCS#1
system supports up to 470 bytes.  That should be sufficient for most
passwords and API keys, but unlikely to be sufficient for some
certificate related systems, etc.

2) If so, what system should we use?

   2.1a) GPG?  This has hybrid encryption and transport combined.
   Implementation is likely to be a bit awkward, probably involving
   popen to external processes.

   2.1b) RSA+AES?  This recommendation from the pycryptodome
   documentation illustrates a typical hybrid approach:
   
https://pycryptodome.readthedocs.io/en/latest/src/examples.html#encrypt-data-with-rsa
   The transport protocol would likely just be the concatenation of
   the RSA and AES encrypted data, as it is in that example.  We can
   port that example to use the python-cryptography primitives, or we
   can switch to pycryptodome and use it exactly.

   2.1c) RSA+Fernet?  We can stay closer to the friendly recipes in
   python-cryptography.  While there is no complete hybrid recipe,
   there is a symmetric recipe for "Fernet" which is essentially a
   recipe for AES encryption and transport.  We could encrypt the
   Fernet key with RSA and concatenate the Fernet token.
   https://github.com/fernet/spec/blob/master/Spec.md

   2.1d) NaCL?  A "sealed box" in libsodium (which underlies PyNaCL)
   would do what we want with a completely different set of
   algorithms.
   https://github.com/pyca/pynacl/issues/189

3) Do we think it is important to hide the length of the secret?  AES
will expose the approximate length of the secret up to the block size
(16 bytes).  This 

Re: [openstack-dev] [Sahara][infra] Jenkins based 3rd party CIs

2017-03-06 Thread James E. Blair
Telles Nobrega  writes:

> Hello,
>
> we from Sahara use the compatibility layer with Zuulv2.5 and we are
> wondering if with the change to Zuulv3 this compatibility layer will still
> be maintained.
> If the layer is removed it will require some changes on our side, and
> we are looking for this information to identify how much work will be
> needed on our CI.

Hi,

If you are referring to the ability to run jobs in Jenkins, no, Zuul
will no longer have direct support for that.

If you are asking about using the zuul-launcher in Zuul v2.5 (which
reads job configurations in jenkins-job-builder and uses Ansible to run
jobs in the same way that Jenkins would), we won't have direct support
for that either (this is part of the reason we are not encouraging
people to use that).

However, in *either* case, we do expect to have some kind of automated
process to convert many jobs written in JJB.  Much of the code that we
wrote for Zuul v2.5 to translate JJB into Ansible should be able to be
reused for that purpose.  The output may not be optimal -- it will
likely be a series of bulky Ansible shell tasks, but we hope it will
accomplish much of the work in an automated fashion so that an operator
will be able to improve the result over time.

You may wish to keep an eye out for periodic updates to our progress
that Robyn Bergeron is planning to send out, such as this one:

  http://lists.openstack.org/pipermail/openstack-dev/2017-March/113148.html

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] where the mail-man lives?

2017-03-03 Thread James E. Blair
"bogda...@mail.ru"  writes:

> The mail-man for the openstack-dev mail list is missing at least 'kolla'
> and 'development' checkboxes. It would be nice to make its filter case
> unsensitive as well, so it would match both 'All' and 'all' tags. How
> could we fix that? Any place to submit a PR?

Unfortunately that has to be changed through the mailman web interface.
I've CC'd the folks listed as contacts for the mailing list.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-03 Thread James E. Blair
"bogda...@mail.ru"  writes:

> That's great news! In-repo configs will speed up development for teams,
> with a security caveat for the infrastructure team to keep in mind. The
> ansible runner CI node, which runs playbooks for defined jobs, should not
> contain sensitive information, like keys and secrets in files or
> exported env vars, unless they are one-time or limited in time. The
> same applies to the nodepool nodes allocated for a particular CI test
> run. Otherwise, a malformed patch could make ansible cat/echo all of
> the secrets to the publicly available build logs.

Indeed that is a risk.  To mitigate that, we are building a restricted
execution environment for Ansible so that jobs defined in-repo will only
be allowed to access a per-job staging area on the runner.  And we also
plan on running that in a chrooted container.

These protections are not complete yet, which is why our test instance
at the moment is very limited in scope.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-08-01 Thread James E. Blair
Mikhail Medvedev  writes:

>
> In theory it is possible to split diff and syntax spatially, so there
> would be no need to mix diff and syntax colors. Mockup
> http://i.imgur.com/gAD9x9v.png

True, though I should have clarified my comments as applying
particularly to the intra-line diff, where not only are changed lines
indicated (by dark red/green) but also changed characters (by bright
red/green).  As someone who could spend an hour staring at a line and
not seeing the addition of a single letter, I find that very useful.  :)

Perhaps in your approach some compromise could be obtained by indicating
a changed line as you suggest, and which characters are changed via an
alteration to either the foreground (perhaps making them bold) or
background color of the characters.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.4.0

2016-08-01 Thread James E. Blair
"Sean M. Collins"  writes:

> For some reason I installed the newer version but still the version
> string reports
>
> Gertty version: 1.1.1.dev24

When I install it from pypi via pip in a new virtualenv, I see:

  Gertty version: 1.4.0

Maybe you have an older copy installed from a git repo as editable or
something?  Perhaps try creating a new virtualenv for it, or
uninstalling it and re-installing?

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-08-01 Thread James E. Blair
Masayuki Igawa <masayuki.ig...@gmail.com> writes:

> Hi!
>
> On Wed, Jul 27, 2016 at 11:50 PM, James E. Blair <cor...@inaugust.com> wrote:
>> MichaƂ Dulko <michal.du...@intel.com> writes:
>>
>>> Just wondering - were there tries to implement syntax highlighting in
>>> diff view? I think that's the only thing that keeps me from switching to
>>> Gertty.
>>
>> I don't know of anyone working on that, but I suspect it could be done
>> using the pygments library.
>
> Oh, it's an interesting feature to me :) I'll try to investigate and
> implement in next couple of days :)

As I think about this, one challenge in particular comes to mind: Gerrit
uses background color (green and pink) to distinguish old and new
text when displaying diffs.  In Gertty, I avoided that and used
foreground colors instead because text with green and red backgrounds is
difficult to read on a terminal.

We essentially have two channels of information that we want to
represent with color -- the diff, and the syntax.  They can sometimes
overlap.

Perhaps we could use a 256 color (or even RGB) terminal for this
feature.  Then we may be able to get just the right shade of background
color for the diff channel, and use the foreground colors for syntax
highlighting.

At any rate, it may be worth trying to solve *this* problem first with a
mockup to see if there is any way of doing this without making our eyes
bleed before working on the code to implement it.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-07-27 Thread James E. Blair
MichaƂ Dulko  writes:

> Just wondering - were there tries to implement syntax highlighting in
> diff view? I think that's the only thing that keeps me from switching to
> Gertty.

I don't know of anyone working on that, but I suspect it could be done
using the pygments library.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Gertty 1.4.0

2016-07-26 Thread James E. Blair
Announcing Gertty 1.4.0
=======================

Gertty is a console-based interface to the Gerrit Code Review system.

Gertty is designed to support a workflow similar to reading network
news or mail.  It syncs information from Gerrit to local storage to
support disconnected operation and easy manipulation of local git
repos.  It is fast and efficient at dealing with large numbers of
changes and projects.

The full README may be found here:

  https://git.openstack.org/cgit/openstack/gertty/tree/README.rst

Changes since 1.3.0:


* Gertty is now available in Gentoo.

* Large changes with many patchsets load faster.

* The change screen now displays changes which may conflict with the
  current change.

* Added support for a command socket so that external applications may
  request Gertty to open a change.  An example of how to configure
  Gerrit URLs to automatically open in Gertty when clicked in the
  unicode-rxvt terminal emulator is provided in the documentation.

* Added an optional vi-style keymap and navigation commands.

* Added "project topics" -- the ability to group projects in the
  project list.

* Added support for the process mark on the project list.

* Email addresses are displayed in the change screen.

* Added support for the "projects:" search term.

* Added support for searching by last-seen.  This can be used to
  create a dashboard of changes that have been recently viewed in
  Gertty.  See the example config files for how to set this up.

* The project and change lists are now searchable with the interactive
  search command.

* The change list now displays more columns if there is room.

* Added a navigation breadcrumb footer.

* Added a "Reply" button to the change screen to facilitate quoted
  replies to messages.

* When re-reviewing a change, the review dialog defaults to previous
  values.

* Added support for batch abandon and restore.

* Unified diff display now groups changed lines better.

* Added lockfile support to prevent multiple copies of Gertty from
  accessing the same database.

* Added support for form-based authentication.

* Added an option to specify the URL for cloning git repos.

* In the default keymap, the sorting commands now take two keystrokes
  (e.g., "Sn" for sort by number) to facilitate more sorting options.

* Multi-keystroke commands now display suggestion completions.

* Dashboards may now specify their default sorting option.

* Sphinx-based documentation now available at
  http://gertty.readthedocs.io/

* Added an option to disable mouse support.

* Several bug fixes related to:
  * Improved handling of abandoned related changes.
  * Fixed "too many SQL variables" error which occurred in large
projects.
  * Corrected ordering of test results.
  * Treat HTTP 503 responses as server-offline so that the action will
be retried later.
  * Handle additional python-requests SSL errors.
  * Better handle missing git commits.
  * Several Unicode fixes.
  * Support more recent versions of GitPython.
  * Better handle more than one change result when searching.
  * Fix a crash on permissions-only changes.
  * Fixes to support some changes in Gerrit 2.8 and 2.9.
  * Python 3 improvements.
  * Correct the display of comments at the start of a file.

Thanks to the following people whose changes are included in this
release:

  Christoph Gysin
  Cody A.W. Somerville
  Craige McWhirter
  David Shrewsbury
  Doug Hellmann
  Doug Wiegley
  Jan KundrĂĄt
  Jay Pipes
  Jim Rollenhagen
  K Jonathan Harker
  Martin André
  Matthew Oliver
  Matthew Thode
  Matthew Treinish
  Matthias Runge

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-05 Thread James E. Blair
Kashyap Chamarthy  writes:

> If it reduces nondeterministic spam for the CI Infra, and makes us
> achieve the task at hand, sure.  [/me need to educate himself a
> bit more on the Zuul pipeline infrastructure.]
>
> Worth filing this (and your 'idle pipeline' thought below) in the Zuul
> tracker here?
>
> https://storyboard.openstack.org/#!/project/679
>
>> In the past we've discussed the option of having an "idle pipeline"
>> which repeatedly runs specified jobs only when there are unused
>> resources available, so that it doesn't significantly cut into our
>> resource pool when we're under high demand but still allows to
>> automatically collect a large amount of statistical data.
>> 
>> Anyway, hopefully James Blair can weigh in on this, since Zuul is
>> basically in a feature freeze for a while to limit the number of
>> significant changes we'll need to forward-port into the v3 branch.
>> We'd want to discuss these new features in the context of Zuul v3
>> instead.

Yes, I think there is more that we can do around having specific jobs
run, and also more types of pipeline managers that understand load
conditions -- or at least more fine-grained priority specification so
they don't have to.  But I also think what Jeremy said is correct --
we're in the middle of a push toward Zuul v3 and need to stay focused on
that.  These are good suggestions with well articulated use-cases, so I
think adding this to the issue tracker for now so that we can address it
later is the thing to do.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Upcoming changes now that Jenkins has retired

2016-06-16 Thread James E. Blair
Now that we have retired Jenkins, we have some upcoming changes:

* Console logs are now available via TCP

  The status page now has "telnet" protocol links to running jobs.  If
  you connect to the host and port specified in that link, you will be
  sent the console log for that job up to that point in time and it
  will continue to stream over that connection in real time.  If your
  browser doesn't understand "telnet://" URLs, just grab the host and
  port and type "telnet  " or better yet, "nc 
  " into your terminal.  You can also grep through in progress
  console logs with "nc   | grep ".

* Console logs will soon be available over the WWW

  Netcatting to Grep is cool, but sometimes if you're already in a
  browser, it may be easier to click on a link and have that just open
  up in your existing browser.  Monty has been working on a websocket
  interface to the console log stream that we hope to have in place
  soon.

* Zuul will stop using the name "Jenkins"

  There is a new user in Gerrit named "Zuul".  Zuul has been
  masquerading as Jenkins for the past few years, but now that we no
  longer run any software named "Jenkins" it is the right time to
  change the name to Zuul.  If you have any programs, scripts,
  dashboards, etc, that look for either the full name "Jenkins" or
  username "jenkins" from Gerrit, you should immediately update them
  to also use the full name "Zuul" or username "zuul" in order to
  prepare for the change.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] There is no Jenkins, only Zuul

2016-06-16 Thread James E. Blair
Since its inception, the OpenStack project has used Jenkins to perform
its testing and artifact building.  When OpenStack was two git repos,
we had one Jenkins master, a few slaves, and we configured all of our
jobs manually in the web interface.  It was easy for a new project
like OpenStack to set up and maintain.  Over the years, we have grown
significantly, with over 1,200 git repos and 8,000 jobs spread across
8 Jenkins masters and 800 dynamic slave nodes.  Long before we got to
this point, we could not manage all of those jobs by hand, so we wrote
Jenkins Job Builder[1], one of our more widely used projects, so that
we could automatically generate those 8,000 jobs from templated YAML.

We also wrote Zuul[2].

Zuul is a system to drive project automation.  It directs our testing,
running tens of thousands of jobs each day, responding to events from
our code review system and stacking potential changes to be tested
together.

We are working on a new version of Zuul (version 3) with some major
changes: we want to make it easier to run jobs in multi-node
environments, to make it easier to manage large numbers of jobs and job
variations, to support in-tree job configuration, and to allow jobs to
be defined using Ansible[3].

With Zuul in charge of deciding which jobs to run, and when and where
to run them, we use very few advanced features of Jenkins at this
point.  While we are still working on Zuul v3, we are at a point where
we can start to use some of the work we have done already to switch to
running our jobs entirely with Zuul.

As of today, we have turned off our last Jenkins master and all of our
automation is being run by Zuul.  It's been a great ride, and
OpenStack wouldn't be where it is today without Jenkins.  Now we're
looking forward to focusing on Zuul v3 and exploring the full
potential of project automation.

[1] http://docs.openstack.org/infra/jenkins-job-builder/
[2] http://docs.openstack.org/infra/zuul/
[3] https://www.ansible.com/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-19 Thread James E. Blair
Sean Dague  writes:

> Yes, that would let you see the results of an individual experimental
> run that is complete before they all return and post to the change. Once
> they are all done, they are listed on the change, so that's good enough.

We'll need a Zuul restart for this, so it may take another day or two,
but here are the changes:

  https://review.openstack.org/307891
  https://review.openstack.org/307892

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-18 Thread James E. Blair
Sean Dague <s...@dague.net> writes:

> On 04/18/2016 11:22 AM, James E. Blair wrote:
>> Sean Dague <s...@dague.net> writes:
>> 
>>> Bummer. This gets used a lot to figure out the state of things given that
>>> zuul links to the console even after the job is complete. Changing that
>>> to the log server link would mitigate the blind spot.
>> 
>> Yeah, we know it's important, which is why we're working on getting it
>> back, but will take a little bit of time.  In the interim, rather than
>> linking to a dead URL, I removed the links from the status page
>> altogether.  However, if it would be better overall to link to the log
>> server (which will result in 404s until the logs are actually uploaded
>> at the end of the job), we could probably do that instead.  I'm sure
>> we'll get questions, but we could probably put a banner at the top of
>> the page and we may get slightly fewer of them.
>
> The links could be added only after the individual test run completes.
> That would mean no 404s, right? But allow link access once there are
> results to be seen.

Yes we could do that -- though for the final job, you may need to watch
closely to grab the link before it disappears.  However, I guess in that
case, you can just grab them from the change, eh?

So -- the best plan is: no links on job names to start, then as each
individual job completes, switch the name to a link to the log URL for
that job.  Yeah?

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-18 Thread James E. Blair
Sean Dague  writes:

> Bummer. This gets used a lot to figure out the state of things given that
> zuul links to the console even after the job is complete. Changing that
> to the log server link would mitigate the blind spot.

Yeah, we know it's important, which is why we're working on getting it
back, but will take a little bit of time.  In the interim, rather than
linking to a dead URL, I removed the links from the status page
altogether.  However, if it would be better overall to link to the log
server (which will result in 404s until the logs are actually uploaded
at the end of the job), we could probably do that instead.  I'm sure
we'll get questions, but we could probably put a banner at the top of
the page and we may get slightly fewer of them.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Midcycle summary part 1/6

2016-02-16 Thread James E. Blair
Jim Rollenhagen  writes:

> As a note, the Asterisk setup that Infra provided was *fantastic*, and
> the virtual-ness of this midcycle is going better than I ever expected.
> Thanks again to the infra team for all that you do for us. <3

That's great to hear!

I'm looking forward to hearing what worked for you and what could be
improved.  I'm becoming a big fan of virtual sprints, but we've only
done a few of them so far.

I like the way you've set up the schedule and discrete sessions.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing a simple new tool: git-restack

2016-02-02 Thread James E. Blair
Paul Michali  writes:

> Sounds interesting... the link
> https://docs.openstack.org/infra/git-restack/ referenced
> as the home page in PyPI is a broken link.

I'm clearly getting ahead of things.  The correct link is:

  http://docs.openstack.org/infra/git-restack/

Thanks,

Jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing a simple new tool: git-restack

2016-02-02 Thread James E. Blair
Hi,

I'm pleased to announce a new and very simple tool to help with managing
large patch series with our Gerrit workflow.

In our workflow we often find it necessary to create a series of
dependent changes in order to make a larger change in manageable chunks,
or because we have a series of related changes.  Because these are part
of larger efforts, it often seems like they are even more likely to have
to go through many revisions before they are finally merged.  Each step
along the way reviewers look at the patches in Gerrit and leave
comments.  As a reviewer, I rely heavily on looking at the difference
between patchsets to see how the series evolves over time.

Occasionally we also find it necessary to re-order the patch series, or
to include or exclude a particular patch from the series.  Of course the
interactive git rebase command makes this easy -- but in order to use
it, you need to supply a base upon which to "rebase".  A simple choice
would be to rebase the series on master, however, that creates
difficulties for reviewers if master has moved on since the series was
begun.  It is very difficult to see any actual intended changes between
different patch sets when they have different bases which include
unrelated changes.

The best thing to do to make it easy for reviewers (and yourself as you
try to follow your own changes) is to keep the same "base" for the
entire patch series even as you "rebase" it.  If you know how long your
patch series is, you can simply run "git rebase -i HEAD~N" where N is
the patch series depth.  But if you're like me and have trouble with
numbers other than 0 and 1, then you'll like this new command.

The git-restack command is very simple -- it looks for the most recent
commit that is both in your current branch history and in the branch it
was based on.  It uses that as the base for an interactive rebase
command.  This means that any time you are editing a patch series, you
can simply run:

  git restack

and you will be placed in an interactive rebase session with all of the
commits in that patch series staged.  Git-restack is somewhat
branch-aware as well -- it will read a .gitreview file to find the
remote branch to compare against.  If your stack was based on a
different branch, simply run:

  git restack <branch>

and it will use that branch for comparison instead.
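
If you are curious how this works, the core of the idea fits in a few
lines of Python.  This is only a sketch of the concept, not git-restack's
actual code, and it assumes the stack is based on origin/master:

  import subprocess

  # Find the most recent commit that is in both the current branch's
  # history and the branch the series was based on, then start an
  # interactive rebase on top of it.
  base = subprocess.check_output(
      ['git', 'merge-base', 'HEAD', 'origin/master']).strip().decode()
  subprocess.check_call(['git', 'rebase', '-i', base])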

Git-restack is on pypi so you can install it with:

  pip install git-restack

The source code is based heavily on git-review and is in Gerrit under
openstack-infra/git-restack.

https://pypi.python.org/pypi/git-restack/1.0.0
https://git.openstack.org/cgit/openstack-infra/git-restack

I hope you find this useful,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.3.0

2015-12-18 Thread James E. Blair
Michał Dulko  writes:

> As this was sent to openstack-dev list, I'll ask a Gertty usage question
> here. Was anyone able to use Gertty successfully behind a proxy? My
> environment doesn't allow any traffic outside the proxy and I haven't
> noticed a config option to set it up.

Gertty uses the python requests library for HTTP transfer which supports
the standard HTTP_PROXY and HTTPS_PROXY environment variables.  See this
documentation:

  http://docs.python-requests.org/en/latest/user/advanced/#proxies

So you might try setting those variables, though I have not tested this.
If it works, we should probably add some documentation suggesting that.
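
For example, something along these lines exercises the same code path
outside of Gertty; the proxy address is just a placeholder for whatever
your environment uses:

  import os
  import requests

  # requests reads these environment variables at request time
  # (trust_env is enabled by default).
  os.environ['HTTP_PROXY'] = 'http://proxy.example.com:3128'
  os.environ['HTTPS_PROXY'] = 'http://proxy.example.com:3128'

  # If this succeeds through the proxy, Gertty should be able to reach
  # Gerrit the same way.
  print(requests.get('https://review.openstack.org/').status_code)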

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Gertty 1.3.0

2015-12-17 Thread James E. Blair
Announcing Gertty 1.3.0
=======================

Gertty is a console-based interface to the Gerrit Code Review system.

Gertty is designed to support a workflow similar to reading network
news or mail.  It syncs information from Gerrit to local storage to
support disconnected operation and easy manipulation of local git
repos.  It is fast and efficient at dealing with large numbers of
changes and projects.

The full README with installation instructions may be found here:

  https://git.openstack.org/cgit/openstack/gertty/tree/README.rst

Changes since 1.2.0:


* Moved the git repo to git.openstack.org/openstack/gertty.

* Updated commit message editing to work with API versions >= 2.11.

* Added interactive search in diff view (C-s by default).

* Added a simple kill-ring (cut/paste buffer) (C-k / C-y by default).

* Added support for multiple keystroke commands.

* Added bulk edit of topics from the change list view.

* Added a refine-search command (M-o by default) which will pre-fill the
  search dialog with the current query.

* Made the permalink selectable.

* Added support for '-' as a negation operator in queries.

* Fixed a bug syncing changes with comments on a file not actually in
  the revision.

* Fixed a collision in the default key binding (r is review, R is
  reverse sort).

* Fixed identification of internal links where Gerrit is hosted on the
  same host as another service.

Thanks to the following people whose changes are included in this
release:

  Alex Schultz
  Clint Adams
  David Stanek
  James Polley
  Jeremy Stanley
  Paul Bourke
  Sean M. Collins
  Sirushti Murugesan
  Wouter van Kesteren

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-17 Thread James E. Blair
Sean Dague  writes:

> Has anyone considered using #openstack-dev, instead of a new meeting
> room? #openstack-dev is mostly a ghost town at this point, and deciding
> that instead it would be the dedicated cross project space, including
> meetings support, might be interesting.

I agree it might be interesting, but why is it better than having a
dedicated cross-project meeting channel?  If your idea is successful,
then periodically the -dev channel will be unavailable for general
(cross-project!) developer communication.  Why not make a cross-project
meeting channel?  It's not hard to make new meeting channels and it's not
hard to join them.  Is the idea to try to re-invigorate the -dev channel
by holding meetings there?

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User Summit

2015-11-17 Thread James E. Blair
Sean Dague  writes:

> Given that it's on track for getting accepted upstream, it's probably
> something we could cherry pick into our deploy, as it won't be a
> divergence issue.

Once it merges upstream, yes.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Infra] HA deployment tests on nodepool

2015-11-16 Thread James E. Blair
Aleksandra Fedorova  writes:

> Hi, everyone,
>
> 
>
> in the Fuel project we run two HA deployment tests for every commit in
> openstack/fuel-library repository. Currently these tests are run by
> Fuel CI - the standalone Third-Party CI system, which can vote on
> openstack/fuel* projects. But we'd like it to be the part of the gate.
> As these tests require several vms to be available for the test run,
> we'd like to use nodepool multinode support for that.
>
> 



Cool!  In the long run I think this is technically possible and I think
we are at a stage where you can start to make forward progress, but the
current state is not plug-and-play because a lot of the work so far has
been focused on devstack.  It will be great to have more people looking
at this and helping us make it generic.



> How this can be addressed in nodepool
> =
>
> The nodepool driver approach
> 
>
> fuel-devops is essentially a wrapper and VM manager, and it was
> originally planned as a tool which can use multiple backends, taking
> libvirt as a default one. There is an still-on-discussion task to
> implement the 'bare-metal driver' for fuel-devops, which would make it
> possible to use VMs from different servers for one particular test
> run.
>
> We can consider implementing nodepool as a driver, so it provides
> VMs, which then are wrapped by fuel-devops and are sent further to
> fuel-qa framework.
>
> Then to run the test we would need a 'manager vm' where fuel-devops
> code is executed, and several empty nodes from nodepool. We'd register
> those empty nodes in fuel-devops database and run the test as usual.

We want our nodes to be supplied by Nodepool and our jobs to be
controlled by Zuul -- we don't want to have a node provisioning system
specific to a single kind of job.  In Zuulv3 [1] we are proposing to make
this more flexible, but still want to keep with that approach.  Your
next suggestion looks more compatible.

[1] http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html

> No fuel-devops approach
> ---
>
> A direct approach would be to use nodepool's service for pre-built images.
>
> Given a Fuel ISO image, we regularly generate one VM with a deployed
> Fuel node (step 1.), and one with the basic node deployed with
> bootstrap image (step 2.). These images are stored in Glance or another
> storage as usual.
>
> Then for each fuel-library test we request 1 Fuel node and several
> basic nodes, and then operate on them directly without wrappers.
>
> For this scenario to work we need to change a lot in fuel-qa code. But
> this approach seems to be aligned with the initiative [8], which is
> currently in development: if we manage to describe devops environments
> in YAML files, we'd probably be able to map these descriptions to the
> multinode configurations, and then support them in nodepool.

If you need this to boot in a top-level VM (rather than a VM inside of a
VM, which is what happens for trove jobs and others like it), then yes,
this might work as a custom image type for nodepool.

We are trying to reduce the number of custom images we have in nodepool,
but perhaps this is a compelling reason to add one.

Note that right now nodepool multi-node support only lets us get groups
of nodes from identical images.  I think that can change with the Zuulv3
work, so it's good to have potential requirements like this as we get
started on that.

On another subject, you might also want to look at the devstack
multinode documentation[2].

It's obviously very devstack specific, but we might want to pull that
out of devstack and put it in nodepool ready scripts or something similar.

[2] 
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/multinode_setup_info.txt

> Side Question
> =
>
> Can we build a package from a change request so that the package is then
> used in the test? Are there any best practices?

You can start a job by building a package and then provide it by
building a local package archive.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Report from Gerrit User Summit

2015-11-09 Thread James E. Blair
Khai Do and I attended the Gerrit User Summit this weekend.  It was a
very busy weekend indeed with quite a lot of activity in all areas
related to Gerrit.  The following is a brief summary of items which
may be of interest to our community.

--==--

David Pursehouse from Sony Mobile spoke about what's new in 2.11 and
coming in 2.12:

* Inline editor.  There will be an inline editor so that edits to
  commits can be made through the web interface.

* Better file sorting for C/C++ (headers listed before source).  We
  may want to look into the mechanism used for this to see if it can
  be applied to other situations, such as listing test files before or
  after implementation.

* Submit changes with ancestors.  When the tip of a branch is
  submitted and all of its ancestors are submittable, they will all be
  submitted together.  If ancestors are not submittable, the change at
  the tip will not be submittable.  This removes the submit queue and
  the "Submitted, merge pending" state from Gerrit and generally makes
  submission an immediate atomic operation.  This is an improvement
  that will make several edge cases we encounter simpler, however,
  because Zuul handles submission and does so using strict ordering,
  this change will mostly be invisible in the OpenStack environment.

* Submit whole topic.  This is an implementation of the much-requested
  feature to merge changes to multiple repositories simultaneously.

  This uses the "topic" field to designate changes that should be
  merged simultaneously.  When this feature is enabled, the only
  "submit" option for a change which shares a topic with other changes
  will be "Submit whole topic".  There is a panel on the change screen
  that indicates which changes will be submitted together with the
  current one.  There was some discussion that this limits options --
  a user may not want to submit all changes on a topic at once, and
  unrelated changes may inadvertently end up sharing a topic,
  especially in busy systems or if a poor topic name is chosen (eg
  "test"), and that it formalizes one particular use of the "topic"
  field which to this point has been more free-form.  The authors are
  interested in getting an early version of this out for feedback, but
  acknowledge they have not considered all use-cases yet and may need
  to revise it.

  In the OpenStack community, we have decided to avoid pursuing this
  kind of feature because the alternative -- strict sequencing of
  co-dependent changes -- provides for better upgrade code and is more
  friendly to continuous deployers.  This feature can be disabled, so
  I anticipate we would do so and we would notice no substantial
  changes because of this.  Of course, if we want to revisit our
  decision, we could do so.

* Option to require all commits pushed be GPG signed.

* Search by author or committer.  Also, search by comment author.

* As noted in another recent thread by Khai, the hashtags support
  (user-defined tags applied to changes) exists but depends on notedb
  which is not ready for use yet (targeted for 3.0 which is probably
  at least 6 months off).

--==--

Shane McIntosh from McGill University presented an overview of his
research into the efficacy of code review.  The data he studied
include several open source projects including OpenStack.  His
papers[1] are online, but some quick highlights from his research:

* Modules with a high percentage of review-focused developers are less
  likely to be defective.
* There is a sweet spot around two reviewers, where more reviewers are
  less likely to find more defects.

And some tidbits from other researchers:

* Older code is less defect prone (Graves, et al, TSE 2000)
* Code with weak ownership is more defect prone (Bird, et al,
  ESEC/FSE 2011)

[1] http://shanemcintosh.org/tags/code-review.html

--==--

There were a litany of presentations about Gerrit installations.  I
believe we may be one of the larger public Gerrit users, but we are
not even remotely near the large end of the scale when private
installations are considered.  Very large installations can be run on
a single large instance.  Many users are able to use a master-slave
configuration to spread load.  Perhaps only Google is running a
multi-master system, though they utilize secret Google-only
technology.  It is possible, likely even with open-source components,
but would require substantial customized code.  It is likely that the
notedb work in Gerrit 3.0 will simplify this.

--==--

I gave a short presentation on Gertty.  The authors of the Gerrit REST
API were happy to see that it could support something like Gertty.

--==--

Johannes Nicolai of CollabNet presented a framework for tuning Gerrit
parameters, and produced a handout[2] as a guideline.

It was noted that the Gerrit documentation recommends disabling
connection pooling with MySQL.  This is apparently because of bad
experiences with the MySQL server dropping idle connections.  Since we
have addressed 

[openstack-dev] Announcing a new library: requestsexceptions

2015-11-04 Thread James E. Blair
Hi,

I'm pleased to announce the availability of a new micro-library named
requestsexceptions.  Now the task of convincing the requests library
not to fill up your filesystem with warnings about SSL requests has
never been easier!

Over in infra-land, we use the requests library a lot, whether it's
gertty talking to Gerrit or shade talking to OpenStack, and we love
using it.  It's a pleasure.  Except for two little things.

Requests is in the middle of a number of unfortunate standoffs.  It is
attempting to push the bar on SSL security by letting us all know when
a request is substandard in some way -- whether that is because a
certificate is missing a subject alternate name field, or the version
of Python in use is missing the latest SSL features.

This is great, but in many cases a user of requests is unable to
address any of the underlying causes of these warnings.  For example,
try as we might, public cloud providers are still using non-SAN
certificates.  And upgrading python on a system (or even the
underlying ssl module) is often out of the question.

Requests has a solution to this -- a simple recipe to disable specific
warnings when users know they are not necessary.

This is when we run into another standoff.

Requests is helpfully packaged in many GNU/Linux distributions.
However, the standard version of requests bundles the urllib3 library.
Some packagers have *unbundled* the urllib3 library from requests and
cause it to use the packaged version of urllib3.  This would be a
simple matter for the packagers and requests authors to argue about
over beer at PyCon, except if you want to disable a specific warning
rather than all warnings you need to import the specific urllib3
exceptions that requests uses.  The import path for those exceptions
will be different depending on whether urllib3 is bundled or not.

This means that in order to find a specific exception in order to
handle a requests warning, code like this must be used:

  try:
      from requests.packages.urllib3.exceptions import InsecurePlatformWarning
  except ImportError:
      try:
          from urllib3.exceptions import InsecurePlatformWarning
      except ImportError:
          InsecurePlatformWarning = None

The requestsexceptions library handles that for you so that you can
simply type:

  from requestsexceptions import InsecurePlatformWarning
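
As a usage sketch -- the warnings call below is plain Python rather than
something requestsexceptions does for you, and it assumes the library
exposes the name as None when the class is unavailable, just like the
recipe above:

  import warnings
  from requestsexceptions import InsecurePlatformWarning

  # Silence only this specific warning; anything else requests emits is
  # left alone.
  if InsecurePlatformWarning is not None:
      warnings.filterwarnings('ignore', category=InsecurePlatformWarning)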
  
We have just released requestsexceptions to pypi at version 1.1.1, and
proposed it to global requirements.  You can find it here:

  https://pypi.python.org/pypi/requestsexceptions
  https://git.openstack.org/cgit/openstack-infra/requestsexceptions

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Do not modify (or read) ERROR_ON_CLONE in devstack gate jobs

2015-09-24 Thread James E. Blair
Hi,

Recently we noted some projects modifying the ERROR_ON_CLONE environment
variable in devstack gate jobs.  It is never acceptable to do that.  It
is also not acceptable to read its value and alter a program's behavior.

Devstack is used by developers and users to set up a simple OpenStack
environment.  It does this by cloning all of the projects' git repos and
installing them.

It is also used by our CI system to test changes.  Because the logic
regarding what state each of the repositories should be in is
complicated, that is offloaded to Zuul and the devstack-gate project.
They ensure that all of the repositories involved in a change are set up
correctly before devstack runs.  However, they need to be identified in
advance, and to ensure that we don't accidentally miss one, the
ERROR_ON_CLONE variable is checked by devstack and if it is asked to
clone a repository because it does not already exist (i.e., because it
was not set up in advance by devstack-gate), it fails with an error
message.

If you encounter this, simply add the missing project to the $PROJECTS
variable in your job definition.  There is no need to detect whether
your program is being tested and alter its behavior (a practice which I
gather may be popular but is falling out of favor).

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-14 Thread James E. Blair
Thierry Carrez  writes:

> Doug Hellmann wrote:
>> [...]
>> 1. Resolve the situation preventing the DefCore committee from
>>including image upload capabilities in the tests used for trademark
>>and interoperability validation.
>> 
>> 2. Follow through on the original commitment of the project to
>>provide an image API by completing the integration work with
>>nova and cinder to ensure V2 API adoption.
>> [...]
>
> Thanks Doug for taking the time to dive into Glance and to write this
> email. I agree with your top two priorities as being a good summary of
> what the "rest of the community" expects the Glance leadership to focus
> on in the very short term.

Agreed and thanks.  I'm also excited by the conversation this has
prompted and am optimistic that we will have agreement at the summit on
a way forward.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime on Friday 2015-09-11 at 23:00 UTC

2015-09-11 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> On Friday, September 11 at 23:00 UTC Gerrit will be unavailable for
> about 30 minutes while we rename some projects.
>
> Existing reviews, project watches, etc, should all be carried
> over.

This has been completed without incident.

>  Currently, we plan on renaming the following projects:
>
>   stackforge/os-ansible-deployment -> openstack/openstack-ansible
>   stackforge/os-ansible-specs -> openstack/openstack-ansible-specs
>
>   stackforge/solum -> openstack/solum
>   stackforge/python-solumclient -> openstack/python-solumclient
>   stackforge/solum-specs -> openstack/solum-specs
>   stackforge/solum-dashboard -> openstack/solum-dashboard
>   stackforge/solum-infra-guestagent -> openstack/solum-infra-guestagent
>
>   stackforge/magnetodb -> openstack/magnetodb
>   stackforge/python-magnetodbclient -> openstack/python-magnetodbclient
>   stackforge/magnetodb-specs -> openstack/magnetodb-specs
>
>   stackforge/kolla -> openstack/kolla
>   stackforge/neutron-powervm -> openstack/networking-powervm

And we also moved these:

stackforge/os-ansible-deployment -> openstack/openstack-ansible
stackforge/os-ansible-deployment-specs -> openstack/openstack-ansible-specs

stackforge/refstack -> openstack/refstack
stackforge/refstack-client -> openstack/refstack-client

Thanks to everyone that pitched in to help the move go smoothly.

As a reminder, we expect this to be the last move of projects from
stackforge into openstack before we retire the stackforge/ namespace as
previously announced [1].

-Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] PTL non-candidacy

2015-09-10 Thread James E. Blair
Hi,

I've been the Infrastructure PTL for some time now and I've been
fortunate to serve during a time when we have not only grown the
OpenStack project to a scale that we only hoped we would attain, but
also we have grown the Infrastructure project itself into truly
uncharted territory.

Serving as a PTL is a very rewarding experience that takes a good deal
of time and attention.  I would like to focus my time and energy on
diving deeper into technical projects, including quite a bit of work
that I would like to accomplish on Zuul, so I do not plan to run for PTL
in the next cycle.

Fortunately there are people in our community that have broad
involvement with all aspects of the Infrastructure project and we have
no shortage of folks who like interacting with and supporting others in
their work.  I wish whoever follows the best of luck while I look
forward to writing some code.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit downtime on Friday 2015-09-11 at 23:00 UTC

2015-08-27 Thread James E. Blair
On Friday, September 11 at 23:00 UTC Gerrit will be unavailable for
about 30 minutes while we rename some projects.

Existing reviews, project watches, etc, should all be carried
over. Currently, we plan on renaming the following projects:

  stackforge/os-ansible-deployment -> openstack/openstack-ansible
  stackforge/os-ansible-specs -> openstack/openstack-ansible-specs

  stackforge/solum -> openstack/solum
  stackforge/python-solumclient -> openstack/python-solumclient
  stackforge/solum-specs -> openstack/solum-specs
  stackforge/solum-dashboard -> openstack/solum-dashboard
  stackforge/solum-infra-guestagent -> openstack/solum-infra-guestagent

  stackforge/magnetodb -> openstack/magnetodb
  stackforge/python-magnetodbclient -> openstack/python-magnetodbclient
  stackforge/magnetodb-specs -> openstack/magnetodb-specs

  stackforge/kolla -> openstack/kolla
  stackforge/neutron-powervm -> openstack/networking-powervm

This list is subject to change.

The projects in this list have recently become official OpenStack
projects and many of them have been waiting patiently for some time to
be moved from stackforge/ to openstack/.  This is likely to be the last
of the so-called big-tent moves as we plan on retiring the stackforge/
namespace and moving most of the remaining projects into openstack/ [1].

If you have any questions about the maintenance, please reply here or
contact us in #openstack-infra on Freenode.

-Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stackforge migration on October 17; action required for stackforge projects

2015-08-27 Thread James E. Blair
Hi,

In a previous message[1] I described a plan for moving projects in the
stackforge/ git namespace into openstack/.

We have scheduled this migration for Saturday October 17, 2015.

If you are responsible for a stackforge project, please visit the
following wiki page as soon as possible and add your project to one of
the two lists there:

  https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement

We would like to have a list of all projects which are still active and
wish to be moved, as well as a list of projects that are no longer
maintained and should be retired.

After that, no further action is required -- the Infrastructure team
will handle the system configuration changes needed to effect the move,
however, you may wish to be available shortly after the move to merge
.gitreview changes and fixes related to any unanticipated problems.

Thanks,

Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Migrating existing projects in the stackforge namespace

2015-08-14 Thread James E. Blair
Hi,

As mentioned previously[1], we are retiring the stackforge/ namespace
for git repositories and creating new projects in openstack/.  This is
largely a cosmetic change and does not change the governance model for
new projects.

As part of this we want to move all of the projects that are currently
in the stackforge/ namespace into openstack/ to make it easier for them
to become official OpenStack projects in the future while reducing the
impact to the community that the current practice of sporadic renaming
causes.

To that end, I propose the following process:

1) We choose a date upon which to perform a mass migration of all
stackforge/ projects into openstack/.

I suggest either October 17 or November 7 (both Saturdays), as those are
least likely to interfere with the release or summit.

2) We create a wiki page for all such projects to either sign up for
that migration or indicate that they are no longer maintained.

3) Any stackforge projects that do not sign up for the migration within
a certain time are placed on the list of projects that are no longer
maintained.

4) We attempt to contact, by way of posts to the openstack-dev mailing
list, announcements at the cross project meeting, and direct emails to
the individuals who initially requested repository creation, people who
might be responsible for projects which have not responded and ensure
that they have a chance to respond.  We will freeze the list of projects
and portions of the project-config repository several days before the
migration, to facilitate creating and reviewing the necessary change.

5) On the migration date, the Infrastructure team will move all of the
projects at once.  We will generate the changes needed to do so
automatically, individual projects will not need to do anything except
approve .gitreview changes and possibly help fix any CI jobs that break
as a result of the moves.

6) For the projects that are no longer maintained, we will merge changes
to them indicating that status and make them read-only.

We will schedule a move in early September for the projects that have
already requested moves as part of becoming official OpenStack projects.
Please don't propose any more changes to move projects before the mass
migration.

While most new projects are being created directly in the openstack/
namespace, we will continue to create additional git repositories
associated with existing projects in the stackforge/ namespace so that
the constituent repositories associated with those projects are not
split across namespaces.  We will happily move those projects along with
the rest as part of the mass migration.

Please reply with any feedback or suggested improvements to this plan.
If we can achieve consensus on the approach, we will make further
announcements as to specifics soon.

Thanks,

Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071816.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stackforge namespace retirement

2015-08-14 Thread James E. Blair
Fox, Kevin M kevin@pnnl.gov writes:

> What is the process for current stackforge projects to move into the
> openstack namespace then? Is it a simple request now, or a more
> complicated process?

Great question.  I have proposed a process for that in a new thread:

http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stackforge namespace retirement

2015-08-11 Thread James E. Blair
Hi,

The TC has recently been considering what the big tent means for
Stackforge.  In short, very little will be changing, except that we will
now start putting projects previously destined for stackforge/ in the
openstack/ git namespace.

The big tent is a recognition that there are a lot of projects in our
community that contribute to the OpenStack mission statement.  After we
simplified the procedure for joining OpenStack as an official project,
we have seen a large number of projects from Stackforge join, and
therefore, we have spent a good deal of time moving the git repositories
for those projects from stackforge/ to openstack/.

While we often say "move", that's really a misnomer.  It's easy to "mv"
one directory to another on a filesystem, but moving a hosted git
repository is not so easy.  For all intents and purposes it is actually a
rename of the project.  Even though the last part of the URL is the same, the
middle part is not, which means that every user or developer must deal
with that in some way.  In some cases, automated redirects may alleviate
the immediate pain, but that doesn't work in all cases, and eventually
documentation, configurations, and scripts must be updated or risk
becoming confusing or out of date.  In short, it's very disruptive for
each project that undertakes this renaming process.  It is also a
significant burden on the Infrastructure team which has to do quite a
bit of work to move each and every repository, and do so during a
maintenance window as many of our tools were not designed to cope with
renaming projects on-line.

After quite a bit of discussion, the TC decided that it didn't want to
change anything about the Stackforge program at all, except to remove
this speed-bump for projects joining OpenStack officially.  Any project
related to OpenStack that wants to be a part of our community and use
our project infrastructure, whether ultimately destined to be an
official OpenStack project or not, is welcome just as before.  The
difference is that now, instead of creating the project as
"stackforge/foo", we will create it as "openstack/foo".  That doesn't
mean the project is an official OpenStack project -- that is decided by
the TC, and the list of official projects is maintained here:

  
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

and published here:

  http://governance.openstack.org/reference/projects/index.html

This may be disconcerting to those of us who are used to reading
openstack/foo and thinking that is the badge of office for an
OpenStack project.  That is a convenient shortcut, but in the long run,
encoding part of the software development life-cycle of a project in
something so unrelated and difficult to change as the name of the git
repository is unnecessary and burdensome for everyone.

With this change, openstack/ will contain the projects that are
produced by the OpenStack community at large.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

