Re: [openstack-dev] [Mistral] I think Mistral needs a K8S action

2018-04-12 Thread 홍선군
Thanks for your reply.

 

I will continue to pay attention.

 

Regards,

Xian Jun Hong

 

From: Renat Akhmerov  
Sent: Friday, April 13, 2018 1:48 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Mistral] I think Mistral needs a K8S action

 

Hi, 

 

I completely agree with you that having such an action would be useful. 
However, I don’t think this kind of action should be provided by Mistral out of 
the box. Actions and triggers are integration pieces for Mistral and are 
natively external to the Mistral code base. In other words, this action can be 
implemented anywhere and plugged into a concrete Mistral installation where 
needed.

 

As a home for this action I’d propose the 'mistral-extra' repo, where we are 
planning to move OpenStack actions and some more.

Also, if you’d like to contribute you’re very welcome.

 


Thanks

Renat Akhmerov
@Nokia


On 13 Apr 2018, 09:18 +0700, 홍선군 wrote:



Hello Mistral team.

I'm doing some work on K8S, but I observed that there is only a Docker
action in Mistral.

I would like to ask the Mistral team why there is no K8S action in Mistral.

I think it would be useful in Mistral.

If you feel it's necessary, could I add a K8S action to Mistral?

 

Regards,

Xian Jun Hong

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Mistral] I think Mistral needs a K8S action

2018-04-12 Thread Renat Akhmerov
Hi,

I completely agree with you that having such an action would be useful. 
However, I don’t think this kind of action should be provided by Mistral out of 
the box. Actions and triggers are integration pieces for Mistral and are 
natively external to the Mistral code base. In other words, this action can be 
implemented anywhere and plugged into a concrete Mistral installation where 
needed.

As a home for this action I’d propose the 'mistral-extra' repo, where we are 
planning to move OpenStack actions and some more.
Also, if you’d like to contribute you’re very welcome.
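Custom actions of this kind are typically small Python classes plugged in via entry points. Below is a rough, self-contained sketch of what such a K8s action could look like; the stand-in `Action` base mimics the `run(context)` interface of `mistral_lib.actions.Action`, the class and field names are illustrative, and the actual Kubernetes call is stubbed out:

```python
class Action(object):
    """Stand-in for mistral_lib.actions.Action, so the sketch runs alone."""
    def run(self, context):
        raise NotImplementedError


class KubernetesCreateDeployment(Action):
    """Hypothetical K8S action: create a Deployment from a manifest."""

    def __init__(self, namespace, manifest):
        self.namespace = namespace
        self.manifest = manifest

    def run(self, context):
        # A real implementation would call the Kubernetes API here
        # (e.g. via the official Python client); stubbed for the sketch.
        return {"namespace": self.namespace,
                "name": self.manifest.get("metadata", {}).get("name")}
```

The plugin package would then register the class through a setuptools entry point in the `mistral.actions` namespace, which is how Mistral discovers external actions at install time.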


Thanks

Renat Akhmerov
@Nokia

On 13 Apr 2018, 09:18 +0700, 홍선군 , wrote:
> Hello Mistral team.
> I'm doing some work on K8S, but I observed that there is only a Docker
> action in Mistral.
> I would like to ask the Mistral team why there is no K8S action in Mistral.
> I think it would be useful in Mistral.
> If you feel it's necessary, could I add a K8S action to Mistral?
>
> Regards,
> Xian Jun Hong
>


[openstack-dev] [sig][upgrades] Upgrade SIG IRC meeting poll

2018-04-12 Thread Luo, Lujin
Hello everyone,

Sorry for keeping you waiting! 

Since we have launched Upgrade SIG [1], we are now happy to invite everyone who 
is interested to take a vote so that we can find a good time for our regular 
IRC meeting.

Please kindly look at the weekdays in the poll only, not the actual dates.

Odd week: https://doodle.com/poll/q8qr9iza9kmwax2z
Even week: https://doodle.com/poll/ude4rmacmbp4k5xg

We expect to alternate meeting times between odd and even weeks to cover 
different time zones. 

We'd love it if people could vote before Apr. 22nd.

Best,
Lujin

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128426.html




[openstack-dev] Initiate the discussion for FPGA reconfigurability

2018-04-12 Thread Li Liu
Hi Team,

While wrapping up the spec for FPGA programmability, I think we are still
missing the reconfigurability part of accelerators.

For instance, in the FPGA case, after the bitstream is loaded, a user might
still need to tune the clock frequency, change VF numbers, do a reset, etc.
These reconfigurations can be arbitrary. Unfortunately, none of the APIs we
have right now can handle them properly.

I suggest having another spec for a couple of new APIs dedicated
to reconfiguring accelerators:

1. A REST API
2. A driver API

I want to gather more ideas from you guys especially from our vendor folks
:)
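To make the driver-API idea concrete, here is a purely hypothetical sketch of what such an interface could look like; every name below is invented for illustration and is not taken from any approved spec:

```python
import abc


class AcceleratorReconfigDriver(abc.ABC):
    """Hypothetical driver-side API for post-programming reconfiguration."""

    @abc.abstractmethod
    def reconfigure(self, device_id, params):
        """Apply arbitrary tuning (clock frequency, VF numbers, ...)."""

    @abc.abstractmethod
    def reset(self, device_id):
        """Reset the accelerator to its post-bitstream default state."""


class FakeFpgaDriver(AcceleratorReconfigDriver):
    """In-memory fake, handy for unit-testing the proposed API shape."""

    def __init__(self):
        self.state = {}

    def reconfigure(self, device_id, params):
        # Merge new settings over whatever was previously applied.
        self.state.setdefault(device_id, {}).update(params)
        return self.state[device_id]

    def reset(self, device_id):
        self.state[device_id] = {}
```

A matching REST API would then map onto these driver calls, with vendor drivers implementing the abstract base.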



-- 
Thank you

Regards

Li Liu


[openstack-dev] [cinder][nova]

2018-04-12 Thread 李俊波
Hello Nova, Cinder developers,

 

I would like to ask you a question concerning a Cinder patch [1].

In this patch, it is mentioned that RBD features are incompatible with
multi-attach, which disabled multi-attach for RBD. I would like to know
which RBD features are incompatible.

In the bug [2], yao ning also raised this question, and in his environment
it appears they did not find any problems when enabling this feature.

So, I would also like to know which features in Ceph make this feature
unsafe?

 

[1] https://review.openstack.org/#/c/283695/

[2] https://bugs.launchpad.net/cinder/+bug/1535815

 

 

Best wishes and Regards

junboli

 



[openstack-dev] [Mistral] I think Mistral needs a K8S action

2018-04-12 Thread 홍선군
Hello Mistral team.

I'm doing some work on K8S, but I observed that there is only a Docker
action in Mistral.

I would like to ask the Mistral team why there is no K8S action in Mistral.

I think it would be useful in Mistral.

If you feel it's necessary, could I add a K8S action to Mistral?

 

Regards,

Xian Jun Hong

 



[openstack-dev] [nova] meeting log from today 2018-04-12 at 21:00 UTC

2018-04-12 Thread melanie witt

Howdy everyone,

The meetbot was restarted in the middle of our meeting, so the log and 
minutes could not be collected (after the restart) and will not be found 
at http://eavesdrop.openstack.org/meetings/nova/2018/.


Here's a link to the #openstack-meeting channel log for the meeting if 
you are looking for the minutes:


http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2018-04-12.log.html#t2018-04-12T21:00:21

Cheers,
-melanie



[openstack-dev] Save $500 on OpenStack Summit Vancouver Hotel + Ticket

2018-04-12 Thread Allison Price
Hi everyone, 

For a limited time, you can now purchase a discounted package including a 
Vancouver Summit ticket and hotel stay at the beautiful Pan Pacific Hotel for 
savings of more than $500 USD! 

This discount runs until April 25 pending availability - book your ticket & 
hotel room now for maximum savings: 

4-night stay at the Pan Pacific Hotel & Weeklong Vancouver Summit Pass: $1,859 
USD—$500 in savings per person
5-night stay at the Pan Pacific Hotel & Weeklong Vancouver Summit Pass: $2,149 
USD—$550 in savings per person

REGISTER HERE 

After you've registered, we will book your hotel room for you and follow up 
with your confirmed hotel information in early May.

Please email sum...@openstack.org  if you have any 
questions. 

Cheers,
Allison

Allison Price
OpenStack Foundation
alli...@openstack.org






Re: [openstack-dev] [Nova][Deployers] Optional, platform-specific dependencies in requirements.txt

2018-04-12 Thread Eric Fried
> Is avoiding three lines of code really worth making future cleanup
> harder? Is a three line change really blocking "an approved blueprint
> with ready code"?

Nope.  What's blocking is deciding that that's the right thing to do.
Which we clearly don't have consensus on, based on what's happening in
this thread.

> global ironic
> if ironic is None:
> ironic = importutils.import_module('ironicclient')

I have a pretty strong dislike for this mechanism.  For one thing, I'm
frustrated when I can't use hotkeys to jump to an ironicclient method
because my IDE doesn't recognize that dynamic import.  I have to go look
up the symbol some other way (and hope I'm getting the right one).  To
me (with my bias as a dev rather than a deployer) that's way worse than
having the 704KB python-ironicclient installed on my machine even though
I've never spawned an ironic VM in my life.

It should also be noted that python-ironicclient is in
test-requirements.txt.

Not that my personal preference ought to dictate or even influence what
we decide to do here.  But dynamic import is not the obviously correct
choice.

-efried

On 04/12/2018 03:28 PM, Michael Still wrote:
> I don't understand why you think the alternative is so hard. Here's how
> ironic does it:
> 
>         global ironic
> 
>         if ironic is None:
> 
>             ironic = importutils.import_module('ironicclient')
> 
> 
> Is avoiding three lines of code really worth making future cleanup
> harder? Is a three line change really blocking "an approved blueprint
> with ready code"?
> 
> Michael
> 
> 
> 
On Thu, Apr 12, 2018 at 10:42 PM, Eric Fried wrote:
> 
> +1
> 
> This sounds reasonable to me.  I'm glad the issue was raised, but IMO it
> shouldn't derail progress on an approved blueprint with ready code.
> 
> Jichen, would you please go ahead and file that blueprint template (no
> need to write a spec yet) and link it in a review comment on the bottom
> zvm patch so we have a paper trail?  I'm thinking something like
> "Consistent platform-specific and optional requirements" -- that leaves
> us open to decide *how* we're going to "handle" them.
> 
> Thanks,
> efried
> 
> On 04/12/2018 04:13 AM, Chen CH Ji wrote:
> > Thanks for Michael for raising this question and detailed information
> > from Clark
> >
> > As indicated in the mail, xen, vmware etc might already have this kind
> > of requirements (and I guess might be more than that) ,
> > can we accept z/VM requirements first by following other existing ones
> > then next I can create a BP later to indicate this kind
> > of change request by referring to Clark's comments and submit patches to
> > handle it ? Thanks
> >
> > Best Regards!
> >
> > Kevin (Chen) Ji 纪 晨
> >
> > Engineer, zVM Development, CSTL
> > Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com 
> 
> > Phone: +86-10-82451493
> > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
> > District, Beijing 100193, PRC
> >
> >
> > From: Matt Riedemann >
> > To: openstack-dev@lists.openstack.org
> 
> > Date: 04/12/2018 08:46 AM
> > Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform
> > specific, dependancies in requirements.txt
> >
> >
> 
> >
> >
> >
> > On 4/11/2018 5:09 PM, Michael Still wrote:
> >>
> >>
> >> https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
> >> dependancy to nova's requirements.txt. When I objected the counter
> >> argument is that we have examples of windows specific dependancies
> >> (os-win) and powervm specific dependancies in that file already.
> >>
> >> I think perhaps all three are a mistake and should be removed.
> >>
> >> My recollection is that for drivers like ironic which may not be
> >> deployed by everyone, we have the dependancy documented, and then
> loaded
>   

Re: [openstack-dev] [Nova][Deployers] Optional, platform-specific dependencies in requirements.txt

2018-04-12 Thread Michael Still
I don't understand why you think the alternative is so hard. Here's how
ironic does it:

global ironic

if ironic is None:

ironic = importutils.import_module('ironicclient')

Is avoiding three lines of code really worth making future cleanup harder?
Is a three line change really blocking "an approved blueprint with ready
code"?

Michael



On Thu, Apr 12, 2018 at 10:42 PM, Eric Fried  wrote:

> +1
>
> This sounds reasonable to me.  I'm glad the issue was raised, but IMO it
> shouldn't derail progress on an approved blueprint with ready code.
>
> Jichen, would you please go ahead and file that blueprint template (no
> need to write a spec yet) and link it in a review comment on the bottom
> zvm patch so we have a paper trail?  I'm thinking something like
> "Consistent platform-specific and optional requirements" -- that leaves
> us open to decide *how* we're going to "handle" them.
>
> Thanks,
> efried
>
> On 04/12/2018 04:13 AM, Chen CH Ji wrote:
> > Thanks for Michael for raising this question and detailed information
> > from Clark
> >
> > As indicated in the mail, xen, vmware etc might already have this kind
> > of requirements (and I guess might be more than that) ,
> > can we accept z/VM requirements first by following other existing ones
> > then next I can create a BP later to indicate this kind
> > of change request by referring to Clark's comments and submit patches to
> > handle it ? Thanks
> >
> > Best Regards!
> >
> > Kevin (Chen) Ji 纪 晨
> >
> > Engineer, zVM Development, CSTL
> > Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> > Phone: +86-10-82451493
> > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
> > District, Beijing 100193, PRC
> >
> >
> > From: Matt Riedemann 
> > To: openstack-dev@lists.openstack.org
> > Date: 04/12/2018 08:46 AM
> > Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform
> > specific, dependancies in requirements.txt
> >
> > 
> >
> >
> >
> > On 4/11/2018 5:09 PM, Michael Still wrote:
> >>
> >>
> > https://review.openstack.org/#/c/523387 proposes
> > adding a z/VM specific
> >> dependancy to nova's requirements.txt. When I objected the counter
> >> argument is that we have examples of windows specific dependancies
> >> (os-win) and powervm specific dependancies in that file already.
> >>
> >> I think perhaps all three are a mistake and should be removed.
> >>
> >> My recollection is that for drivers like ironic which may not be
> >> deployed by everyone, we have the dependancy documented, and then loaded
> >> at runtime by the driver itself instead of adding it to
> >> requirements.txt. This is to stop pip for auto-installing the dependancy
> >> for anyone who wants to run nova. I had assumed this was at the request
> >> of the deployer community.
> >>
> >> So what do we do with z/VM? Do we clean this up? Or do we now allow
> >> dependancies that are only useful to a very small number of deployments
> >> into requirements.txt?
> >
> > As Eric pointed out in the review, this came up when pypowervm was added:
> >
> > https://review.openstack.org/#/c/438119/5/requirements.txt
> >
> > And you're asking the same questions I did in there, which was, should
> > it go into test-requirements.txt like oslo.vmware and
> > python-ironicclient, or should it go under [extras], or go into
> > requirements.txt like os-win (we also have the xenapi library now too).
> >
> > I don't really think all of these optional packages should be in
> > requirements.txt, but we should just be consistent with whatever we do,
> > be that test-requirements.txt or [extras]. I remember caring more about
> > this back in my rpm packaging days when we actually tracked what was in
> > requirements.txt to base what needed to go into the rpm spec, unlike
> > Fedora rpm specs which just zero out requirements.txt and depend on
> > their own knowledge of what needs to be installed (which is sometimes
> > lacking or lagging master).
> >
> > I also seem to remember that [extras] was less than user-friendly for
> > some reason, but maybe that was just because of how our CI jobs are
> > setup? Or I'm just making that up. I know it's pretty simple to install
> > the stuff from extras 

Re: [openstack-dev] [Nova][Deployers] Optional, platform-specific dependencies in requirements.txt

2018-04-12 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2018-04-12 13:54:46 -0500:
> On 04/12/2018 11:27 AM, Clark Boylan wrote:
> > On Wed, Apr 11, 2018, at 5:45 PM, Matt Riedemann wrote:
> >> I also seem to remember that [extras] was less than user-friendly for
> >> some reason, but maybe that was just because of how our CI jobs are
> >> setup? Or I'm just making that up. I know it's pretty simple to install
> >> the stuff from extras for tox runs, it's just an extra set of
> >> dependencies to list in the tox.ini.
> > 
> > One concern I have as a user is that extras are not very discoverable 
> > without reading the source setup.cfg file. This can be addressed by 
> > improving installation docs to explain what the extras options are and why 
> > you might want to use them.
> 
> Yeah - they're kind of an advanced feature that most python people don't 
> seem to know exists at all.
> 
> I'm honestly worried about us expanding our use of them and would prefer 
> we got rid of our usage. I don't think the upcoming Pipfile stuff 
> supports them at all - and I believe that's on purpose.

Pipfile is being created as a replacement for requirements.txt but not
in the way that we use the file. So it is possible to express via a
Pipfile that something needs to install extras (see the example in
https://github.com/pypa/pipfile) but it is not possible to express those
extras there because that's not what that file is meant to be used for
(as I think you've pointed out in the thread about pbr/pipfile
integration).

> 
> > Another idea was to add an 'all' extra that installs all of the more
> > fine-grained extras options. That way a user can just say "give me all
> > the features"; even if I can't use them all, I know the ones I can use
> > will be properly installed.
> > 
> > As for the CI jobs, it's just a matter of listing the extras in the
> > appropriate requirements files or explicitly installing them.
> 
> How about instead of extras we just make some additional packages? Like, 
> for instance make a "nova-zvm-support" repo that contains the extra 
> requirements in it and that we publish to PyPI. Then a user could do 
> "pip install nova nova-zvm-support" instead of "pip install nova[zvm]".

So the driver would still live in the nova tree, but the dependencies
for it would be expressed by a package that is built elsewhere? It
feels like that's likely to require some extra care for ordering
changes when a dependency has to be updated.

> That way we can avoid installing optional things for the common case 
> when they're not going to be used (including in the gate where we have 
> no Z machines) but still provide a mechanism for users to easily install 
> the software they need. It would also let a 3rd-party ci that DOES have 
> some Z to test against to set up a zuul job that puts nova-zvm-support 
> into its required-projects and test the combination of the two.

All of that is technically true. I'm not sure how a separate package is
more discoverable than using extras, though.

Doug



Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

2018-04-12 Thread melanie witt

On Thu, 12 Apr 2018 09:31:45 +1000, Michael Still wrote:
The more I think about it, the more I dislike how the proposed driver 
also "lies" about it using iso9660. That's definitely wrong:


        if CONF.config_drive_format in ['iso9660']:
            # cloud-init only supports iso9660 and vfat, but in the z/VM
            # implementation a disk can't be linked to the VM as iso9660
            # before it boots, so create a tgz file and send it to the
            # deployed VM; during the startup process, the tgz file will be
            # extracted and mounted as iso9660 so cloud-init can consume it
            self._make_tgz(path)
        else:
            raise exception.ConfigDriveUnknownFormat(
                format=CONF.config_drive_format)


I've asked for more information on the review about how this works -- is 
it the z/VM library that extracts the tarball and mounts it as iso9660 
before the guest boots or is it expected that the guest is running some 
special software that will do that before cloud-init runs, or what?
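For context, a `_make_tgz`-style helper is straightforward to sketch with stdlib `tarfile`. This is illustrative only, not the z/VM driver's actual implementation; the function name and layout are assumptions:

```python
import tarfile


def make_tgz(source_dir, tgz_path):
    """Pack a staged config-drive directory into a gzipped tarball.

    Something (the z/VM tooling or the guest itself) must later extract
    this archive and present the contents as iso9660 before cloud-init
    runs -- which is exactly the open question about this approach.
    """
    with tarfile.open(tgz_path, "w:gz") as tar:
        # arcname="." keeps member paths relative to the drive root
        tar.add(source_dir, arcname=".")
```

The sketch makes the concern tangible: the tarball itself is not a config drive cloud-init can read, so the success of the deploy depends entirely on that extract-and-mount step inside the guest or host tooling.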


I also noticed that the z/VM CI has disabled ssh validation of guests by 
setting '[validation]run_validation=False' in tempest.conf [0]. This 
means we're unable to see the running guest successfully consume the 
config drive using this approach. This is the tempest test that verifies 
functionality when run_validation=True [1].


I think we need to understand more about how this config drive approach 
is supposed to work and be able to see running instances successfully 
start up using it in the CI runs.


-melanie

[0] 
http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-16244/logs/tempest_conf
[1] 
https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py







Re: [openstack-dev] [Nova][Deployers] Optional, platform-specific dependencies in requirements.txt

2018-04-12 Thread Monty Taylor

On 04/12/2018 11:27 AM, Clark Boylan wrote:

On Wed, Apr 11, 2018, at 5:45 PM, Matt Riedemann wrote:

I also seem to remember that [extras] was less than user-friendly for
some reason, but maybe that was just because of how our CI jobs are
setup? Or I'm just making that up. I know it's pretty simple to install
the stuff from extras for tox runs, it's just an extra set of
dependencies to list in the tox.ini.


One concern I have as a user is that extras are not very discoverable without 
reading the source setup.cfg file. This can be addressed by improving 
installation docs to explain what the extras options are and why you might want 
to use them.


Yeah - they're kind of an advanced feature that most python people don't 
seem to know exists at all.


I'm honestly worried about us expanding our use of them and would prefer 
we got rid of our usage. I don't think the upcoming Pipfile stuff 
supports them at all - and I believe that's on purpose.



Another idea was to add an 'all' extra that installs all of the more
fine-grained extras options. That way a user can just say "give me all the
features"; even if I can't use them all, I know the ones I can use will be
properly installed.

As for the CI jobs, it's just a matter of listing the extras in the
appropriate requirements files or explicitly installing them.


How about instead of extras we just make some additional packages? Like, 
for instance make a "nova-zvm-support" repo that contains the extra 
requirements in it and that we publish to PyPI. Then a user could do 
"pip install nova nova-zvm-support" instead of "pip install nova[zvm]".


That way we can avoid installing optional things for the common case 
when they're not going to be used (including in the gate where we have 
no Z machines) but still provide a mechanism for users to easily install 
the software they need. It would also let a 3rd-party ci that DOES have 
some Z to test against to set up a zuul job that puts nova-zvm-support 
into its required-projects and test the combination of the two.


We could do a similar thing for the extras in keystoneauth. Make a 
keystoneauth-kerberos and a keystoneauth-saml2 and a keystoneauth-oauth1.


Just a thought...

Monty
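For readers unfamiliar with the feature, setuptools extras are named groups of optional dependencies declared in the package metadata. A minimal illustration follows; the package names are examples chosen for this sketch, not nova's actual metadata:

```python
# In setup.py this dict would be passed as
#   setuptools.setup(..., extras_require=EXTRAS_REQUIRE)
# and a deployer opts in with e.g.:  pip install nova[zvm]
EXTRAS_REQUIRE = {
    "zvm": ["zVMCloudConnector"],   # illustrative package names,
    "powervm": ["pypowervm"],       # not nova's real metadata
    "hyperv": ["os-win"],
}

# An "all" extra, as suggested earlier in the thread, can simply
# union the fine-grained groups so a user need not know each name:
EXTRAS_REQUIRE["all"] = sorted(
    {dep for deps in EXTRAS_REQUIRE.values() for dep in deps})
```

The discoverability complaint is also visible here: nothing about `pip install nova` tells a user these groups exist unless the install docs (or `setup.cfg`) spell them out.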



[openstack-dev] [release] Release countdown for week R-19, April 16-20

2018-04-12 Thread Sean McGinnis
Welcome to our regular release countdown email.

Development Focus
-

Team focus should be on spec approval and implementation for priority features.

The first Rocky milestone is this coming Thursday, the 19th. While there aren't
any OpenStack-wide deadlines for this milestone, individual projects do have
some time critical requirements. Please be aware of any project specific
deadlines that may impact you.


General Information
---

PTLs and release liaisons of cycle-with-milestones projects, no later than
Thursday you will need to prepare a release request for your project(s) using
deliverables/rocky/$project.yaml in openstack/releases. The initial release
number should be $MAJOR.0.0.0b1, where $MAJOR is incremented from the Queens
version. Please ask in the #openstack-release channel if you have any questions
about this.

Reminder to pay attention to the work being done in support of the Rocky cycle
goals [1].

[1] https://governance.openstack.org/tc/goals/rocky/index.html

The TC elections start on April 23rd. The nomination period is open until
the 17th, so if you have any interest, please consider putting your name in for
the election.

There will be a week of "campaigning" after the nomination period is over and
before voting begins. Please participate in any discussions on the
openstack-dev mailing list to give everyone a chance to learn more about the
candidates and their opinions.

You can check out the candidates for this election and get details on the
election page [2].

[2] https://governance.openstack.org/election/

Even if you don't have a strong opinion on candidates or their plans with the
TC, please consider voting for your preferred candidates. We need the
participation of all of the OpenStack community. Your vote helps and does make
a difference.


Upcoming Deadlines & Dates
--

TC Nomination Deadline: April 17
TC Campaigning: April 17-22
TC Election: April 23-30
Rocky-1 milestone: April 19 (R-19 week)
Forum at OpenStack Summit in Vancouver: May 21-24

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG

2018-04-12 Thread Emilien Macchi
On Tue, Mar 20, 2018 at 10:28 AM, Javier Pena  wrote:
> One point we should add here: to test Python 3 we need some base
operating system to work on. For now, our plan is to create a set of
stabilized Fedora 28 repositories and use them only for CI jobs. See [1]
for details on this plan.

Javier, Alfredo, where are we regarding this topic? Have we made some
progress on f28 repos? I'm interested to know about the next steps, I
really want us to make some progress on python3 testing here.

Thanks,
--
Emilien Macchi


Re: [openstack-dev] [Nova][Deployers] Optional, platform-specific dependencies in requirements.txt

2018-04-12 Thread Matt Riedemann

On 4/12/2018 7:42 AM, Eric Fried wrote:

This sounds reasonable to me.  I'm glad the issue was raised, but IMO it
shouldn't derail progress on an approved blueprint with ready code.

Jichen, would you please go ahead and file that blueprint template (no
need to write a spec yet) and link it in a review comment on the bottom
zvm patch so we have a paper trail?  I'm thinking something like
"Consistent platform-specific and optional requirements" -- that leaves
us open to decide *how* we're going to "handle" them.


FWIW I'm also OK with deferring debate on this and not blocking the zvm 
stuff for this specific issue, because we can really go down a rabbit 
hole if we want to be pedantic on this, for example, os-brick is only 
used by a couple of virt drivers, taskflow is only used by powervm, 
castellan is optional since we don't require a real key manager, etc.


--

Thanks,

Matt



[openstack-dev] [all][api] POST /api-sig/news

2018-04-12 Thread Ed Leafe
Greetings OpenStack community,

It was a fairly quick meeting today, as we weren't able to find anything to 
argue about. That doesn't happen too often. :)

We agreed that the revamped HTTP guidelines [8] should be merged, as they were 
strictly formatting changes with no content change. We also merged the change 
to update the errors guidance [9] to use service-type instead of service-name, 
as that had been frozen last week with no negative feedback since then.

We still have not gotten a lot of feedback from the SDK community about topics 
to discuss at the upcoming Vancouver Forum. If you are involved with SDK 
development and have something you'd like to discuss there, please reply to the 
openstack-dev mailing list thread [7] with your thoughts.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* Update the errors guidance to use service-type for code
  https://review.openstack.org/#/c/554921/

* Break up the HTTP guideline into smaller documents
  https://review.openstack.org/#/c/554234/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Add guidance on needing cache-control headers
  https://review.openstack.org/550468

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the 
OpenStack developer mailing list[1] with the tag "[api]" in the subject. In 
your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000353.html
[8] https://review.openstack.org/#/c/554234/
[9] https://review.openstack.org/#/c/554921/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe








Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-12 Thread Clark Boylan
On Wed, Apr 11, 2018, at 5:45 PM, Matt Riedemann wrote:
> I also seem to remember that [extras] was less than user-friendly for 
> some reason, but maybe that was just because of how our CI jobs are 
> setup? Or I'm just making that up. I know it's pretty simple to install 
> the stuff from extras for tox runs, it's just an extra set of 
> dependencies to list in the tox.ini.

One concern I have as a user is that extras are not very discoverable without 
reading the source setup.cfg file. This can be addressed by improving 
installation docs to explain what the extras options are and why you might want 
to use them.
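For what it's worth, declared extras are at least discoverable programmatically from installed package metadata. A small sketch using only the standard library; the "pip" distribution below is just an arbitrary example of something likely to be installed:

```python
from importlib.metadata import metadata, PackageNotFoundError

def extras_of(dist_name):
    """Return the extras an installed distribution advertises."""
    try:
        # Each extra declared in setup.cfg shows up as a
        # "Provides-Extra" field in the installed metadata.
        return metadata(dist_name).get_all("Provides-Extra") or []
    except PackageNotFoundError:
        return []

print(extras_of("pip"))
```

This doesn't replace documenting the extras, but it does mean tooling can list them without reading the source setup.cfg.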

Another idea was to add an 'all' extra that installs all of the more 
fine-grained extras. That way a user can just say "give me all the features": 
even if they can't use them all, they know the ones they can use will be 
properly installed.

As for the CI jobs, it's just a matter of listing the extras in the appropriate 
requirements files or explicitly installing them.
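An 'all' extra like the one described above can be expressed directly in a project's setup.cfg. This is an illustrative fragment only, not nova's actual file; the package names and version pins are examples:

```ini
# setup.cfg (illustrative fragment only -- not taken from nova)
[extras]
zvm =
    zVMCloudConnector>=1.1.0
powervm =
    pypowervm>=1.1.10
all =
    zVMCloudConnector>=1.1.0
    pypowervm>=1.1.10
```

Users would then opt in with e.g. `pip install nova[zvm]`, or `pip install nova[all]` to get everything.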

Clark



Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-12 Thread Emilien Macchi
On Thu, Apr 12, 2018 at 1:16 AM, Bogdan Dobrelya 
wrote:
[...]

>> Deploy own registry as part of UC deployment or use external. For
>> instance, for production use I would like to have a cluster of 3-5
>> registries with HAProxy in front to speed up 1k nodes deployments.
>>
>
> Note that this implies HA undercloud as well. Although, given that HA
> undercloud is goodness indeed, I would *not* invest time into reliable
> container registry deployment architecture for undercloud as we'll have it
> for free, once kubernetes/openshift control plane for openstack becomes
> adopted. There is a very strong notion of build pipelines, reliable
> containers registries et al.


Right. HA undercloud is out of context now.
-- 
Emilien Macchi


Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-12 Thread Emilien Macchi
On Thu, Apr 12, 2018 at 1:08 AM, Sergii Golovatiuk 
wrote:
[...]

> > One of the great outcomes of this blueprint is that in Rocky, the operator
> > won't have to run all the "openstack overcloud container" commands to
> > prepare the container registry and upload the containers. Indeed, it'll be
> > driven by Heat and Mistral mostly.
>
> I am trying to think as an operator, and it's very similar to 'openstack
> container', which is swift. So it might be confusing, I guess.


"openstack overcloud container" was already present in Pike and Queens, for
your information.

[...]

> > - We need a tool to update containers from this registry and *before*
> > deploying them. We already have this tool in place in our CI for the
> > overcloud (see [3] and [4]). Now we need a similar thing for the
> > undercloud.
>
> I would use external registry in this case. Quay.io might be a good
> choice to have rock solid simplicity. It might not be good for CI as
> requires very strong connectivity but it should be sufficient for
> developers.
>

No. We'll use docker-distribution for now, and will consider more support
in the future, but what we want right now is parity.

[...]
-- 
Emilien Macchi


Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-12 Thread Ben Nemec



On 04/12/2018 07:42 AM, Eric Fried wrote:

+1

This sounds reasonable to me.  I'm glad the issue was raised, but IMO it
shouldn't derail progress on an approved blueprint with ready code.


The one thing I will note, because we're dealing with it in 
oslo.messaging right now, is that there's no clear path to removing a 
requirement from the unconditional list and moving it to extras.  There 
isn't really a deprecation method for requirements where we can notify 
users that they'll need to start installing things with [zvm] or 
whatever added as extras.


Our current approach in oslo.messaging is to leave the existing 
requirements and add new ones as extras.  It's not perfect (someone 
using kafka doesn't need the rabbit deps, but they'll still get them), 
but it's a step in the right direction.





[openstack-dev] Fwd: Summary of PyPI overhaul in new LWN article

2018-04-12 Thread Doug Hellmann
I thought some folks from our community would be interested in the
ongoing work on the Python Package Index (PyPI).  The article
mentioned in this post to the distutils mailing list provides a
good history and a description of the new and planned features for
"Warehouse".

Doug

--- Begin forwarded message from Sumana Harihareswara ---
From: Sumana Harihareswara 
To: pypa-dev , DistUtils mailing list 

Date: Wed, 11 Apr 2018 22:30:49 -0400
Subject: [Distutils] Summary of PyPI overhaul in new LWN article

Today, LWN published my new article "A new package index for Python".
https://lwn.net/Articles/751458/ In it, I discuss security, policy, UX
and developer experience changes in the 15+ years since PyPI's founding,
new features (and deprecated old features) in Warehouse, and future
plans. Plus: screenshots!

If you aren't already an LWN subscriber, you can use this subscriber
link for the next week to read the article despite the LWN paywall.
https://lwn.net/SubscriberLink/751458/81b2759e7025d6b9/

This summary should help occasional Python programmers -- and frequent
Pythonists who don't follow packaging/distro discussions closely --
understand why a new application is necessary, what's new, what features
are going away, and what to expect in the near future. I also hope it
catches the attention of downstreams that ought to migrate.

--- End forwarded message ---



[openstack-dev] Forum Submissions Reminder + Vancouver Info

2018-04-12 Thread Jimmy McArthur
Hello!  A quick reminder that the Vancouver Forum Submission deadline is 
this coming Sunday, April 15th.


Submission Process
Please proceed to http://forumtopics.openstack.org/ to submit your topics.

What is the Forum?
If you'd like more details about the Forum, go to 
https://wiki.openstack.org/wiki/Forum


Where do I register for the Summit in Vancouver?
https://www.eventbrite.com/e/openstack-summit-may-2018-vancouver-tickets-40845826968?aff=YVRSummit2018

Now get a hotel room for up to 55% off the standard Vancouver rates
https://www.openstack.org/summit/vancouver-2018/travel/

Thanks and we look forward to seeing you all in Vancouver!

Cheers,
Jimmy


Re: [openstack-dev] [openstack][charms] Openstack + OVN

2018-04-12 Thread Aakash Kt
Hello,

Any update on getting to the development of this charm? I need some
guidance on this.

Thank you,
Aakash

On Tue, Mar 27, 2018 at 10:27 PM, Aakash Kt  wrote:

> Hello,
>
> So, an update on the current status: the charm spec for charm-os-ovn has
> been merged (queens/backlog). I don't know what the process is after this,
> but I had a couple of questions about the development of the charm:
>
> - I was wondering whether I need to use the charms.openstack package? Or
> can I just write using the reactive framework as is?
> - If we do have to use charms.openstack, where can I find good
> documentation of the package? I searched online and could not find much to
> go on with.
> - How much time do you think this will take to develop (not including test
> cases)?
>
> Do guide me on the further steps to bring this charm to completion :-)
>
> Thank you,
> Aakash
>
>
> On Mon, Mar 19, 2018 at 5:37 PM, Aakash Kt  wrote:
>
>> Hi James,
>>
>> Thank you for the previous code review.
>> I have pushed another patch. Also, I do not know how to reply to your
>> review comments on gerrit, so I will reply to them here.
>>
>> About the signed-off message: I did not know that it wasn't a requirement
>> for OpenStack; I assumed it was. I have removed it from the updated patch.
>>
>> Thank you,
>> Aakash
>>
>>
>> On Thu, Mar 15, 2018 at 11:34 AM, Aakash Kt  wrote:
>>
>>> Hi James,
>>>
>>> Just a small reminder that I have pushed a patch for review, according
>>> to changes you suggested :-)
>>>
>>> Thanks,
>>> Aakash
>>>
>>> On Mon, Mar 12, 2018 at 2:38 PM, James Page 
>>> wrote:
>>>
 Hi Aakash

 On Sun, 11 Mar 2018 at 19:01 Aakash Kt  wrote:

> Hi,
>
> I had previously put in a mail about the development for openstack-ovn
> charm. Sorry it took me this long to get back, was involved in other
> projects.
>
> I have submitted a charm spec for the above charm.
> Here is the review link : https://review.openstack.org/#/c/551800/
>
> Please look in to it and we can further discuss how to proceed.
>

 I'll feedback directly on the review.

 Thanks!

 James

 


>>>
>>
>


Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-12 Thread Eric Fried
+1

This sounds reasonable to me.  I'm glad the issue was raised, but IMO it
shouldn't derail progress on an approved blueprint with ready code.

Jichen, would you please go ahead and file that blueprint template (no
need to write a spec yet) and link it in a review comment on the bottom
zvm patch so we have a paper trail?  I'm thinking something like
"Consistent platform-specific and optional requirements" -- that leaves
us open to decide *how* we're going to "handle" them.

Thanks,
efried

On 04/12/2018 04:13 AM, Chen CH Ji wrote:
> Thanks to Michael for raising this question, and to Clark for the detailed
> information.
> 
> As indicated in the mail, xen, vmware etc. might already have this kind
> of requirement (and I guess there might be more than that). Can we accept
> the z/VM requirements first, following the other existing ones? Then I can
> create a BP later to capture this kind of change request, referring to
> Clark's comments, and submit patches to handle it. Thanks.
> 
> Best Regards!
> 
> Kevin (Chen) Ji 纪 晨
> 
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
> District, Beijing 100193, PRC
> 
> 
> From: Matt Riedemann 
> To: openstack-dev@lists.openstack.org
> Date: 04/12/2018 08:46 AM
> Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform
> specific, dependencies in requirements.txt
> 
> 
> 
> 
> 
> On 4/11/2018 5:09 PM, Michael Still wrote:
>>
>>
> https://review.openstack.org/#/c/523387/ proposes
> adding a z/VM specific
>> dependency to nova's requirements.txt. When I objected the counter
>> argument is that we have examples of windows-specific dependencies
>> (os-win) and powervm-specific dependencies in that file already.
>>
>> I think perhaps all three are a mistake and should be removed.
>>
>> My recollection is that for drivers like ironic which may not be
>> deployed by everyone, we have the dependancy documented, and then loaded
>> at runtime by the driver itself instead of adding it to
>> requirements.txt. This is to stop pip for auto-installing the dependancy
>> for anyone who wants to run nova. I had assumed this was at the request
>> of the deployer community.
>>
>> So what do we do with z/VM? Do we clean this up? Or do we now allow
>> dependencies that are only useful to a very small number of deployments
>> into requirements.txt?
> 
> As Eric pointed out in the review, this came up when pypowervm was added:
> 
https://review.openstack.org/#/c/438119/5/requirements.txt
> 
> And you're asking the same questions I did in there, which was, should
> it go into test-requirements.txt like oslo.vmware and
> python-ironicclient, or should it go under [extras], or go into
> requirements.txt like os-win (we also have the xenapi library now too).
> 
> I don't really think all of these optional packages should be in
> requirements.txt, but we should just be consistent with whatever we do,
> be that test-requirements.txt or [extras]. I remember caring more about
> this back in my rpm packaging days when we actually tracked what was in
> requirements.txt to base what needed to go into the rpm spec, unlike
> Fedora rpm specs which just zero out requirements.txt and depend on
> their own knowledge of what needs to be installed (which is sometimes
> lacking or lagging master).
> 
> I also seem to remember that [extras] was less than user-friendly for
> some reason, but maybe that was just because of how our CI jobs are
> setup? Or I'm just making that up. I know it's pretty simple to install
> the stuff from extras for tox runs, it's just an extra set of
> dependencies to list in the tox.ini.
> 
> Having said all this, I don't have the energy to help push for
> consistency myself, but will happily watch you from the sidelines.
> 
> -- 
> 
> Thanks,
> 
> Matt
> 

Re: [openstack-dev] [ironic] Ironic Bug Day on Thursday April 12th @ 1:00 PM - 3:00 PM (UTC)

2018-04-12 Thread Ruby Loo
Hi Mike,

This works for me. We can refine/discuss at the bug squashing event.

Thanks!
--ruby

On Wed, Apr 11, 2018 at 2:53 PM, Michael Turek 
wrote:

> Sorry this is so late but as for the format of the event I think we should
> do something like this:
>
> 1) Go through new bugs
> -This is doable in storyboard. Sort by creation date
> -Should be a nice warm up activity!
> 2) Go through oldest bugs
> -Again, doable in storyboard. Sort by last updated.
> -Older bugs are usually candidates for some clean up. We'll decide if
> bugs are still valid
>  or if we need to reassign/poke owners.
> 3) Open Floor
> -If you have a bug that you'd like to discuss, bring it up here!
> 4) Storyboard discussion
> -One of the reasons we are doing this is to get our feet wet in
> storyboard. Let's spend
>  10 to 20 minutes discussing what we need out of the tool after
> playing with it.
>
> Originally I was hoping that we could sort by task priority, but that
> currently seems to be unavailable, or well hidden, in storyboard. If
> someone knows how to do this, please reply.
>
> Does anyone else have any ideas on how to structure bug day?
>
> Thanks!
> Mike 
>
>
> On 4/11/18 9:47 AM, Michael Turek wrote:
>
>> Hey all,
>>
>> Ironic Bug Day is happening tomorrow, April 12th at 1:00 PM - 3:00 PM
>> (UTC)
>>
>> We will be meeting on Julia's bluejeans line:
>> https://bluejeans.com/5548595878
>>
>> Hope to see everyone there!
>>
>> Thanks,
>> Mike Turek 
>>
>>
>>
>
>
>


[openstack-dev] service/package dependencies

2018-04-12 Thread Sergey Glazyrin
Hello guys.

Is there a way to automatically find out the dependencies (build a tree of
dependencies) of OpenStack services? For example, ceilometer depends on
rabbitmq, etc.

We are developing a troubleshooting system for OpenStack, and we want to let
the user know, when some service/package dependency is broken, that this
service/package is at risk.

We may hardcode such dependencies but I prefer to have some automatic
solution.


-- 
Best, Sergey


Re: [openstack-dev] [tripleo][ui] Dependency management

2018-04-12 Thread Julie Pichon
On 2 March 2018 at 22:52, Alan Pevec  wrote:
> On Mon, Jan 22, 2018 at 9:30 AM, Julie Pichon  wrote:
>> On 19 January 2018 at 18:43, Honza Pokorny  wrote:
>>> We've recently discovered an issue with the way we handle dependencies for
>>> tripleo-ui.  This is an explanation of the problem, and a proposed solution.
>>> I'm looking for feedback.
>>>
>>> Before the upgrade to zuul v3 in TripleO CI, we had two types of jobs for
>>> tripleo-ui:
>>>
>>> * Native npm jobs
>>> * Undercloud integration jobs
>>>
>>> After the upgrade, the integration jobs went away.  Our goal is to add them
>>> back.
>>>
>>> There is a difference in how these two types of jobs handle dependencies.
>>> Native npm jobs use the "npm install" command to acquire packages, and
>>> undercloud jobs use RPMs.  The tripleo-ui project uses a separate RPM for
>>> dependencies called openstack-tripleo-ui-deps.
>>>
>>> Because of the requirement to use a separate RPM for dependencies, there is 
>>> some
>>> extra work needed when a new dependency is introduced, or an existing one is
>>> upgraded.  Once the patch that introduces the dependency is merged, we have 
>>> to
>>> increment the version of the -deps package, and rebuild it.  It then shows 
>>> up in
>>> the yum repos used by the undercloud.
>>>
>>> To make matters worse, we recently upgraded our infrastructure to nodejs 
>>> 8.9.4
>>> and npm 5.6.0 (latest stable).  This makes it so we can't write "purist" 
>>> patches
>>> that simply introduce a new dependency to package.json, and nothing more.  
>>> The
>>> code that uses the new dependency must be included.  I tend to think that 
>>> each
>>> commit should work on its own so this can be seen as a plus.
>>>
>>> This presents a problem: you can't get a patch that introduces a new 
>>> dependency
>>> merged because it's not included in the RPM needed by the undercloud ci job.
>>>
>>> So, here is a proposal on how that might work:
>>>
>>> 1. Submit a patch for review that introduces the dependency, along with code
>>>changes to support it and validate its inclusion
>>> 2. Native npm jobs will pass
>>> 3. Undercloud gate job will fail because the dependency isn't in -deps RPM
>>> 4. We ask RDO to review for licensing
>>> 5. Once reviewed, new -deps package is built
>>> 6. Recheck
>>> 7. All jobs pass
>>
>> Perhaps there should be a step after 3 or 4 to have the patch normally
>> reviewed, and wait for it to have two +2s before building the new
>> package? Otherwise we may end up with wasted work to get a new package
>> ready for dependencies that were eventually dismissed.
>
> Thanks Julie for reminding me of  this thread!
>
> I agree we can only build ui-deps package when the review is about to merge.
> I've proposed https://github.com/rdo-common/openstack-tripleo-ui-deps/pull/19
> which allows us to build the package with the review and patchset
> numbers, before it's merged.
> Please review and we can try it on the next deps update!

Thanks Alan! Let's do that :)

Glad to see the pull request merged. If we're happy with the new
suggested process here, I proposed a patch to update the docs with it
at [1]. Hopefully we can move ahead with this and also merge the patch
to reenable the undercloud job [2] to get back minimal sanity checking
on the UI rpms.

Thanks!

Julie

[1] https://review.openstack.org/#/c/560846/
[2] https://review.openstack.org/#/c/526430/



Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-12 Thread Chen CH Ji
Thanks to Michael for raising this question, and to Clark for the detailed
information.

As indicated in the mail, xen, vmware etc. might already have this kind of
requirement (and I guess there might be more than that). Can we accept the
z/VM requirements first, following the other existing ones? Then I can
create a BP later to capture this kind of change request, referring to
Clark's comments, and submit patches to handle it. Thanks.

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Matt Riedemann 
To: openstack-dev@lists.openstack.org
Date:   04/12/2018 08:46 AM
Subject:Re: [openstack-dev] [Nova][Deployers] Optional, platform
specific, dependencies in requirements.txt



On 4/11/2018 5:09 PM, Michael Still wrote:
>
>
https://review.openstack.org/#/c/523387/ proposes adding a z/VM specific
> dependency to nova's requirements.txt. When I objected the counter
> argument is that we have examples of windows-specific dependencies
> (os-win) and powervm-specific dependencies in that file already.
>
> I think perhaps all three are a mistake and should be removed.
>
> My recollection is that for drivers like ironic which may not be
> deployed by everyone, we have the dependancy documented, and then loaded
> at runtime by the driver itself instead of adding it to
> requirements.txt. This is to stop pip for auto-installing the dependancy
> for anyone who wants to run nova. I had assumed this was at the request
> of the deployer community.
>
> So what do we do with z/VM? Do we clean this up? Or do we now allow
> dependencies that are only useful to a very small number of deployments
> into requirements.txt?

As Eric pointed out in the review, this came up when pypowervm was added:

https://review.openstack.org/#/c/438119/5/requirements.txt


And you're asking the same questions I did in there, which was, should
it go into test-requirements.txt like oslo.vmware and
python-ironicclient, or should it go under [extras], or go into
requirements.txt like os-win (we also have the xenapi library now too).

I don't really think all of these optional packages should be in
requirements.txt, but we should just be consistent with whatever we do,
be that test-requirements.txt or [extras]. I remember caring more about
this back in my rpm packaging days when we actually tracked what was in
requirements.txt to base what needed to go into the rpm spec, unlike
Fedora rpm specs which just zero out requirements.txt and depend on
their own knowledge of what needs to be installed (which is sometimes
lacking or lagging master).

I also seem to remember that [extras] was less than user-friendly for
some reason, but maybe that was just because of how our CI jobs are
setup? Or I'm just making that up. I know it's pretty simple to install
the stuff from extras for tox runs, it's just an extra set of
dependencies to list in the tox.ini.
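Listing extras for tox runs is indeed compact; tox supports an `extras` key per test environment. A hypothetical tox.ini fragment (the extra names are examples, not nova's actual ones):

```ini
# tox.ini (hypothetical fragment; extra names are examples)
[testenv]
extras =
    zvm
    powervm
```

With this in place, `tox -e py27` (or any environment inheriting from `[testenv]`) installs the project with those extras before running the tests.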

Having said all this, I don't have the energy to help push for
consistency myself, but will happily watch you from the sidelines.
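The runtime-loading pattern Michael recalls for optional driver dependencies can be sketched roughly like this. This is a minimal illustration, not nova's actual code; the module and package names passed in would be whatever the driver needs:

```python
import importlib

def load_optional_driver(module_name, install_hint):
    """Import an optional driver library only when the driver is enabled.

    Keeping the import out of module scope (and the package out of
    requirements.txt) means operators who never enable the driver never
    need the library installed.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        raise RuntimeError(
            "Driver library '%s' is not installed; install it with: "
            "pip install %s" % (module_name, install_hint))

# Demonstration with a module that is always present:
json_mod = load_optional_driver("json", "json")
print(json_mod.dumps({"loaded": True}))  # prints {"loaded": true}
```

The trade-off is that a missing library only surfaces at driver start-up rather than at install time, which is exactly why it suits rarely-deployed drivers.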

--

Thanks,

Matt







Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-12 Thread Bogdan Dobrelya

On 4/12/18 12:38 AM, Steve Baker wrote:



On 11/04/18 12:50, Emilien Macchi wrote:

Greetings,

Steve Baker and I had a quick chat today about the work that is being 
done around the containers workflow in the Rocky cycle.


If you're not familiar with the topic, I suggest first reading the 
blueprint to understand the context here:

https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the 
operator won't have to run all the "openstack overcloud container" 
commands to prepare the container registry and upload the containers. 
Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended to 2 use cases that we're going to 
explore and find how we can address them:
1) I'm a developer and want to deploy a containerized undercloud with 
customized containers (more or less related to the all-in-one 
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow) and 
need my patch to be tested when the undercloud is containerized (see 
[2] for an excellent example).


I'm fairly sure the only use cases for this will be developer or CI 
based. I think we need to be strongly encouraging image modifications 
for production deployments to go through some kind of image building 
pipeline. See Next Steps below for the implications of this.



Both cases would require additional things:
- The container registry needs to be deployed *before* actually 
installing the undercloud.
- We need a tool to update containers from this registry and *before* 
deploying them. We already have this tool in place in our CI for the 
overcloud (see [3] and [4]). Now we need a similar thing for the 
undercloud.


One problem I see is that we use roles and environment files to filter 
the images to be pulled/modified/uploaded. Now we would need to assemble 
a list of undercloud *and* overcloud environments, and build some kind 
of aggregate role data for both. This would need to happen before the 
undercloud is even deployed, which is quite a different order from what 
quickstart does currently.


Either that or we do no image filtering and just process every image 
regardless of whether it will be used.
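
To make the filtering step concrete, here is a toy sketch of aggregating
undercloud *and* overcloud role data into one image list. The data shapes
and image names are invented for illustration; this is not the real
tripleo-common implementation.

```python
# Toy sketch of the filtering problem described above: union the
# services enabled by the undercloud and overcloud role data, then
# keep only the images those services need. All names are hypothetical.

# Hypothetical role data: role name -> enabled services.
UNDERCLOUD_ROLES = {"Undercloud": ["nova_api", "mistral_engine"]}
OVERCLOUD_ROLES = {
    "Controller": ["nova_api", "keystone"],
    "Compute": ["nova_compute"],
}

# Hypothetical service -> container image map.
SERVICE_IMAGES = {
    "nova_api": "tripleo/centos-binary-nova-api",
    "nova_compute": "tripleo/centos-binary-nova-compute",
    "keystone": "tripleo/centos-binary-keystone",
    "mistral_engine": "tripleo/centos-binary-mistral-engine",
    "swift_proxy": "tripleo/centos-binary-swift-proxy",  # enabled nowhere
}


def images_to_prepare(*role_maps):
    """Union the services from every role map and resolve their images."""
    services = set()
    for roles in role_maps:
        for svc_list in roles.values():
            services.update(svc_list)
    return sorted(SERVICE_IMAGES[s] for s in services if s in SERVICE_IMAGES)


# With filtering, only 4 of the 5 known images are pulled/uploaded;
# swift_proxy's image drops out because no role enables it.
print(images_to_prepare(UNDERCLOUD_ROLES, OVERCLOUD_ROLES))
```

Doing no filtering at all, as suggested above, amounts to uploading
every value in the image map regardless of role data.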




Next steps:
- Agree that we need to deploy the container-registry before the 
undercloud.
- If agreed, we'll create a new Ansible role called 
ansible-role-container-registry that for now will deploy exactly what 
we have in TripleO, without extra features.

+1
- Drive the playbook runtime from tripleoclient to bootstrap the 
container registry (which of course could be disabled in undercloud.conf).
tripleoclient could switch to using this role instead of puppet-tripleo 
to install the registry; however, since the only use cases we have are 
dev/CI driven, I wonder if quickstart/infrared can just invoke the role 
when required, before tripleoclient is involved.


- Create another Ansible role that would re-use container-check tool 
but the idea is to provide a role to modify containers when needed, 
and we could also control it from tripleoclient. The role would be 
using the ContainerImagePrepare parameter, which Steve is working on 
right now.
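
For context, an environment file driving that parameter could plausibly
look like the fragment below. The schema was still being designed when
this thread was written, so the field names are a sketch rather than the
final interface.

```yaml
# Hypothetical environment file; field names are illustrative.
parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true          # upload to the local undercloud registry
      set:
        namespace: docker.io/tripleomaster
        name_prefix: centos-binary-
        tag: current-tripleo
```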


Since the use cases are all upstream CI/dev I do wonder if we should 
just have a dedicated container-check role inside 
tripleo-quickstart-extras which can continue to use the script[3] or 
whatever. Keeping the logic in quickstart will remove the temptation to 
use it instead of a proper image build pipeline for production deployments.


+1 to put it in quickstart-extras to "hide" it from the production use 
cases.




Alternatively it could still be a standalone role which quickstart 
invokes, just to accommodate development workflows which don't use 
quickstart.



Feedback is welcome, thanks.

[1] All-In-One thread: 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized 
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed: 
https://github.com/imain/container-check
[4] Container-check running in TripleO CI: 
https://review.openstack.org/#/c/558885/ and 
https://review.openstack.org/#/c/529399/

--
Emilien Macchi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack 

Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-12 Thread Bogdan Dobrelya

On 4/12/18 12:38 AM, Steve Baker wrote:



On 11/04/18 12:50, Emilien Macchi wrote:

Greetings,

Steve Baker and I had a quick chat today about the work that is being 
done around the containers workflow in the Rocky cycle.


If you're not familiar with the topic, I suggest first reading the 
blueprint to understand the context here:

https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the 
operator won't have to run all the "openstack overcloud container" 
commands to prepare the container registry and upload the containers. 
Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended to 2 use cases that we're going to 
explore and find how we can address them:
1) I'm a developer and want to deploy a containerized undercloud with 
customized containers (more or less related to the all-in-one 
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow) and 
need my patch to be tested when the undercloud is containerized (see 
[2] for an excellent example).


I'm fairly sure the only use cases for this will be developer or CI 
based. I think we need to be strongly encouraging image modifications 
for production deployments to go through some kind of image building 
pipeline. See Next Steps below for the implications of this.



Both cases would require additional things:
- The container registry needs to be deployed *before* actually 
installing the undercloud.
- We need a tool to update containers from this registry and *before* 
deploying them. We already have this tool in place in our CI for the 
overcloud (see [3] and [4]). Now we need a similar thing for the 
undercloud.


One problem I see is that we use roles and environment files to filter 
the images to be pulled/modified/uploaded. Now we would need to assemble 
a list of undercloud *and* overcloud environments, and build some kind 
of aggregate role data for both. This would need to happen before the 
undercloud is even deployed, which is quite a different order from what 
quickstart does currently.


Either that or we do no image filtering and just process every image 
regardless of whether it will be used.




Next steps:
- Agree that we need to deploy the container-registry before the 
undercloud.
- If agreed, we'll create a new Ansible role called 
ansible-role-container-registry that for now will deploy exactly what 
we have in TripleO, without extra features.

+1
- Drive the playbook runtime from tripleoclient to bootstrap the 
container registry (which of course could be disabled in undercloud.conf).
tripleoclient could switch to using this role instead of puppet-tripleo 
to install the registry; however, since the only use cases we have are 
dev/CI driven, I wonder if quickstart/infrared can just invoke the role 
when required, before tripleoclient is involved.


Please let's do that for tripleoclient and only have quickstart and 
other tools invoke commands. We should stay close to what users 
would do, which is only issuing client commands.




- Create another Ansible role that would re-use container-check tool 
but the idea is to provide a role to modify containers when needed, 
and we could also control it from tripleoclient. The role would be 
using the ContainerImagePrepare parameter, which Steve is working on 
right now.


Since the use cases are all upstream CI/dev I do wonder if we should 
just have a dedicated container-check role inside 
tripleo-quickstart-extras which can continue to use the script[3] or 
whatever. Keeping the logic in quickstart will remove the temptation to 
use it instead of a proper image build pipeline for production deployments.


Alternatively it could still be a standalone role which quickstart 
invokes, just to accommodate development workflows which don't use 
quickstart.



Feedback is welcome, thanks.

[1] All-In-One thread: 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized 
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed: 
https://github.com/imain/container-check
[4] Container-check running in TripleO CI: 
https://review.openstack.org/#/c/558885/ and 
https://review.openstack.org/#/c/529399/

--
Emilien Macchi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-12 Thread Bogdan Dobrelya

On 4/12/18 10:08 AM, Sergii Golovatiuk wrote:

Hi,

Thank you very much for bringing up this topic.

On Wed, Apr 11, 2018 at 2:50 AM, Emilien Macchi  wrote:

Greetings,

Steve Baker and I had a quick chat today about the work that is being done
around the containers workflow in the Rocky cycle.

If you're not familiar with the topic, I suggest first reading the blueprint
to understand the context here:
https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the operator
won't have to run all the "openstack overcloud container" commands to
prepare the container registry and upload the containers. Indeed, it'll be
driven by Heat and Mistral mostly.


I am trying to think as an operator, and it's very similar to 'openstack
container', which is Swift. So it might be confusing, I guess.



But today our discussion extended to 2 use cases that we're going to
explore and find how we can address them:
1) I'm a developer and want to deploy a containerized undercloud with
customized containers (more or less related to the all-in-one discussions on
another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow) and need
my patch to be tested when the undercloud is containerized (see [2] for an
excellent example).


That's very nice initiative.


Both cases would require additional things:
- The container registry needs to be deployed *before* actually installing
the undercloud.
- We need a tool to update containers from this registry and *before*
deploying them. We already have this tool in place in our CI for the
overcloud (see [3] and [4]). Now we need a similar thing for the undercloud.


I would use an external registry in this case. Quay.io might be a good
choice for rock-solid simplicity. It might not be good for CI, as it
requires very strong connectivity, but it should be sufficient for
developers.


Next steps:
- Agree that we need to deploy the container-registry before the undercloud.
- If agreed, we'll create a new Ansible role called
ansible-role-container-registry that for now will deploy exactly what we
have in TripleO, without extra feature.


Deploy our own registry as part of the UC deployment, or use an external
one. For instance, for production use I would like to have a cluster of
3-5 registries with HAProxy in front to speed up 1k-node deployments.


Note that this implies an HA undercloud as well. Although an HA 
undercloud is indeed a good thing, I would *not* invest time into a 
reliable container registry deployment architecture for the undercloud, 
as we'll have it for free once a kubernetes/openshift control plane for 
openstack becomes adopted, where build pipelines and reliable container 
registries are first-class concepts.





- Drive the playbook runtime from tripleoclient to bootstrap the container
registry (which of course could be disabled in undercloud.conf).
- Create another Ansible role that would re-use container-check tool but the
idea is to provide a role to modify containers when needed, and we could
also control it from tripleoclient. The role would be using the
ContainerImagePrepare parameter, which Steve is working on right now.

Feedback is welcome, thanks.

[1] All-In-One thread:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed:
https://github.com/imain/container-check
[4] Container-check running in TripleO CI:
https://review.openstack.org/#/c/558885/ and
https://review.openstack.org/#/c/529399/
--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-12 Thread Bogdan Dobrelya

On 4/12/18 12:38 AM, Steve Baker wrote:



On 11/04/18 12:50, Emilien Macchi wrote:

Greetings,

Steve Baker and I had a quick chat today about the work that is being 
done around the containers workflow in the Rocky cycle.


If you're not familiar with the topic, I suggest first reading the 
blueprint to understand the context here:

https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the 
operator won't have to run all the "openstack overcloud container" 
commands to prepare the container registry and upload the containers. 
Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended to 2 use cases that we're going to 
explore and find how we can address them:
1) I'm a developer and want to deploy a containerized undercloud with 
customized containers (more or less related to the all-in-one 
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow) and 
need my patch to be tested when the undercloud is containerized (see 
[2] for an excellent example).


I'm fairly sure the only use cases for this will be developer or CI 
based. I think we need to be strongly encouraging image modifications 
for production deployments to go through some kind of image building 
pipeline. See Next Steps below for the implications of this.


Yes, this. I would love to see the container-check tool improving the CI
and dev experience, but would not be happy to see it as a blessed part 
of the product architecture. Containers should be immutable, and nothing 
should be mutated at runtime, like updating packages et al.





Both cases would require additional things:
- The container registry needs to be deployed *before* actually 
installing the undercloud.
- We need a tool to update containers from this registry and *before* 
deploying them. We already have this tool in place in our CI for the 
overcloud (see [3] and [4]). Now we need a similar thing for the 
undercloud.


One problem I see is that we use roles and environment files to filter 
the images to be pulled/modified/uploaded. Now we would need to assemble 
a list of undercloud *and* overcloud environments, and build some kind 
of aggregate role data for both. This would need to happen before the 
undercloud is even deployed, which is quite a different order from what 
quickstart does currently.


Either that or we do no image filtering and just process every image 
regardless of whether it will be used.




Next steps:
- Agree that we need to deploy the container-registry before the 
undercloud.
- If agreed, we'll create a new Ansible role called 
ansible-role-container-registry that for now will deploy exactly what 
we have in TripleO, without extra features.

+1
- Drive the playbook runtime from tripleoclient to bootstrap the 
container registry (which of course could be disabled in undercloud.conf).
tripleoclient could switch to using this role instead of puppet-tripleo 
to install the registry; however, since the only use cases we have are 
dev/CI driven, I wonder if quickstart/infrared can just invoke the role 
when required, before tripleoclient is involved.


- Create another Ansible role that would re-use container-check tool 
but the idea is to provide a role to modify containers when needed, 
and we could also control it from tripleoclient. The role would be 
using the ContainerImagePrepare parameter, which Steve is working on 
right now.


Since the use cases are all upstream CI/dev I do wonder if we should 
just have a dedicated container-check role inside 
tripleo-quickstart-extras which can continue to use the script[3] or 
whatever. Keeping the logic in quickstart will remove the temptation to 
use it instead of a proper image build pipeline for production deployments.


Alternatively it could still be a standalone role which quickstart 
invokes, just to accommodate development workflows which don't use 
quickstart.



Feedback is welcome, thanks.

[1] All-In-One thread: 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized 
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed: 
https://github.com/imain/container-check
[4] Container-check running in TripleO CI: 
https://review.openstack.org/#/c/558885/ and 
https://review.openstack.org/#/c/529399/

--
Emilien Macchi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-12 Thread Sergii Golovatiuk
Hi,

Thank you very much for bringing up this topic.

On Wed, Apr 11, 2018 at 2:50 AM, Emilien Macchi  wrote:
> Greetings,
>
> Steve Baker and I had a quick chat today about the work that is being done
> around the containers workflow in the Rocky cycle.
>
> If you're not familiar with the topic, I suggest first reading the blueprint
> to understand the context here:
> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>
> One of the great outcomes of this blueprint is that in Rocky, the operator
> won't have to run all the "openstack overcloud container" commands to
> prepare the container registry and upload the containers. Indeed, it'll be
> driven by Heat and Mistral mostly.

I am trying to think as an operator, and it's very similar to 'openstack
container', which is Swift. So it might be confusing, I guess.

>
> But today our discussion extended to 2 use cases that we're going to
> explore and find how we can address them:
> 1) I'm a developer and want to deploy a containerized undercloud with
> customized containers (more or less related to the all-in-one discussions on
> another thread [1]).
> 2) I'm submitting a patch in tripleo-common (let's say a workflow) and need
> my patch to be tested when the undercloud is containerized (see [2] for an
> excellent example).

That's very nice initiative.

> Both cases would require additional things:
> - The container registry needs to be deployed *before* actually installing
> the undercloud.
> - We need a tool to update containers from this registry and *before*
> deploying them. We already have this tool in place in our CI for the
> overcloud (see [3] and [4]). Now we need a similar thing for the undercloud.

I would use an external registry in this case. Quay.io might be a good
choice for rock-solid simplicity. It might not be good for CI, as it
requires very strong connectivity, but it should be sufficient for
developers.

> Next steps:
> - Agree that we need to deploy the container-registry before the undercloud.
> - If agreed, we'll create a new Ansible role called
> ansible-role-container-registry that for now will deploy exactly what we
> have in TripleO, without extra features.

Deploy our own registry as part of the UC deployment, or use an external
one. For instance, for production use I would like to have a cluster of
3-5 registries with HAProxy in front to speed up 1k-node deployments.
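
As a sketch of that topology, the haproxy.cfg fragment below balances
across three registry instances. Everything here is hypothetical: the
addresses are placeholders, and 8787 is assumed only because it is the
port the TripleO undercloud registry conventionally listens on.

```
# Hypothetical haproxy.cfg fragment: one frontend on the registry port,
# round-robin across three registry instances, with a registry API
# health check (GET /v2/ is the Docker Registry v2 ping endpoint).
frontend docker_registry
    bind *:8787
    mode http
    default_backend registries

backend registries
    mode http
    balance roundrobin
    option httpchk GET /v2/
    server registry1 192.168.24.10:8787 check
    server registry2 192.168.24.11:8787 check
    server registry3 192.168.24.12:8787 check
```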

> - Drive the playbook runtime from tripleoclient to bootstrap the container
> registry (which of course could be disabled in undercloud.conf).
> - Create another Ansible role that would re-use container-check tool but the
> idea is to provide a role to modify containers when needed, and we could
> also control it from tripleoclient. The role would be using the
> ContainerImagePrepare parameter, which Steve is working on right now.
>
> Feedback is welcome, thanks.
>
> [1] All-In-One thread:
> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
> [2] Bug report when undercloud is containerized
> https://bugs.launchpad.net/tripleo/+bug/1762422
> [3] Tool to update containers if needed:
> https://github.com/imain/container-check
> [4] Container-check running in TripleO CI:
> https://review.openstack.org/#/c/558885/ and
> https://review.openstack.org/#/c/529399/
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards,
Sergii Golovatiuk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev