Re: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs

2018-06-14 Thread Ghanshyam



  On Fri, 15 Jun 2018 06:17:34 +0900 Doug Hellmann wrote:
 > Excerpts from Doug Hellmann's message of 2018-06-14 13:02:31 -0400:
 > > Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900:
 > 
 > > >  > > > Could it be as simple as adding tempest-full-py3 with the
 > > >  > > > required-projects list updated to include the current repository?
 > > >  > > > So there isn't a special separate job, and we would just reuse
 > > >  > > > tempest-full-py3 for this?
 > > > 
 > > > This can work if lib-forward-testing is only going to run against the
 > > > current lib repo, not cross-lib or cross-project. For example, if
 > > > neutron wants to test a neutron change against neutron-lib source, this
 > > > will not work. But from the history [1], that does not seem to be the
 > > > scope of lib-forward-testing.
 > > > 
 > > > We also do not need to add the current repo to the required-projects
 > > > list or to LIBS_FROM_GIT; it is always checked out as master + the
 > > > current patch changes. So this requires no change to the
 > > > tempest-full-py3 job, and we can use tempest-full-py3 directly in
 > > > lib-forward-testing. Testing in [2].
 > > 
 > > Does it? So if I add tempest-full-py3 to a *library*, is that library
 > > installed from source in the job? I know the source for the library
 > > will be checked out, but I'm surprised that devstack would be configured
 > > to use it. How does that work?
 > 
 > Based on my testing, that doesn't seem to be the case. I added it to
 > oslo.config and, looking at the logs [1], I do not see LIBS_FROM_GIT set
 > to include oslo.config, and the check function returns false, so it is
 > not installed from source [2].

Yes, it will not be set in LIBS_FROM_GIT because we did not set it explicitly. 
But the gate running on any repo does run its jobs against the current change 
set of that repo, which is nothing but "master + current patch changes". For 
example, any job running on an oslo.config patch will take the oslo.config 
source code from that patch, which is "master + current change". You can see 
the results in this patch - https://review.openstack.org/#/c/575324/ - where I 
deleted a module and the gate jobs (including tempest-full-py3) fail because 
they run against the current change set of the neutron-lib code, not the PyPI 
version (which would pass the tests).

In that case, a lib's proposed change will be tested by the integration test 
job to check for any regression. If we need to run cross-lib/project testing 
of any lib then, yes, we need the 'tempest-full-py3-src' job, but that is a 
separate thing, as you mentioned.
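
For reference, a minimal sketch of what that job and its project-template 
could look like (assuming Zuul v3 syntax and Doug's description of setting 
LIBS_FROM_GIT via zuul.project.name in devstack_localrc; names and structure 
are illustrative only, not a final definition):

# Illustrative sketch only: a job derived from tempest-full-py3 that
# forces the repository under test into LIBS_FROM_GIT.
- job:
    name: tempest-full-py3-src
    parent: tempest-full-py3
    vars:
      devstack_localrc:
        # zuul.project.name expands to the repo the job is attached to.
        LIBS_FROM_GIT: '{{ zuul.project.name }}'

# A matching project-template, so a library only adds one line to its
# .zuul.yaml instead of repeating the job in every pipeline.
- project-template:
    name: lib-forward-testing-py3
    check:
      jobs:
        - tempest-full-py3-src
    gate:
      jobs:
        - tempest-full-py3-src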

-gmann

 > 
 > So, I think we need the tempest-full-py3-src job. I will propose an
 > update to the tempest repo to add that.
 > 
 > Doug
 > 
 > [1] 
 > http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz
 > [2] 
 > http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz#_2018-06-14_19_40_56_223136
 > 





Re: [openstack-dev] [tc][ptl] team, SIG, and working group liaisons

2018-06-14 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-06-14 16:28:14 -0500:
> > 
> > After giving everyone a week to volunteer as liaisons for project teams,
> > I have filled out the roster so that every team has 2 TC members
> > assigned. I used random.shuffle() and then went down the list and tried
> > to avoid assigning the same person twice while ensuring that everyone
> > had 10. Please check my results. :-)
> > 
> > We already have some reports from a few teams on the status page,
> > https://wiki.openstack.org/wiki/OpenStack_health_tracker#Status_updates
> > 
> > It would be good if we could complete a first pass for all teams between
> > now and the PTG and post the results to that wiki page.
> > 
> > Doug
> > 
> 
> What is the expectation with these reports, Doug? We talked a little about
> being a TC contact point for these teams, or at least reaching out and
> checking in on the teams periodically, but this is the first I was aware of
> needing to write some sort of report about it.
> 
> I can certainly collect some notes, but since I was not aware of this part of
> it, I'm sure there are probably others who were not as well.
> 
> Sean
> 

Sorry, that was a poor choice of words on my part. I don't expect
a long, detailed write-up. Just leave your notes on the wiki page,
like the other folks have been doing as they have started their
reviews.

Doug



Re: [openstack-dev] [tc][ptl] team, SIG, and working group liaisons

2018-06-14 Thread Sean McGinnis
> 
> After giving everyone a week to volunteer as liaisons for project teams,
> I have filled out the roster so that every team has 2 TC members
> assigned. I used random.shuffle() and then went down the list and tried
> to avoid assigning the same person twice while ensuring that everyone
> had 10. Please check my results. :-)
> 
> We already have some reports from a few teams on the status page,
> https://wiki.openstack.org/wiki/OpenStack_health_tracker#Status_updates
> 
> It would be good if we could complete a first pass for all teams between
> now and the PTG and post the results to that wiki page.
> 
> Doug
> 

What is the expectation with these reports, Doug? We talked a little about being
a TC contact point for these teams, or at least reaching out and checking in on
the teams periodically, but this is the first I was aware of needing to write
some sort of report about it.

I can certainly collect some notes, but since I was not aware of this part of
it, I'm sure there are probably others who were not as well.

Sean



Re: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs

2018-06-14 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-06-14 13:02:31 -0400:
> Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900:

> >  > > > Could it be as simple as adding tempest-full-py3 with the
> >  > > > required-projects list updated to include the current repository? So
> >  > > > there isn't a special separate job, and we would just reuse
> >  > > > tempest-full-py3 for this?
> > 
> > This can work if lib-forward-testing is only going to run against the 
> > current lib repo, not cross-lib or cross-project. For example, if neutron 
> > wants to test a neutron change against neutron-lib source, this will not 
> > work. But from the history [1], that does not seem to be the scope of 
> > lib-forward-testing.
> > 
> > We also do not need to add the current repo to the required-projects list 
> > or to LIBS_FROM_GIT; it is always checked out as master + the current 
> > patch changes. So this requires no change to the tempest-full-py3 job, and 
> > we can use tempest-full-py3 directly in lib-forward-testing. Testing in [2].
> 
> Does it? So if I add tempest-full-py3 to a *library*, is that library
> installed from source in the job? I know the source for the library
> will be checked out, but I'm surprised that devstack would be configured
> to use it. How does that work?

Based on my testing, that doesn't seem to be the case. I added it to
oslo.config and, looking at the logs [1], I do not see LIBS_FROM_GIT set
to include oslo.config, and the check function returns false, so it is
not installed from source [2].

So, I think we need the tempest-full-py3-src job. I will propose an
update to the tempest repo to add that.

Doug

[1] 
http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz
[2] 
http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz#_2018-06-14_19_40_56_223136



Re: [openstack-dev] [barbican] NEW weekly meeting time

2018-06-14 Thread Douglas Mendizabal
+1

The new time slot would definitely make it much easier for me to attend
than the current one.

- Douglas Mendizábal

On Thu, 2018-06-14 at 16:30 -0400, Ade Lee wrote:
> The current time slot has been pretty difficult for folks to attend.
> I'd like to propose a new time slot, which will hopefully be more
> amenable to everyone.
> 
> Tuesday 12:00 UTC
> 
> https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=00&sec=0
> 
> This works out to 8 am EST, around 1pm in Europe, and 8 pm in China.
> Please vote by responding to this email.
> 
> Thanks,
> Ade
> 



[openstack-dev] [barbican] NEW weekly meeting time

2018-06-14 Thread Ade Lee
The current time slot has been pretty difficult for folks to attend.
I'd like to propose a new time slot, which will hopefully be more
amenable to everyone.

Tuesday 12:00 UTC

https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=00&sec=0

This works out to 8 am EST, around 1pm in Europe, and 8 pm in China.
Please vote by responding to this email.

Thanks,
Ade



[openstack-dev] [all][requirements][docs]

2018-06-14 Thread Matthew Thode
Sphinx is being updated from 1.6.7 to 1.7.5.  You may need to update
your docs/templates to work with it.
-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [tc][ptl] team, SIG, and working group liaisons

2018-06-14 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-06-07 14:28:02 -0400:
> As we discussed in today's office hours, I have set up some space in the
> wiki for us to track which TC members are volunteering to act as liaison
> to the teams and other groups within the community to ensure they have
> the assistance and support they need from the TC.
> 
> https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Liaisons
> 
> For the first round, please sign up for groups you are interested in
> helping. We will work out some sort of assignment system for the rest so
> we have good coverage.
> 
> The list is quite long, so I don't expect everyone to be checking in
> with the groups weekly. But we do need to get a handle on where things
> stand now, and work out a way to keep up to date over time. My hope is
> that by dividing the work up, we won't *all* have to be tracking all of
> the groups and we won't let anyone slip through the cracks.
> 
> Doug

After giving everyone a week to volunteer as liaisons for project teams,
I have filled out the roster so that every team has 2 TC members
assigned. I used random.shuffle() and then went down the list and tried
to avoid assigning the same person twice while ensuring that everyone
had 10. Please check my results. :-)

We already have some reports from a few teams on the status page,
https://wiki.openstack.org/wiki/OpenStack_health_tracker#Status_updates

It would be good if we could complete a first pass for all teams between
now and the PTG and post the results to that wiki page.

Doug



Re: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs

2018-06-14 Thread Doug Hellmann
Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900:
> 
> 
> 
>   On Thu, 14 Jun 2018 05:55:55 +0900 Doug Hellmann wrote:
>  > Excerpts from Doug Hellmann's message of 2018-06-13 12:19:18 -0400:
>  > > Excerpts from Doug Hellmann's message of 2018-06-13 10:31:00 -0400:
>  > > > Excerpts from Ghanshyam's message of 2018-06-13 16:52:33 +0900:
 >  > > > >   On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote:
 >  > > > >  > I would like to create a version of the jobs that run as part of
 >  > > > >  > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works
 >  > > > >  > under python 3. I'm not sure the best way to proceed, since
 >  > > > >  > that's a legacy job.
 >  > > > >  > 
 >  > > > >  > I'm not sure I'm familiar enough with the job to port it to be
 >  > > > >  > zuulv3 native and allow us to drop the "legacy". Should I just
 >  > > > >  > duplicate that job and modify it and keep the new one as "legacy"
 >  > > > >  > too?
 >  > > > >  > 
 >  > > > >  > Is there a different job I should base the work on? I don't see
 >  > > > >  > anything obvious in the tempest repo's .zuul.yaml file.
 >  > > > > 
 >  > > > > I had a quick glance at this job (legacy-tempest-dsvm-neutron-src),
 >  > > > > and it is similar to the tempest-full-py3 job except that it
 >  > > > > overrides LIBS_FROM_GIT with the corresponding lib. The
 >  > > > > tempest-full-py3 job is py3-based, runs the tempest-full tests, and
 >  > > > > disables the swift services.
 >  > > > > 
 >  > > > > You can create a new job (something like tempest-full-py3-src)
 >  > > > > derived from 'tempest-full-py3' if all its variables are OK for you
 >  > > > > (like disabling swift), OR derive from 'devstack-tempest' and then
 >  > > > > build the other variables similar to 'tempest-full-py3'. The extra
 >  > > > > thing you need to do is add the libs you want to override to the
 >  > > > > 'required-projects' list (FYI: LIBS_FROM_GIT is now automatically
 >  > > > > set based on required projects [2]).
 >  > > > > 
 >  > > > > Later, the old job (legacy-tempest-dsvm-neutron-src) can be
 >  > > > > migrated separately if it still needs to run, or removed.
 >  > > > > 
 >  > > > > But I am not sure which repo should own this new job.
>  > > > 
>  > > > Could it be as simple as adding tempest-full-py3 with the
>  > > > required-projects list updated to include the current repository? So
>  > > > there isn't a special separate job, and we would just reuse
>  > > > tempest-full-py3 for this?
> 
> This can work if lib-forward-testing is only going to run against the current 
> lib repo, not cross-lib or cross-project. For example, if neutron wants to 
> test a neutron change against neutron-lib source, this will not work. But 
> from the history [1], that does not seem to be the scope of 
> lib-forward-testing.
> 
> We also do not need to add the current repo to the required-projects list or 
> to LIBS_FROM_GIT; it is always checked out as master + the current patch 
> changes. So this requires no change to the tempest-full-py3 job, and we can 
> use tempest-full-py3 directly in lib-forward-testing. Testing in [2].

Does it? So if I add tempest-full-py3 to a *library*, is that library
installed from source in the job? I know the source for the library
will be checked out, but I'm surprised that devstack would be configured
to use it. How does that work?

> 
> And if anyone needs cross-lib/project testing (like I mentioned above), it 
> will be very easy: define a new job derived from tempest-full-py3 and add 
> the required lib to the required-projects list. 

Sure. Someone could do that, but it's not the problem I'm trying
to solve right now.

> 
>  > > > 
>  > > > It would be less "automatic" than the current project-template and job,
>  > > > but still relatively simple to set up. Am I missing something? This
>  > > > feels too easy...
>  > > 
>  > > I think I could define a job with a name like tempest-full-py3-src based
>  > > on tempest-full-py3 and set LIBS_FROM_GIT to include
>  > > {{zuul.project.name}} in the devstack_localrc vars section. If I
>  > > understand correctly, that would automatically set LIBS_FROM_GIT to
>  > > refer to the project that the job is attached to, which would make it
>  > > easier to use from a project-template (I would also create a
>  > > lib-forward-testing-py3 project template to supplement
>  > > lib-forward-testing).
>  > > 
>  > > Does that sound right?
>  > > 
>  > > Doug
>  > 
>  > This appears to be working.
>  > 
>  > https://review.openstack.org/575164 adds a job to oslo.config and the
>  > log shows LIBS_FROM_GIT set to oslo.config's repository:
>  > 
>  > 
> http://logs.openstack.org/64/575164/1/check/tempest-full-py3-src/7a193fa/job-output.txt.gz#_2018-06-13_19_01_22_742338
>  > 
>  > How does the QA team feel about hosting the job definition in the
>  > tempest repository with the tempest-full-py3 job? If you think that will
>  > work, I can propose the patch tomorrow.
>  > 
> 
> [1] https://review.openstack.org/#/c/125433
> [2] https://review.openstack.org/#/c/575324
> 
> -gmann
> 
>  > Doug
>  > 
>  > 

Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-14 Thread Sean McGinnis
On Thu, Jun 14, 2018 at 08:10:56AM -0300, Erlon Cruz wrote:
> Hi Thomas,
> 
> The reserved_percentage *is* taken into account for non-thin-provisioning
> backends, so you can use it to set aside the space you need for backups. It
> is a per-backend configuration.
> 
> If you have already tried to use it and it is not working, please let us
> know what release you are using, because although this is the current
> (and proper) behavior, it might not have been like this in the past.
> 
> Erlon
> 

Guess I didn't read far enough ahead. Thanks Erlon!




[openstack-dev] [all][api] POST /api-sig/news

2018-06-14 Thread Ed Leafe
Greetings OpenStack community,

A small, intimate meeting today, as only cdent and edleafe were present. We 
discussed the work being done [7] to migrate our bug/issue tracking from 
Launchpad to StoryBoard [8]. The change to the infra docs will trigger the 
setup of StoryBoard when it merges. Once StoryBoard is up and running for the 
API-SIG, we will notify the GraphQL team, so that they can track their stories 
and tasks there.

There was also more progress on updating the name of this group. When we 
switched from the API-WG to the API-SIG a few months ago, there were several 
places where we could make the change without much fuss. But some other places, 
such as Gerrit, required intervention from the infra team. We thought it would 
be too much bother, so we didn't spend time on it. But it turns out that it's 
not that difficult for infra to do during Gerrit downtimes, so that change [9] 
was also submitted.

There being no recent changes to pending guidelines nor to bugs, we ended the 
meeting early.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the 
OpenStack developer mailing list[1] with the tag "[api]" in the subject. In 
your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://review.openstack.org/575120
[8] https://storyboard.openstack.org/#!/page/about
[9] https://review.openstack.org/575478

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe








Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-14 Thread Sean McGinnis
On Thu, Jun 14, 2018 at 11:13:22AM +0200, Thomas Goirand wrote:
> Hi,
> 
> When using cinder-backup, it first makes a snapshot, then sends the
> backup wherever it's configured. The issue is, to perform a backup, one
> needs to make a snapshot of a volume, meaning that one needs the size of
> the volume as empty space to be able to make the snapshot.
> 
> So, let's say I have a cinder volume of 1 TB; this means I need 1 TB of
> empty space on the volume node so I can do a backup of that volume.
> 
> My question is: is there a way to tell cinder to reserve an amount of
> space for this kind of operation? The only thing I saw was
> reserved_percentage, but this looks like for thin provisioning only. If
> this doesn't exist, would such new option be accepted by the Cinder
> community, as a per volume node option? Or should we do it as a global
> setting?
> 

I don't believe we have this as a setting anywhere today.

It would be best as a per-backend (or backend_defaults) setting, as some
backends can create volumes from snapshots without consuming any extra space,
while others, like LVM as you point out, need to allocate a considerable
amount of space.

Maybe someone else can chime in if they are aware of another way this is
already being handled, but I have not had to deal with it, so I'm not aware of
anything.




Re: [openstack-dev] [nova] review runways check-in and feedback

2018-06-14 Thread Dan Smith
> While I have tried to review a few of the runway-slotted efforts, I
> have gotten burned out on a number of them. Other runway-slotted
> efforts, I simply don't care enough about or once I've seen some of
> the code, simply can't bring myself to review it (sorry, just being
> honest).

I have the same feeling, although I have reviewed a lot of things I
wouldn't have otherwise as a result of them being in the runway. I spent
a bunch of time early on with the image signing stuff, which I think was
worthwhile, although at this point I'm a bit worn out on it. That's not
the fault of runways though.

> Is your concern that placement stuff is getting unfair attention since
> many of the patch series aren't in the runways? Or is your concern
> that you'd like to see *more* core reviews on placement stuff outside
> of the usual placement-y core reviewers (you, me, Alex, Eric, Gibi and
> Dan)?

I think placement has been getting a bit of a free ride, with constant
review and insulation from the runway process. However, I don't think
that we can stop progress on that effort while we circle around, and the
subteam/group of people that focus on placement already has a lot of
supporting cores. So, it's cheating a little bit, but we always
said that we're not going to tell cores *not* to review something unless
it is in a runway, and pragmatically I think it's probably the right thing
to do for placement.

>> Having said that, it's clear from the list of things in the runways
>> etherpad that there are some lower priority efforts that have been
>> completed probably because they leveraged runways (there are a few
>> xenapi blueprints for example, and the powervm driver changes).
>
> Wasn't that kind of the point of the runways, though? To enable "lower
> priority" efforts to have a chance at getting reviews? Or are you just
> stating here the apparent success of that effort?

It was, and I think it has worked well for that for several things. The
image signing stuff got more review in its first runway slot than it has
in years I think.

Overall, I don't think we're worse off with runways than we were before
it. I think that some things that will get attention regardless are
still progressing. I think that some things that are far off on the
fringe are still getting ignored. I think that for the huge bulk of
things in the middle of those two, runways has helped focus review on
specific efforts and thus increased the throughput there. For a first
attempt, I'd call that a success.

I think maybe a little more monitoring of the review rate of things in
the runways and some gentle prodding of people to look at ones that are
burning time and not seeing much review would maybe improve things a
bit.

--Dan



Re: [openstack-dev] [nova] nova-cells-v1 intermittent gate failure

2018-06-14 Thread melanie witt

On Wed, 13 Jun 2018 15:47:33 -0700, Melanie Witt wrote:

Hi everybody,

Just a heads up that we have an intermittent gate failure of the
nova-cells-v1 job happening right now [1] and a revert of the tempest
change related to it has been approved [2] and will be making its way
through the gate. The nova-cells-v1 job will be failing until [2] merges.

-melanie

[1] https://bugs.launchpad.net/nova/+bug/1776684
[2] https://review.openstack.org/575132


The fix [2] has merged, so it is now safe to recheck your changes that 
were caught up in the nova-cells-v1 gate failure.


Thanks,
-melanie







[openstack-dev] [release] Release countdown for week R-10, June 18-22

2018-06-14 Thread Sean McGinnis
Welcome to the weekly countdown email.

Development Focus
-

Teams should be focused on implementing planned work for the cycle.

It is also a good time to review those plans and reprioritize anything if
needed based on what progress has been made and what looks realistic to
complete in the next few weeks.

General Information
---

Looking ahead to Rocky-3, please be aware of the various freeze dates. These
vary by deliverable type, starting with non-client libraries, then client
libraries, and finally services. This is to ensure we have time for
requirements updates and for resolving any issues prior to RC.

Just as a reminder, we have freeze dates ahead of the first RC in order to
stabilize our requirements. Updating global requirements close to overall code
freeze increases the risk of an unforeseen side effect being introduced too
late in the cycle to properly fix. For this reason, we first freeze the
non-client libraries that may be used by service and client libraries, followed
a week later by the client libraries. This minimizes the ripple effects that
have caused projects to scramble to fix last-minute issues.

Please keep these deadlines in mind as you work towards wrapping up feature
work that may require library changes to complete.


Upcoming Deadlines & Dates
--

Final non-client library release deadline: July 19
Final client library release deadline: July 26
Rocky-3 Milestone: July 26

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] [tripleo] tripleo gate is blocked - please read

2018-06-14 Thread Emilien Macchi
It sounds like we merged a bunch last night thanks to the revert, so I went
ahead and restored/rechecked everything that was out of the gate. I've
checked and nothing was left over, but let me know in case I missed
something.
I'll keep updating this thread with the progress made to improve the
situation, etc.
So from now on, the situation is back to "normal"; recheck/+W is OK.

Thanks again for your patience,

On Wed, Jun 13, 2018 at 10:39 PM, Emilien Macchi  wrote:

> https://review.openstack.org/575264 just landed (and didn't time out in
> check or gate without a recheck, so a good sign it helped to mitigate).
>
> I've restored and rechecked some patches that I evacuated from the gate;
> please do not restore others or recheck or approve anything for now, and
> let's see how it goes with a few patches.
> We're still working with Steve on his patches to optimize the way we
> deploy containers on the registry and are investigating how we could make
> it faster with a proxy.
>
> Stay tuned and thanks for your patience.
>
> On Wed, Jun 13, 2018 at 5:50 PM, Emilien Macchi 
> wrote:
>
>> TL;DR: gate queue was 25h+, we put all patches from gate on standby, do
>> not restore/recheck until further announcement.
>>
>> We recently enabled the containerized undercloud for multinode jobs, and
>> we believe this was a bit premature, as the container download process
>> wasn't optimized yet, so it pulls the same containers from the mirrors
>> multiple times.
>> It caused the job runtime to increase, and probably caused the docker.io
>> mirrors hosted by OpenStack Infra to be a bit slower at providing the same
>> containers multiple times. The time taken to prepare containers on the
>> undercloud and then for the overcloud caused the jobs to randomly time out,
>> and therefore the gate to fail a high amount of the time, so we decided to
>> remove all jobs from the gate by abandoning the patches temporarily (I have
>> them in my browser and will restore them when things are stable again;
>> please do not touch anything).
>>
>> Steve Baker has been working on a series of patches that optimize the way
>> we prepare the containers, but basically the workflow will be:
>> - pull containers needed for the undercloud into a local registry, using
>> the infra mirror if available
>> - deploy the containerized undercloud
>> - pull containers needed for the overcloud, minus the ones already pulled
>> for the undercloud, using the infra mirror if available
>> - update containers on the overcloud
>> - deploy the overcloud
>>
>> With that process, we hope to reduce the runtime of the deployment and
>> therefore reduce the timeouts in the gate.
>> To enable it, we need to land, in that order: https://review.openstack.org/#/c/571613/,
>> https://review.openstack.org/#/c/574485/, https://review.openstack.org/#/c/571631/,
>> and https://review.openstack.org/#/c/568403.
>>
>> In the meantime, we are disabling the containerized undercloud recently
>> enabled on all scenarios: https://review.openstack.org/#/c/575264/ for
>> mitigation with the hope to stabilize things until Steve's patches land.
>> Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the
>> containerized undercloud on scenarios after checking that we don't have
>> timeouts and reasonable deployment runtimes.
>>
>> That's the plan we came up with; if you have any questions/feedback, please
>> share them.
>> --
>> Emilien, Steve and Wes
>>
>
>
>
> --
> Emilien Macchi
>



-- 
Emilien Macchi


Re: [openstack-dev] [nova] review runways check-in and feedback

2018-06-14 Thread Jay Pipes

On 06/13/2018 05:33 PM, Matt Riedemann wrote:

On 6/13/2018 3:33 PM, melanie witt wrote:


We've been experimenting with a new process this cycle, Review Runways 
[1] and we're about at the middle of the cycle now as we had the r-2 
milestone last week June 7.


I wanted to start a thread and gather thoughts and feedback from the 
nova community about how they think runways have been working or not 
working and lend any suggestions to change or improve as we continue 
on in the rocky cycle.


We decided to try the runways process to increase the chances of core 
reviewers converging on the same changes and thus increasing reviews 
and merges on approved blueprint work. As of today, we have 69 
blueprints approved and 28 blueprints completed, we just passed r-2 
June 7 and r-3 is July 26 and rc1 is August 9 [2].


Do people feel like they've been receiving more review on their 
blueprints? Does it seem like we're completing more blueprints 
earlier? Is there feedback or suggestions for change that you can share?


Lots of cores are not reviewing stuff in the current runways slots, 
which defeats the purpose of runways for the most part if the majority 
of the core team aren't going to review what's in a slot.


I know I don't review a ton of stuff like you or Eric, but I just can't 
any more. It's too much for me to handle.


While I have tried to review a few of the runway-slotted efforts, I have 
gotten burned out on a number of them. Other runway-slotted efforts, I 
simply don't care enough about or once I've seen some of the code, 
simply can't bring myself to review it (sorry, just being honest).


I like the *concept* of the runways, though. It's good to have a 
focusing agent to direct reviewer attention to things that are "ready" 
for final review. Despite this focusing agent, though, we are still 
realistically limited by the small size of the Nova core team.


I'm not sure there are processes (runways or otherwise) that are going 
to increase the velocity of merging code [1] unless we increase the size 
of the core team.


It's not like we don't look for new core additions and attempt to 
identify folks that would be good cores and try to help them. We *do* do 
this.


The issue is that Nova is big, scary, messy, fragile (in many ways), 
complex and more than any other project (no offense to those other 
projects) has a virtually *endless* stream of feature requests coming 
(mostly from vendors, sorry) looking to plug their latest and greatest 
hardware into the virt world.


Until that endless stream of feature requests subsides, we will continue 
to have these problems. And, for those out there that say "well, Jay, 
then those vendors will just abandon OpenStack and go to more fertile 
feature-accepting grounds like k8s!", I say "hey, go for it."


Not everything is appropriate to jam into Nova (or OpenStack for that 
matter). Let k8s deal with the never-ending feature velocity (NFV) and 
vendor/product-enablement requests. And let them collapse under that weight.




[1] I say "increase the velocity of merging code" but keep in mind that 
Nova *already* merges the most code in all of OpenStack. We merge more 
code in Nova in a week than some service projects merge in three months. 
Our rate of code merging in just Nova often rivals larger-scoped 
monoliths like kubernetes/kubernetes.


Lots of people have ready-for-runways blueprint series that aren't 
queued up in the runways etherpad, and then ask for reviews on those 
series and I have to tell them, "throw it in the runways queue".


I'm not sure if people are thinking subteams need to review series that 
are ready for wider review first, but especially for the placement 
stuff, I think those things need to be slotted up if they are ready.


I can work with Eric to make sure placement patch series (for the 
required ones at least that are holding up other work) are queued up 
properly for runways. That said, I don't feel we are suffering from a 
lack of reviews in placement-land.


Is your concern that placement stuff is getting unfair attention since 
many of the patch series aren't in the runways? Or is your concern that 
you'd like to see *more* core reviews on placement stuff outside of the 
usual placement-y core reviewers (you, me, Alex, Eric, Gibi and Dan)?
Having said that, it's clear from the list of things in the runways 
etherpad that there are some lower priority efforts that have been 
completed probably because they leveraged runways (there are a few 
xenapi blueprints for example, and the powervm driver changes).


Wasn't that kind of the point of the runways, though? To enable "lower 
priority" efforts to have a chance at getting reviews? Or are you just 
stating here the apparent success of that effort?


Best,
-jay


Re: [openstack-dev] [tripleo] tripleo gate is blocked - please read

2018-06-14 Thread Monty Taylor

On 06/13/2018 07:50 PM, Emilien Macchi wrote:
TL;DR: gate queue was 25h+, we put all patches from gate on standby, do 
not restore/recheck until further announcement.


We recently enabled the containerized undercloud for multinode jobs, and
we believe this was a bit premature, as the container download process
wasn't optimized yet, so it pulls the same containers from the mirrors
multiple times.
It caused the job runtime to increase, and probably caused the docker.io
mirrors hosted by OpenStack Infra to be a bit slower at providing the same
containers multiple times. The time taken to prepare containers on the
undercloud and then for the overcloud caused the jobs to randomly time out,
and therefore the gate to fail a high amount of the time, so we decided to
remove all jobs from the gate by abandoning the patches temporarily (I have
them in my browser and will restore them when things are stable again;
please do not touch anything).


Steve Baker has been working on a series of patches that optimize the
way we prepare the containers, but basically the workflow will be:
- pull containers needed for the undercloud into a local registry, using
the infra mirror if available
- deploy the containerized undercloud
- pull containers needed for the overcloud, minus the ones already pulled
for the undercloud, using the infra mirror if available
- update containers on the overcloud
- deploy the overcloud


That sounds like a great improvement. Well done!

With that process, we hope to reduce the runtime of the deployment and 
therefore reduce the timeouts in the gate.
To enable it, we need to land in that order: 
https://review.openstack.org/#/c/571613/, 
https://review.openstack.org/#/c/574485/, 
https://review.openstack.org/#/c/571631/ and 
https://review.openstack.org/#/c/568403.


In the meantime, we are disabling the containerized undercloud recently 
enabled on all scenarios: https://review.openstack.org/#/c/575264/ for 
mitigation with the hope to stabilize things until Steve's patches land.
Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the 
containerized undercloud on scenarios after checking that we don't have 
timeouts and reasonable deployment runtimes.


That's the plan we came up with; if you have any questions/feedback, please
share them.

--
Emilien, Steve and Wes








[openstack-dev] [tripleo][heat][jinja] resources.RedisVirtualIP: Property error: resources.VipPort.properties.network: Error validating value 'internal_api': Unable to find network with name or id 'in

2018-06-14 Thread Mark Hamzy
I am trying to delete the Storage, StorageMgmt, Tenant, and Management 
networks and trying to deploy using TripleO.

The following patch, 
https://hamzy.fedorapeople.org/0001-RedisVipPort-error-internal_api.patch, 
applied on top of /usr/share/openstack-tripleo-heat-templates from 
openstack-tripleo-heat-templates-8.0.2-14.el7ost.noarch, yields the 
following error:

(undercloud) [stack@oscloud5 ~]$ openstack overcloud deploy --templates -e 
~/templates/node-info.yaml -e ~/templates/overcloud_images.yaml -e 
~/templates/environments/network-environment.yaml -e 
~/templates/environments/network-isolation.yaml -e 
~/templates/environments/config-debug.yaml --ntp-server pool.ntp.org 
--control-scale 1 --compute-scale 1 --control-flavor control 
--compute-flavor compute 2>&1 | tee output.overcloud.deploy
...
overcloud.RedisVirtualIP:
  resource_type: OS::TripleO::Network::Ports::RedisVipPort
  physical_resource_id:
  status: CREATE_FAILED
  status_reason: |
resources.RedisVirtualIP: Property error: 
resources.VipPort.properties.network: Error validating value 
'internal_api': Unable to find network with name or id 'internal_api'
...

The following patch seems to fix it:

8<-8<-8<-8<-8<-
diff --git a/environments/network-isolation.j2.yaml b/environments/network-isolation.j2.yaml
index 3d4f59b..07cb748 100644
--- a/environments/network-isolation.j2.yaml
+++ b/environments/network-isolation.j2.yaml
@@ -20,7 +20,13 @@ resource_registry:
   {%- for network in networks if network.vip and network.enabled|default(true) %}
   OS::TripleO::Network::Ports::{{network.name}}VipPort: ../network/ports/{{network.name_lower|default(network.name.lower())}}.yaml
   {%- endfor %}
+{%- for role in roles -%}
+  {%- if internal_api in role.networks|default([]) and internal_api.enabled|default(true) %}
   OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml
+  {%- else %}
+  # Avoid weird jinja2 bugs that don't output a newline...
+  {%- endif %}
+{%- endfor -%}
 
   # Port assignments by role, edit role definition to assign networks to roles.
 {%- for role in roles %}
8<-8<-8<-8<-8<-

Note that I had to add an else clause because jinja2 would not output the 
newline that was outside of the for block.

Am I following the correct path to fix this issue?
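
For comparison, one possible alternative would be to avoid the per-role loop 
and the undefined internal_api symbol entirely, and key off the networks list 
already used earlier in the same template. This is an untested sketch; the 
'InternalApi' name is assumed from the standard network_data naming:

{#- Untested sketch: map the RedisVipPort only when the InternalApi
    network is defined and enabled. -#}
{%- for network in networks if network.name == 'InternalApi' and network.enabled|default(true) %}
  OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml
{%- endfor %}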

-- 
Mark

You must be the change you wish to see in the world. -- Mahatma Gandhi
Never let the future disturb you. You will meet it, if you have to, with 
the same weapons of reason which today arm you against the present. -- 
Marcus Aurelius



Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-14 Thread Erlon Cruz
Hi Thomas,

The reserved_percentage *is* taken into account for non-thin-provisioning
backends, so you can use it to set aside the space you need for backups. It is
a per-backend configuration.
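
For example, in cinder.conf it could look like this (a hypothetical backend
section; the section name, driver, and percentage are illustrative only):

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-1
# Keep 10% of this backend's capacity free, e.g. for backup snapshots.
reserved_percentage = 10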

If you have already tried to use it and it is not working, please let us
know what release you are using, because although this is the current
(and proper) behavior, it might not have been like this in the past.

Erlon

Em qui, 14 de jun de 2018 às 06:13, Thomas Goirand 
escreveu:

> Hi,
>
> When using cinder-backup, it first makes a snapshot, then sends the
> backup wherever it's configured. The issue is, to perform a backup, one
> needs to make a snapshot of a volume, meaning that one needs the size of
> the volume as empty space to be able to make the snapshot.
>
> So, let's say I have a cinder volume of 1 TB; this means I need 1 TB of
> empty space on the volume node so I can do a backup of that volume.
>
> My question is: is there a way to tell cinder to reserve an amount of
> space for this kind of operation? The only thing I saw was
> reserved_percentage, but this looks like for thin provisioning only. If
> this doesn't exist, would such new option be accepted by the Cinder
> community, as a per volume node option? Or should we do it as a global
> setting?
>
> Cheers,
>
> Thomas Goirand (zigo)
>


Re: [openstack-dev] [tripleo] tripleo gate is blocked - please read

2018-06-14 Thread Bogdan Dobrelya

On 6/14/18 3:50 AM, Emilien Macchi wrote:
TL;DR: gate queue was 25h+, we put all patches from gate on standby, do 
not restore/recheck until further announcement.


We recently enabled the containerized undercloud for multinode jobs, and
we believe this was a bit premature, as the container download process
wasn't optimized yet, so it pulls the same containers from the mirrors
multiple times.
It caused the job runtime to increase, and probably caused the docker.io
mirrors hosted by OpenStack Infra to be a bit slower at providing the same
containers multiple times. The time taken to prepare containers on the
undercloud and then for the overcloud caused the jobs to randomly time out,
and therefore the gate to fail a high amount of the time, so we decided to
remove all jobs from the gate by abandoning the patches temporarily (I have
them in my browser and will restore them when things are stable again;
please do not touch anything).


Steve Baker has been working on a series of patches that optimize the
way we prepare the containers, but basically the workflow will be:
- pull containers needed for the undercloud into a local registry, using
the infra mirror if available
- deploy the containerized undercloud
- pull containers needed for the overcloud, minus the ones already pulled
for the undercloud, using the infra mirror if available
- update containers on the overcloud
- deploy the overcloud


Let me also note that it may be time to introduce job dependencies [0]. 
Dependencies might somewhat alleviate registry/mirror DoS issues, like the 
one we have currently, by running jobs in batches instead of firing them all 
off at once.
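
For instance, in a Zuul v3 project pipeline the multinode jobs could be made 
to wait on a cheaper job (a sketch only; the job names are illustrative):

# Illustrative sketch: run the undercloud job first and start the
# multinode scenario job only if it succeeds.
- project:
    check:
      jobs:
        - tripleo-ci-centos-7-undercloud-containers
        - tripleo-ci-centos-7-scenario001-multinode-oooq-container:
            dependencies:
              - tripleo-ci-centos-7-undercloud-containers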


We still have options to think about. The undercloud deployment takes
longer than standalone, but provides better coverage and therefore better
anticipates (and cuts off) future overcloud failures for the dependent
jobs. Standalone is less stable as of yet, though. The containers update
check may also be an option for step 1 or step 2, before the remaining
multinode jobs execute.


Skipping those dependent jobs, in turn, reduces the DoS effects on
registries and mirrors.


[0] 
https://review.openstack.org/#/q/status:open+project:openstack-infra/tripleo-ci+topic:ci_pipelines




With that process, we hope to reduce the runtime of the deployment and 
therefore reduce the timeouts in the gate.
To enable it, we need to land in that order: 
https://review.openstack.org/#/c/571613/, 
https://review.openstack.org/#/c/574485/, 
https://review.openstack.org/#/c/571631/ and 
https://review.openstack.org/#/c/568403.


In the meantime, we are disabling the containerized undercloud recently 
enabled on all scenarios: https://review.openstack.org/#/c/575264/ for 
mitigation with the hope to stabilize things until Steve's patches land.
Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the 
containerized undercloud on scenarios after checking that we don't have 
timeouts and reasonable deployment runtimes.


That's the plan we came up with; if you have any questions/feedback, please
share them.

--
Emilien, Steve and Wes





--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-14 Thread Jiří Stránský

+1

On 13.6.2018 17:50, Emilien Macchi wrote:

Alan Bishop has been highly involved in the Storage backends integration in
TripleO and Puppet modules, always there to add new features, fix
(nasty and untestable third-party backend) bugs, and manage all the
backports for stable releases:
https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22

He's also very knowledgeable about how TripleO works and how containers are
integrated. I would like to propose him as core on TripleO projects for
patches related to storage things (Cinder, Glance, Swift, Manila, and
backends).

Please vote -1/+1,
Thanks!









Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-14 Thread Sergii Golovatiuk
+1. Well deserved.

On Thu, Jun 14, 2018 at 11:08 AM, Bogdan Dobrelya  wrote:
> On 6/13/18 6:50 PM, Emilien Macchi wrote:
>>
>> Alan Bishop has been highly involved in the Storage backends integration
>> in TripleO and Puppet modules, always there to add new features, fix
>> (nasty and untestable third-party backend) bugs, and manage all the
>> backports for stable releases:
>>
>> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>>
>> He's also very knowledgeable about how TripleO works and how containers are
>> integrated. I would like to propose him as core on TripleO projects for
>> patches related to storage things (Cinder, Glance, Swift, Manila, and
>> backends).
>>
>> Please vote -1/+1,
>
>
> +1
>
>> Thanks!
>> --
>> Emilien Macchi
>>
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>



-- 
Best Regards,
Sergii Golovatiuk



[openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-14 Thread Thomas Goirand
Hi,

When using cinder-backup, it first makes a snapshot, then sends the
backup wherever it's configured. The issue is, to perform a backup, one
needs to make a snapshot of a volume, meaning that one needs the size of
the volume as empty space to be able to make the snapshot.

So, let's say I have a cinder volume of 1 TB; this means I need 1 TB of
empty space on the volume node so I can do a backup of that volume.

My question is: is there a way to tell cinder to reserve an amount of
space for this kind of operation? The only thing I saw was
reserved_percentage, but this looks like for thin provisioning only. If
this doesn't exist, would such new option be accepted by the Cinder
community, as a per volume node option? Or should we do it as a global
setting?

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-14 Thread Bogdan Dobrelya

On 6/13/18 6:50 PM, Emilien Macchi wrote:
Alan Bishop has been highly involved in the Storage backends integration
in TripleO and Puppet modules, always there to add new features, fix
(nasty and untestable third-party backend) bugs, and manage all the
backports for stable releases:

https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22

He's also very knowledgeable about how TripleO works and how containers are
integrated. I would like to propose him as core on TripleO projects for
patches related to storage things (Cinder, Glance, Swift, Manila, and
backends).


Please vote -1/+1,


+1


Thanks!
--
Emilien Macchi





--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs

2018-06-14 Thread Ghanshyam



  On Thu, 14 Jun 2018 05:55:55 +0900 Doug Hellmann wrote:
 > Excerpts from Doug Hellmann's message of 2018-06-13 12:19:18 -0400:
 > > Excerpts from Doug Hellmann's message of 2018-06-13 10:31:00 -0400:
 > > > Excerpts from Ghanshyam's message of 2018-06-13 16:52:33 +0900:
 > > > >   On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote:
 > > > >  > I would like to create a version of the jobs that run as part of
 > > > >  > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works
 > > > >  > under python 3. I'm not sure the best way to proceed, since that's
 > > > >  > a legacy job.
 > > > >  > 
 > > > >  > I'm not sure I'm familiar enough with the job to port it to be
 > > > >  > zuulv3 native and allow us to drop the "legacy". Should I just
 > > > >  > duplicate that job and modify it and keep the new one as "legacy"
 > > > >  > too?
 > > > >  > 
 > > > >  > Is there a different job I should base the work on? I don't see
 > > > >  > anything obvious in the tempest repo's .zuul.yaml file.
 > > > > 
 > > > > I had a quick glance at this job (legacy-tempest-dsvm-neutron-src),
 > > > > and it is similar to the tempest-full-py3 job except that it overrides
 > > > > LIBS_FROM_GIT with the corresponding lib. The tempest-full-py3 job is
 > > > > py3-based, runs the tempest-full tests, and disables the swift
 > > > > services.
 > > > > 
 > > > > You can create a new job (something like tempest-full-py3-src) derived
 > > > > from 'tempest-full-py3' if all its variables are OK for you (like
 > > > > disabling swift), OR derive from 'devstack-tempest' and then build the
 > > > > other variables similar to 'tempest-full-py3'. The extra thing you
 > > > > need to do is add the libs you want to override to the
 > > > > 'required-projects' list (FYI: LIBS_FROM_GIT is now automatically set
 > > > > based on required projects [2]).
 > > > > 
 > > > > Later, the old job (legacy-tempest-dsvm-neutron-src) can be migrated
 > > > > separately if it still needs to run, or removed.
 > > > > 
 > > > > But I am not sure which repo should own this new job.
 > > > 
 > > > Could it be as simple as adding tempest-full-py3 with the
 > > > required-projects list updated to include the current repository? So
 > > > there isn't a special separate job, and we would just reuse
 > > > tempest-full-py3 for this?

This can work if lib-forward-testing is only going to run against the current 
lib repo, not cross-lib or cross-project. For example, if neutron wants to test 
a neutron change against neutron-lib source, this will not work. But from the 
history [1], that does not seem to be the scope of lib-forward-testing.

We also do not need to add the current repo to the required-projects list or to 
LIBS_FROM_GIT; it is always checked out as master + the current patch changes. 
So this requires no change to the tempest-full-py3 job, and we can directly use 
the tempest-full-py3 job in lib-forward-testing. Testing in [2].

And if anyone needs cross-lib/project testing (like I mentioned above), it 
will be very easy: define a new job derived from tempest-full-py3 and add the 
required lib to the required-projects list.
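
For example, a cross-lib job for the neutron case above might look roughly 
like this (an illustrative sketch only; the job name is hypothetical, and per 
[2] LIBS_FROM_GIT would then be derived from required-projects automatically):

# Hypothetical sketch: run the tempest-full-py3 tests with neutron-lib
# installed from source instead of from PyPI.
- job:
    name: tempest-full-py3-neutron-lib-src
    parent: tempest-full-py3
    required-projects:
      - openstack/neutron-lib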

 > > > 
 > > > It would be less "automatic" than the current project-template and job,
 > > > but still relatively simple to set up. Am I missing something? This
 > > > feels too easy...
 > > 
 > > I think I could define a job with a name like tempest-full-py3-src based
 > > on tempest-full-py3 and set LIBS_FROM_GIT to include
 > > {{zuul.project.name}} in the devstack_localrc vars section. If I
 > > understand correctly, that would automatically set LIBS_FROM_GIT to
 > > refer to the project that the job is attached to, which would make it
 > > easier to use from a project-template (I would also create a
 > > lib-forward-testing-py3 project template to supplement
 > > lib-forward-testing).
 > > 
 > > Does that sound right?
 > > 
 > > Doug
 > 
 > This appears to be working.
 > 
 > https://review.openstack.org/575164 adds a job to oslo.config and the
 > log shows LIBS_FROM_GIT set to oslo.config's repository:
 > 
 > http://logs.openstack.org/64/575164/1/check/tempest-full-py3-src/7a193fa/job-output.txt.gz#_2018-06-13_19_01_22_742338
 > 
 > How does the QA team feel about hosting the job definition in the
 > tempest repository with the tempest-full-py3 job? If you think that will
 > work, I can propose the patch tomorrow.
 > 

[1] https://review.openstack.org/#/c/125433
[2] https://review.openstack.org/#/c/575324

-gmann

 > Doug
 > 




Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-14 Thread Martin André
On Wed, Jun 13, 2018 at 5:50 PM, Emilien Macchi  wrote:
> Alan Bishop has been highly involved in the Storage backends integration in
> TripleO and Puppet modules, always there to add new features, fix
> (nasty and untestable third-party backend) bugs, and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>
> He's also very knowledgeable about how TripleO works and how containers are
> integrated. I would like to propose him as core on TripleO projects for
> patches related to storage things (Cinder, Glance, Swift, Manila, and
> backends).
>
> Please vote -1/+1,

+1
