Re: [openstack-dev] [tripleo] FFE request for Manila integration

2016-08-26 Thread Steven Hardy
On Sat, Aug 27, 2016 at 12:26:51AM -0400, Ben Swartzlander wrote:
> The 3 patches we need to wrap up the Manila integration for TripleO still
> haven't gotten enough review attention to merge:
> 
> https://review.openstack.org/#/c/354019
> https://review.openstack.org/#/c/354014
> https://review.openstack.org/#/c/355394
> 
> Since it looks like the Feature Freeze is going to arrive without these having
> merged, I'd like to formally request an FFE for them.

Thanks for this; as you mentioned in the other thread, we lost track of this
because there's no bug or blueprint targeted to Newton in Launchpad.

I've reviewed the patches, and other than a question on the t-h-t one they
look good, so let's see if we can land them prior to newton-3; if that
doesn't happen, I think an FFE is appropriate.

Thanks!

Steve



[openstack-dev] [tripleo] FFE request for Manila integration

2016-08-26 Thread Ben Swartzlander
The 3 patches we need to wrap up the Manila integration for TripleO 
still haven't gotten enough review attention to merge:


https://review.openstack.org/#/c/354019
https://review.openstack.org/#/c/354014
https://review.openstack.org/#/c/355394

Since it looks like the Feature Freeze is going to arrive without these
having merged, I'd like to formally request an FFE for them.


They're all related to Manila, which is new in the Newton release, and
therefore I'd argue these don't add much risk. Worst case they affect
deployments of Manila, but Manila won't be very usable without them
anyway. Also, these are small and hopefully easy-to-review patches.


If there's anything procedural I need to do to make these patches more 
acceptable please let me know, and I'll be watching them over the next 
few days and responding to review feedback.


thanks,
-Ben Swartzlander
Manila PTL



Re: [openstack-dev] [kolla][vote] Core nomination for Dave Walker (Daviey on irc)

2016-08-26 Thread Vikram Hosakote (vhosakot)
+1.

Great work Dave!

Regards,
Vikram Hosakote
IRC:  vhosakot

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, August 23, 2016 at 4:45 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla][vote] Core nomination for Dave Walker (Daviey 
on irc)

Kolla core reviewers,

I am nominating Dave Walker for the Kolla core reviewer team.  His 30-day
review stats [1] place him in the middle of the pack for reviewers, and his
60-day stats [2] are about equivalent.  Dave participates heavily in IRC and has
done some good technical work, including the Watcher playbook and container.  He
also worked on Sensu, but since we are unclear whether we are choosing Sensu or
Tig, that work is on hold.  He will also be helping us sort out how to handle
PBR going forward on our stable and master branches.  Dave has proven to me
that his reviews are well thought out and that he understands the Kolla
architecture.  Dave is part time, like many Kolla core reviewers, and is
independent of any particular affiliation.

Consider this nomination as a +1 from me.

As a reminder, a +1 vote indicates you approve of the candidate, an abstain
vote indicates you don't care or don't know for certain, and a -1 vote
indicates a veto.  If a veto occurs, or a unanimous response is reached prior
to the end of our 7-day voting window (which concludes on August 30th), voting
will be closed early.

[1] http://stackalytics.com/report/contribution/kolla/30
[2] http://stackalytics.com/report/contribution/kolla/60


Re: [openstack-dev] [kolla] OSIC scale testing

2016-08-26 Thread Dave McCowan (dmccowan)
Steve and I just set up and kicked off Scenario #4.
The Rally test suite is running now.

This is "Fourth Deployment" from 
https://etherpad.openstack.org/p/kolla-N-midcycle-osic
This deployment is with two VIPs and TLS is configured on the external VIP.
Nodes: 3 control, 12 storage (with ceph), 100 compute.
Using OVS.

We changed:
   rally.git/rally_openstack.conf  (adding TLS)
   /etc/kolla/globals.yml  (adding TLS, changing internal VIP to .201, adding 
external VIP, changing to OVS)
   /etc/kolla/admin-openrc.sh (adding TLS)

So, whoever runs scenario #5 should double check those files match what you 
need.
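For reference, here is a rough sketch of the globals.yml side of those changes
(the addresses and certificate path are made up, and the option names assume
kolla's Newton-era configuration, so double-check both against your tree before
reusing this):

# Illustrative only: append the TLS/VIP settings to /etc/kolla/globals.yml.
cat >> /etc/kolla/globals.yml <<'EOF'
kolla_internal_vip_address: "172.22.0.201"
kolla_external_vip_address: "172.22.0.202"
kolla_enable_tls_external: "yes"
kolla_external_fqdn_cert: "/etc/kolla/certificates/haproxy.pem"
neutron_plugin_agent: "openvswitch"
EOF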

--Dave McCowan

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, August 26, 2016 at 9:43 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] OSIC scale testing

Hey folks,

We have nearly automated all of the OSIC testing, and there are instructions to
follow in NEXTSTEPS.  They take about 1 hour to execute (to set up a test) and
then you're all done.  We have the cluster until the 30th.  I need folks that
have access to help out as much as possible between now and the 30th so we can
finish our data gathering.

Also, as people go through the scenarios, can you post an update to the mailing
list noting that you put in your hour on the NEXTSTEPS execution?  Thanks.

Regards
-steve



[openstack-dev] [kolla] OSIC scale testing

2016-08-26 Thread Steven Dake (stdake)
Hey folks,

We have nearly automated all of the OSIC testing, and there are instructions to
follow in NEXTSTEPS.  They take about 1 hour to execute (to set up a test) and
then you're all done.  We have the cluster until the 30th.  I need folks that
have access to help out as much as possible between now and the 30th so we can
finish our data gathering.

Also, as people go through the scenarios, can you post an update to the mailing
list noting that you put in your hour on the NEXTSTEPS execution?  Thanks.

Regards
-steve



Re: [openstack-dev] A daily life of a dev in ${name_your_fav_project}

2016-08-26 Thread Paul Belanger
On Fri, Aug 26, 2016 at 09:48:26AM -0700, Clark Boylan wrote:
> On Fri, Aug 26, 2016, at 09:03 AM, Joshua Harlow wrote:
> > Hi folks (dev and more!),
> > 
> > I was having a conversation with some folks at godaddy around our future
> > plans for a developer lab (where we can have various setups of
> > networking, compute, storage...) for 'exploring' purposes (testing out a
> > new LBAAS for example, or ...) as well as for 'developer' purposes
> > (aka, 'I need to reproduce a bug or work on a feature that requires
> > a setup that more closely mimics what we have in staging or
> > production').
> > 
> > And it got me thinking about how other developers (and other companies) 
> > are doing this. Do various companies have shared labs that their 
> > developers get partitions of for (periods of) usage (for example for a 
> > volume vendor I would expect this) or if you are a networking company do 
> > you hand out miniature networks (with associated gear) as needed (or do 
> > you build out such labs via SDN and software only)?
> > 
> > Then of course there are the people developing libraries (somewhat of my
> > territory). Part of that development can just be done locally by running
> > tox and such, but oftentimes even that is not sufficient (for example,
> > take oslo.messaging or oslo.db: altering them in ways that pass unit
> > tests could still end up breaking their integration with other
> > projects); the gate helps here, but the gate really is a 'last barrier'.
> > So for folks who have been working on, say, zeromq or the newer amqp
> > versions, what is the daily life of testing and exploring features and
> > development like for you?
> > 
> > Are any of the environments people are using getting built out on
> > demand (i.e. in a cloud-like manner)? For example, I could see how it could
> > be pretty nifty to request an environment be built out with, say, something
> > like the following as a descriptor language:
> > 
> > build_out:
> > nova:
> >git_url: git://git.openstack.org/openstack/nova
> >git_ref: 
> > neutron:
> >git_url: 
> >git_ref: my sha
> > 
> > topology:
> >use_standard_config: true
> >build_with_switch_type: XYZ...
> > 
> > I hope this info is not just useful to myself (and maybe it's been 
> > talked about before, but nothing of recent that I can recall) and I'd be 
> > very much interested in hearing what other companies (big and small) are 
> > doing here (and also from folks that are not associated with any 
> > company, which I guess brings in the question of the OSIC lab).
> 
> As someone that semi frequently has to reproduce gate results my current
> setup involves semi frequently building the infra test images locally
> using openstack-infra/project-config/tools/build-image.sh then booting
> this image on my workstation using kvm. With that I can easily run a
> devstack-gate reproduce.sh or tox -e whatever and have a high degree of
> confidence that my setup mirrors the gate's.
> 
> When I worked for HP I did similar on top of HPCloud. This actually
> worked very well as I could much more easily share the results. If I had
> my choice of setup I would just ask for a set of openstack cloud
> credentials with a reasonable amount of quota. Dogfooding has been a
> great way to understand how openstack works and I think it has produced
> valuable feedback helping openstack improve.
> 
> Granted this is much harder to hand out when you need specific hardware
> resources (network switches, server hardware, whatever), but with Ironic
> most of that should be doable with the "give devs cloud credentials"
> model.
> 
> As for requesting an environment with nova version foo and neutron
> version bar and topology baz devstack does this a few thousand times per
> day. It may not be everyone's preferred tool, but my suggestion would be
> to not make another tool that does this and never gets tested. Instead
> use what is being tested.
> 
> TL;DR dogfood and use openstack, it solves this problem well.
> 
With zuulv2.5 we've started to make this easier too; we now publish our
ansible-playbooks alongside our job logs [1]. I have visions where we also
publish the minimal images created by nodepool (minus openstack-infra SSH keys
and git / package cache). Then we'd write a new ansible-playbook to bootstrap a
node in your local environment (ansible-role-cloud-launcher) and/or public cloud
account, then fire off the same ansible-playbook we did in zuul-launcher
(obviously using the reproduce.sh script).

It sounds like a mouthful, but I don't actually think it will be too much
work.  I've started pushing up patches to zuul-launcher to do this, but haven't
asked for the code to be merged, mostly because I think we want to wait until
zuulv3 for this.

[1] 
http://logs.openstack.org/49/361449/2/check/gate-puppet-iptables-puppet-lint/797d4da/_zuul_ansible/
[2] https://review.openstack.org/#/c/352627/


[openstack-dev] [Neutron][Nova] Neutron mid-cycle summary report

2016-08-26 Thread Armando M.
Hi Neutrinos,

For those of you who couldn't join in person, please find a few notes below
to capture some of the highlights of the event.

I would like to thank everyone who helped me put this report together,
and everyone who helped make this mid-cycle a fruitful one.

I would also like to thank IBM, and the individual organizers who made
everything go smoothly. In particular Martin, who put up with our moody
requests: thanks Martin!!

Feel free to reach out/add if something is unclear, incorrect or incomplete.

Cheers,
Armando

~~~

We touched on these topics (as initially proposed on
https://etherpad.openstack.org/p/newton-neutron-midcycle-workitems):


   - Keystone v3 and project-id adoption:
  - dasm and amotoki have been working to make the Neutron server
  process project-id correctly [1]. Looking at the spec [2], we are about
  halfway through: completing the DB migration, becoming Keystone v3
  compliant, and updating the client bindings [3].
 - [1] https://review.openstack.org/#/c/357977/
 - [2] https://review.openstack.org/#/c/257362/
 - [3] https://review.openstack.org/#/q/topic:bp/keystone-v3
  - Neutron-lib:
  - HenryG, dougwig and kevinbenton worked out a plan to get the
  common_db_mixin into neutron-lib. Because of the risk of regression,
  this is being deferred until Ocata opens up. However, simpler changes
  like the model_base move to the lib were agreed on and merged.
  - A plan to provide test support was discussed. The current strategy
  involves providing test base classes in lib (this reverses the stance
  conveyed in Austin). The usual steps involve making the currently
  private classes public, ensuring the lib's copies are up-to-date with
  core neutron, and deprecating the ones located in Neutron.
  - rtheis and armax worked on having networking-ovn test periodically
  against neutron-lib [1,2,3].
 - [1] https://review.openstack.org/#/c/357086/
 - [2] https://review.openstack.org/#/c/359143/
 - [3] https://review.openstack.org/#/c/357079/
  - A tool (tools/migration_report.sh) helps project teams determine the
  level of dependency they have on Neutron. It should be improved to
  report the exact offending imports.
  - Right now neutron-lib 0.4.0 is released and available in
  global-requirements/upper-constraints.
   - Objects and hitless upgrades:
  - Ihar gave the team an overview and status update [1]
  - There was a fruitful discussion that hopefully set the way forward
  for Ocata. The discussed plan was to start Ocata with the
expectation that
  no new contract scripts are landing in Ocata, and to revisit the
  requirement later if for some reason we see any issue with applying the
  requirement in practice.
  - Some work was done to deliver necessary objects for
  push-notifications. Patches up for review. Some review cycles
were spent to
  work on landing patches moving model definitions under neutron/db/models
  - [1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101838.html
  - OSC transition:


   - rtheis gave an update to the team on the state of the transition. Core
  resource commands are all available through OSC; QoS, Metering and *-aaS
  are still not converted.
  - There is some confusion on how to tackle openstacksdk support. We
  discussed the future goal of a python binding for the Networking API. OSC
  uses the OpenStack SDK for network commands, while the Neutron OSC plugin
  uses the python bindings from python-neutronclient. The open question is
  which of these developers who add new features should implement against:
  the OpenStack SDK, python-neutronclient, or both? There was no conclusion
  at the mid-cycle. This is not specific to neutron; a similar situation can
  happen for nova, cinder and other projects, and we need to raise it with
  the community.


   - Ocata is going to be the first release where the neutronclient CLI is
  officially deprecated. It may take us more than the usual two cycles to
  remove it altogether, but that's a signal to developers and users to
  seriously develop against OSC, and to report bugs against OSC.
  - Several pending contributions into osc-lib.
  - An update is available on [1,2]


   - [1] https://review.openstack.org/#/c/357844/
 - [2] https://etherpad.openstack.org/p/osc-neutron-support
  - Stability squash:
  - armax was bug deputy for the week of the mid-cycle; nothing
  critical showed up in the gate, however the pluggable ipam switch [1]
  merged, which might have some unexpected repercussions down the road.
  - A number of bugs older than a year were made expirable [2].
  - kevinbenton and armax devised a strategy and started working on [3]
  to ensure DB retriable 

[openstack-dev] [oslo][all] Oslo releases for newton

2016-08-26 Thread Joshua Harlow

Hi all,

Since it's that time of the cycle, the following is upon us:

'Final release for non-client libraries' (Aug 22 - 26)

So I just wanted to thank all those who have put a lot of hard work into
the various oslo libraries, and to note that going forward we (as a group)
should try to work on bug fixes, docs and general testing instead of any
further feature work (until the freeze is over).


Have a great weekend folks!

P.S.

For those who are interested, here is the final listing of library
releases for newton (scraped from the release repository):


===
Releases for newton
===

Automaton
-

Newton version = 1.4.0
Package link = https://pypi.python.org/pypi/automaton/1.4.0

Debtcollector
-

Newton version = 1.8.0
Package link = https://pypi.python.org/pypi/debtcollector/1.8.0

Futurist


Newton version = 0.18.0
Package link = https://pypi.python.org/pypi/futurist/0.18.0

Mox3


Newton version = 0.18.0
Package link = https://pypi.python.org/pypi/mox3/0.18.0

Oslo.Cache
--

Newton version = 1.14.0
Package link = https://pypi.python.org/pypi/oslo.cache/1.14.0

Oslo.Concurrency


Newton version = 3.14.0
Package link = https://pypi.python.org/pypi/oslo.concurrency/3.14.0

Oslo.Config
---

Newton version = 3.17.0
Package link = https://pypi.python.org/pypi/oslo.config/3.17.0

Oslo.Context


Newton version = 2.9.0
Package link = https://pypi.python.org/pypi/oslo.context/2.9.0

Oslo.Db
---

Newton version = 4.13.0
Package link = https://pypi.python.org/pypi/oslo.db/4.13.0

Oslo.I18N
-

Newton version = 3.9.0
Package link = https://pypi.python.org/pypi/oslo.i18n/3.9.0

Oslo.Log


Newton version = 3.16.0
Package link = https://pypi.python.org/pypi/oslo.log/3.16.0

Oslo.Messaging
--

Newton version = 5.10.0
Package link = https://pypi.python.org/pypi/oslo.messaging/5.10.0

Oslo.Middleware
---

Newton version = 3.19.0
Package link = https://pypi.python.org/pypi/oslo.middleware/3.19.0

Oslo.Policy
---

Newton version = 1.14.0
Package link = https://pypi.python.org/pypi/oslo.policy/1.14.0

Oslo.Privsep


Newton version = 1.13.0
Package link = https://pypi.python.org/pypi/oslo.privsep/1.13.0

Oslo.Reports


Newton version = 1.14.0
Package link = https://pypi.python.org/pypi/oslo.reports/1.14.0

Oslo.Rootwrap
-

Newton version = 5.1.0
Package link = https://pypi.python.org/pypi/oslo.rootwrap/5.1.0

Oslo.Serialization
--

Newton version = 2.13.0
Package link = https://pypi.python.org/pypi/oslo.serialization/2.13.0

Oslo.Service


Newton version = 1.16.0
Package link = https://pypi.python.org/pypi/oslo.service/1.16.0

Oslo.Utils
--

Newton version = 3.16.0
Package link = https://pypi.python.org/pypi/oslo.utils/3.16.0

Oslo.Versionedobjects
-

Newton version = 1.17.0
Package link = https://pypi.python.org/pypi/oslo.versionedobjects/1.17.0

Oslo.Vmware
---

Newton version = 2.14.0
Package link = https://pypi.python.org/pypi/oslo.vmware/2.14.0

Oslosphinx
--

Newton version = 4.7.0
Package link = https://pypi.python.org/pypi/oslosphinx/4.7.0

Oslotest


Newton version = 2.10.0
Package link = https://pypi.python.org/pypi/oslotest/2.10.0

Stevedore
-

Newton version = 1.17.1
Package link = https://pypi.python.org/pypi/stevedore/1.17.1

Taskflow


Newton version = 2.6.0
Package link = https://pypi.python.org/pypi/taskflow/2.6.0

Tooz


Newton version = 1.43.0
Package link = https://pypi.python.org/pypi/tooz/1.43.0
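
For anyone who wants to regenerate or extend a listing like this, here is a
rough sketch of scraping it from the openstack/releases repository (the
deliverables/<series>/ layout is assumed from that repo; this is not
necessarily the exact script used above):

# Illustrative only: print the latest newton version recorded for each
# deliverable in the openstack/releases repo.
git clone https://git.openstack.org/openstack/releases
cd releases
for f in deliverables/newton/*.yaml; do
    name=$(basename "$f" .yaml)
    # Releases are listed oldest-first, so take the last "version:" entry.
    latest=$(grep -E '^\s*- version:' "$f" | tail -1 | awk '{print $3}')
    echo "$name $latest"
done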



Re: [openstack-dev] [Manila] FFE for "HPE 3PAR Pool feature"

2016-08-26 Thread Ben Swartzlander

On 08/26/2016 02:38 PM, Mehta, jay wrote:

Hello all,

I am requesting you all to grant me an exception for the Pools feature for
the HPE 3PAR driver. The patch that implements this feature is
https://review.openstack.org/#/c/329552/, implementing blueprint
hpe3par-pool-support.





I have fixed the tempest and py34 failures, which are passing now. I also had a
Jenkins failure for some unit tests in the Huawei and share drivers,
for which I have uploaded another patch that fixes these unit test
failures: https://review.openstack.org/#/c/360088/



This is a good feature for us to have in the Newton release. I have had a few
code reviews in the past and I have addressed those comments. I believe
there won't be many more review comments and this should be
easy to merge.

This is not a big feature, and most of the code changes are specific to the
3PAR driver. Unit tests are implemented to keep code coverage at the
desired level.



Please grant an exemption for the marginal delay and consider this change for
the Newton release.


This FFE is granted; however, there are remaining concerns about the code
that need to be addressed. Granting this FFE isn't a guarantee that the
patch will merge, just that we will consider it.


For the future I'd like to remind people that the FPF deadline means you 
shouldn't continue adding features to your patch after the deadline. The 
only changes we should see to patches after FPF are responses to review 
comments and resolving of merge conflicts.


-Ben



Thanks and Regards,

Jay Mehta









Re: [openstack-dev] [freezer] Core team updates

2016-08-26 Thread Dieterly, Deklan
+1
-- 
Deklan Dieterly

Senior Systems Software Engineer
HPE




On 8/25/16, 9:33 AM, "Mathieu, Pierre-Arthur"
 wrote:

>Hello, 
>
>I would like to propose some modifications regarding the Freezer core
>team. 
>
>First, the removal of two inactive members:
>  - Fabrizio Fresco: Inactive
>  - Eldar Nugaev: Switched company and is now focusing on other projects.
>Thank you very much for your contributions.
>
>
>Secondly, I would like to propose that we promote Yang Yapeng
>(yangyapeng) to core.
>He has been a highly valuable developer for the past few months, mainly
>working on integration with Nova and Cinder.
>His work can be found here: [1]
>And his stackalitics profile here: [2]
>
>
>If you agree with all these changes, please approve with a +1 answer or
>explain your opinion on any of these individual modifications.
>If there are no objections, I plan on applying these tomorrow evening.
>
>Thanks
>- Pierre, Freezer PTL
>
>[1]  https://review.openstack.org/#/q/owner:%22yapeng+Yang%22
>[2] http://stackalytics.com/?release=all=loc_id=yang-yapeng
>




[openstack-dev] [api] [all] New landing page for API docs collection

2016-08-26 Thread Anne Gentle
Hi all,
I've put together a solution for your review to replace
developer.openstack.org/api-ref.html with a new landing page. My idea is to
repurpose this page: http://developer.openstack.org/api-guide/quick-start/
as a collection point for all the API information. Once this lands, I will
redirect developer.openstack.org/api-ref.html to this new page.

Review is here:
https://review.openstack.org/#/c/361480/

Some projects are not in the list due to not having an API reference page
that is published to developer.openstack.org. Top of mind are cinder and
telemetry as you're super close and only need a little bit more work to get
over the line. I'm happy to get you where you want to be, just let me know
what you need.

Let me know your thoughts on how to get all the APIs in a single page as a
good starting point for consumers.
Thanks,
Anne

-- 
Anne Gentle
www.justwriteclick.com


Re: [openstack-dev] [cinder] [neutron] [ironic] [api] [doc] API status report

2016-08-26 Thread Anne Gentle
Hi cinder block storage peeps:

I haven't heard from you on your comfort level with publishing so I went
ahead and made the publishing job myself with this review:

https://review.openstack.org/361475

Please let me know your thoughts there. Is the document ready to publish?
Need anything else to get comfy? Let me know.

Thanks,
Anne

On Thu, Aug 11, 2016 at 7:52 AM, Anne Gentle 
wrote:

>
>
> On Wed, Aug 10, 2016 at 2:49 PM, Anne Gentle <annegen...@justwriteclick.com> wrote:
>
>> Hi all,
>> I wanted to report on status and answer any questions you all have about
>> the API reference and guide publishing process.
>>
>> The expectation is that we provide all OpenStack API information on
>> developer.openstack.org. In order to meet that goal, it's simplest for
>> now to have all projects use the RST+YAML+openstackdocstheme+os-api-ref
>> extension tooling so that users see available OpenStack APIs in a sidebar
>> navigation drop-down list.
>>
>> --Migration--
>> The current status for migration is that all WADL content is migrated
>> except for trove. There is a patch in progress and I'm in contact with the
>> team to assist in any way. https://review.openstack.org/#/c/316381/
>>
>> --Theme, extension, release requirements--
>> The current status for the theme, navigation, and Sphinx extension
>> tooling is contained in the latest post from Graham proposing a solution
>> for the release number switchover and offers to help teams as needed:
>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html
>> I hope to meet the requirements deadline to get those
>> changes landed. Requirements freeze is Aug 29.
>>
>> --Project coverage--
>> The current status for project coverage is that these projects are now
>> using the RST+YAML in-tree workflow and tools and publishing to
>> http://developer.openstack.org/api-ref/ so they will be
>> included in the upcoming API navigation sidebar intended to span all
>> OpenStack APIs:
>>
>> designate http://developer.openstack.org/api-ref/dns/
>> glance http://developer.openstack.org/api-ref/image/
>> heat http://developer.openstack.org/api-ref/orchestration/
>> ironic http://developer.openstack.org/api-ref/baremetal/
>> keystone http://developer.openstack.org/api-ref/identity/
>> manila http://developer.openstack.org/api-ref/shared-file-systems/
>> neutron-lib http://developer.openstack.org/api-ref/networking/
>> nova http://developer.openstack.org/api-ref/compute/
>> sahara http://developer.openstack.org/api-ref/data-processing/
>> senlin http://developer.openstack.org/api-ref/clustering/
>> swift http://developer.openstack.org/api-ref/object-storage/
>> zaqar http://developer.openstack.org/api-ref/messaging/
>>
>> These projects are using the in-tree workflow and common tools, but do
>> not have a publish job in project-config in the jenkins/jobs/projects.yaml
>> file.
>>
>> ceilometer
>>
>
> Sorry, in reviewing further today I found another project that does not
> have a publish job but has in-tree source files:
>
> cinder
>
> Team cinder: can you let me know where you are in your publishing comfort
> level? Please add an api-ref-jobs: line with a target of block-storage
> to jenkins/jobs/projects.yaml in the project-config repo to ensure
> publishing is correct.
>
> Another issue is the name of the target directory for the final URL. Team
> ironic can I change your api-ref-jobs: line to bare-metal instead of
> baremetal? It'll be better for search engines and for alignment with the
> other projects URLs: https://review.openstack.org/354135
>
> I've also uncovered a problem where a neutron project's API does not have
> an official service name, and am working on a solution but need help from
> the neutron team: https://review.openstack.org/#/c/351407
> Thanks,
> Anne
>
>
>>
>> --Projects not using common tooling--
>> These projects have API docs but are not yet using the common tooling, as
>> far as I can tell. Because of the user experience, I'm making a judgement
>> call that these cannot be included in the common navigation. I have patched
>> the projects.yaml file in the governance repo with the URLs I could
>> screen-scrape, but if I'm incorrect please do patch the projects.yaml in
>> the governance repo.
>>
>> astara
>> cloudkitty
>> congress
>> magnum
>> mistral
>> monasca
>> solum
>> tacker
>> trove
>>
>> Please reach out if you have questions or need assistance getting started
>> with the new common tooling, documented here:
>> http://docs.openstack.org/contributor-guide/api-guides.html.
>>
>> For searchlight, looking at http://developer.openstack.org/api-ref/search/
>> they have the build job, but the info is not complete
>> yet.
>>
>> One additional project I'm not sure what to do with is networking-sfc,
>> since I'm not sure it is considered a neutron API. Can I get help to sort
>> that question out?
>>
>> --Redirects from old pages--
>> We have been adding .htaccess redirects from the old
>> 

[openstack-dev] [nova] [os-vif] [neutron] Race in setting up linux bridge

2016-08-26 Thread Armando M.
Folks,

Today I spotted [1]. It turns out Neutron and Nova might be racing when trying
to set up the bridge that provides the VM with connectivity/DHCP. In the
observed failure mode, os-vif fails in [2].

I suppose we might need to protect the bridge creation and make it handle
the potential exception; a rough sketch of the idea follows the links below.
We would need a similar fix for Neutron in [3].

That said, knowing there is a looming deadline [4], I'd invite folks to
keep an eye on the bug.

Many thanks,
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1617447
[2]
https://github.com/openstack/os-vif/blob/master/vif_plug_linux_bridge/linux_net.py#L125
[3]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/linux/bridge_lib.py#n58
[4]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102339.html
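
To make that concrete, here is a rough sketch (shell only, purely illustrative;
not the actual os-vif or Neutron code) of what "protect the bridge creation"
could look like: treat a creation failure as success if the bridge turns out to
exist afterwards, because the other agent may simply have won the race.

# Illustrative only: create a Linux bridge idempotently, tolerating a
# concurrent creator (nova/os-vif racing the neutron agent).
ensure_bridge() {
    local br="$1"
    # Nothing to do if the bridge is already there.
    ip link show "$br" >/dev/null 2>&1 && return 0
    if ! ip link add name "$br" type bridge 2>/dev/null; then
        # Creation failed; that is only acceptable if someone else created it.
        ip link show "$br" >/dev/null 2>&1 || return 1
    fi
    ip link set "$br" up
}

ensure_bridge brq12345678-aa   # hypothetical bridge name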


Re: [openstack-dev] [tripleo][release] Plans re newton-3 release and feature freeze exceptions

2016-08-26 Thread Ben Swartzlander

On 08/26/2016 02:04 PM, James Slagle wrote:

On Fri, Aug 26, 2016 at 12:14 PM, Steven Hardy  wrote:


1. Mistral API

We've made good progress on this over recent weeks, but several patches
remain - this is the umbrella BP, and it links several dependent BPs which
are mostly posted but need code reviews, please help by testing and
reviewing these:

https://blueprints.launchpad.net/tripleo/+spec/mistral-deployment-library


Based on what's linked off of that blueprint, here's what's left:

https://blueprints.launchpad.net/tripleo/+spec/cli-deployment-via-workflow
topic branch: 
https://review.openstack.org/#/q/status:open+project:openstack/python-tripleoclient+branch:master+topic:deploy
5 patches, 2 are marked WIP, all need reviews

https://blueprints.launchpad.net/tripleo-ui/+spec/tripleo-ui-mistral-refactoring
topic branch: 
https://review.openstack.org/#/q/topic:bp/tripleo-ui-mistral-refactoring
1 tripleo-ui patch
1 tripleo-common patch that is Workflow -1
1 tripleoclient patch that I just approved

https://blueprints.launchpad.net/tripleo/+spec/roles-list-action
single patch: https://review.openstack.org/#/c/330283/, needs review

From: https://etherpad.openstack.org/p/tripleo-mistral-api ---
https://review.openstack.org/#/c/355598/ (merge conflict, needs review)
https://review.openstack.org/#/c/348875/ (just approved, should merge)
https://review.openstack.org/#/c/341572/ (just approved, should merge)

Additionally, there are the validations patches:
https://review.openstack.org/#/q/topic:mistral-validations

If I missed anything, please point it out.


There are 3 small patches that need to be included in Newton. The author,
marios, didn't tag them with a blueprint or bug, so they don't show up on
the LP milestone page; however, there's a joint NetApp/Red Hat
presentation in Barcelona which assumes we have NetApp driver support
for Manila in TripleO.


https://review.openstack.org/#/c/354019
https://review.openstack.org/#/c/354014
https://review.openstack.org/#/c/355394

I'm hoping these can go in today, but if not I'll file an FFE for them.
AFAIK there are no problems with these.


-Ben Swartzlander








[openstack-dev] [Manila] FFE for "HPE 3PAR Pool feature"

2016-08-26 Thread Mehta, jay
Hello all,

I am requesting you all to grant me an exception for the Pools feature for the
HPE 3PAR driver. The patch that implements this feature is
https://review.openstack.org/#/c/329552/, implementing blueprint
hpe3par-pool-support.

I have fixed the tempest and py34 failures, which are passing now. I also had a
Jenkins failure for some unit tests in the Huawei and share drivers, for which
I have uploaded another patch that fixes these unit test failures:
https://review.openstack.org/#/c/360088/

This is a good feature for us to have in the Newton release. I have had a few
code reviews in the past and I have addressed those comments. I believe there
won't be many more review comments and this should be easy to merge.
This is not a big feature, and most of the code changes are specific to the
3PAR driver. Unit tests are implemented to keep code coverage at the desired
level.

Please grant an exemption for the marginal delay and consider this change for
the Newton release.

Thanks and Regards,
Jay Mehta


Re: [openstack-dev] A daily life of a dev in ${name_your_fav_project}

2016-08-26 Thread Clark Boylan
On Fri, Aug 26, 2016, at 10:43 AM, John Villalovos wrote:
> On Fri, Aug 26, 2016 at 9:48 AM, Clark Boylan 
> wrote:
> > As someone that semi frequently has to reproduce gate results my current
> > setup involves semi frequently building the infra test images locally
> > using openstack-infra/project-config/tools/build-image.sh then booting
> > this image on my workstation using kvm. With that I can easily run a
> > devstack-gate reproduce.sh or tox -e whatever and have a high degree of
> > confidence that my setup mirrors the gate's.
> 
> Nice to know about that! Thanks.
> 
> Just curious how does someone reproduce a multi-node devstack-gate
> job. I have selfish reasons for wanting to know :)

The simplest way to do this without a nodepool would be to boot the
number of instances you need, then edit /etc/nodepool on each one of them
so that you have the files described at
http://docs.openstack.org/infra/nodepool/scripts.html#ready-script with
the appropriate info for your instances (one should be the primary and the
others subnodes; make sure the IPs are correct for your setup, etc.). Then run
the reproduce.sh script on the primary node.
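
For anyone trying this, a minimal sketch of that /etc/nodepool editing on the
primary node might look like the following (the file names are taken from the
ready-script documentation linked above and the IPs are made up, so verify both
against that page before relying on this):

# Illustrative only: run on the node chosen as the primary.
sudo mkdir -p /etc/nodepool
echo "primary"   | sudo tee /etc/nodepool/role
echo "10.0.1.10" | sudo tee /etc/nodepool/node_private          # this node's private IP
echo "10.0.1.10" | sudo tee /etc/nodepool/primary_node_private  # the primary's private IP
printf "10.0.1.11\n10.0.1.12\n" | sudo tee /etc/nodepool/sub_nodes_private  # one subnode per line
# On each subnode, set role to "sub" and point primary_node_private at the
# primary's IP; the real ready script also distributes an SSH key so the
# primary can reach the subnodes. Then run the job's reproduce.sh on the primary.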

Clark



Re: [openstack-dev] [tacker] Proposing Yong Sheng Gong for Tacker core team

2016-08-26 Thread Sridhar Ramaswamy
We have enough votes to proceed.

Yong - welcome to the Tacker core team!

- Sridhar

On Tue, Aug 23, 2016 at 12:06 PM, Stephen Wong 
wrote:

> +1
>
> On Tue, Aug 23, 2016 at 8:55 AM, Sridhar Ramaswamy 
> wrote:
>
>> Tackers,
>>
>> I'd like to propose Yong Sheng Gong to join the Tacker core team. Yong is
>> a seasoned OpenStacker and has been contributing to the Tacker project since
>> Nov 2015 (early Mitaka). He has been the major force in helping Tacker to
>> shed its *Neutronisms*. He has a low tolerance for unevenness in the code
>> base and he fixes it as he goes. Yong also participated in the Enhanced
>> Placement Awareness (EPA) blueprint in the Mitaka cycle. For Newton he took
>> it upon himself to clean up the DB schema and joined numerous reviews to keep
>> the project going. He has been a dependable member of the Tacker community [1].
>>
>> Please chime in with your +1 / -1 votes.
>>
>> thanks,
>> Sridhar
>>
>> [1] http://stackalytics.com/report/contribution/tacker/90


Re: [openstack-dev] [tripleo][release] Plans re newton-3 release and feature freeze exceptions

2016-08-26 Thread James Slagle
On Fri, Aug 26, 2016 at 12:14 PM, Steven Hardy  wrote:
>
> 1. Mistral API
>
> We've made good progress on this over recent weeks, but several patches
> remain - this is the umbrella BP, and it links several dependent BPs which
> are mostly posted but need code reviews, please help by testing and
> reviewing these:
>
> https://blueprints.launchpad.net/tripleo/+spec/mistral-deployment-library

Based on what's linked off of that blueprint, here's what's left:

https://blueprints.launchpad.net/tripleo/+spec/cli-deployment-via-workflow
topic branch: 
https://review.openstack.org/#/q/status:open+project:openstack/python-tripleoclient+branch:master+topic:deploy
5 patches, 2 are marked WIP, all need reviews

https://blueprints.launchpad.net/tripleo-ui/+spec/tripleo-ui-mistral-refactoring
topic branch: 
https://review.openstack.org/#/q/topic:bp/tripleo-ui-mistral-refactoring
1 tripleo-ui patch
1 tripleo-common patch that is Workflow -1
1 tripleoclient patch that I just approved

https://blueprints.launchpad.net/tripleo/+spec/roles-list-action
single patch: https://review.openstack.org/#/c/330283/, needs review

From: https://etherpad.openstack.org/p/tripleo-mistral-api ---
https://review.openstack.org/#/c/355598/ (merge conflict, needs review)
https://review.openstack.org/#/c/348875/ (just approved, should merge)
https://review.openstack.org/#/c/341572/ (just approved, should merge)

Additionally, there are the validations patches:
https://review.openstack.org/#/q/topic:mistral-validations

If I missed anything, please point it out.

-- 
-- James Slagle
--



Re: [openstack-dev] A daily life of a dev in ${name_your_fav_project}

2016-08-26 Thread John Villalovos
On Fri, Aug 26, 2016 at 9:48 AM, Clark Boylan  wrote:
> As someone that semi frequently has to reproduce gate results my current
> setup involves semi frequently building the infra test images locally
> using openstack-infra/project-config/tools/build-image.sh then booting
> this image on my workstation using kvm. With that I can easily run a
> devstack-gate reproduce.sh or tox -e whatever and have a high degree of
> confidence that my setup mirrors the gate's.

Nice to know about that! Thanks.

Just curious, how does someone reproduce a multi-node devstack-gate
job? I have selfish reasons for wanting to know :)



Re: [openstack-dev] A daily life of a dev in ${name_your_fav_project}

2016-08-26 Thread Joshua Harlow





As someone that semi frequently has to reproduce gate results my current
setup involves semi frequently building the infra test images locally
using openstack-infra/project-config/tools/build-image.sh then booting
this image on my workstation using kvm. With that I can easily run a
devstack-gate reproduce.sh or tox -e whatever and have a high degree of
confidence that my setup mirrors the gate's.


Cool, so that mirrors the gate, but say you want to go larger and have a
mini-cluster with a router Y here, an lbaas of X type there, and you are,
say, testing a new feature in neutron (and that feature interacts with
some hardware that isn't open); I'm also wondering how such 'vendors', or
people that integrate with 'vendors' (with physical things),
do their kind of local dev or integration work.




When I worked for HP I did similar on top of HPCloud. This actually
worked very well as I could much more easily share the results. If I had
my choice of setup I would just ask for a set of openstack cloud
credentials with a reasonable amount of quota. Dogfooding has been a
great way to understand how openstack works and I think it has produced
valuable feedback helping openstack improve.


Agreed, no doubt; it works great when everything is software-defined
(for better or worse, not everything is there quite yet; at
some point things hit real hardware).




Granted this is much harder to hand out when you need specific hardware
resources (network switches, server hardware, whatever), but with Ironic
most of that should be doable with the "give devs cloud credentials"
model.



Yup.


As for requesting an environment with nova version foo and neutron
version bar and topology baz devstack does this a few thousand times per
day. It may not be everyone's preferred tool, but my suggestion would be
to not make another tool that does this and never gets tested. Instead
use what is being tested.


Right, not suggesting another tool, just 
wondering/brainstorming/thinking about what others are using for this 
kind of stuff (especially in dev work that involves > 1 
machine/instance/ and more closely matches 
an actual deployment; because for better or worse some features and bugs 
can only be worked on or reproduced in more complicated setups).




TL;DR dogfood and use openstack, it solves this problem well.

Clark






Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread John Griffith
On Fri, Aug 26, 2016 at 10:20 AM, Ed Leafe  wrote:

> On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:
>
> > One other thing to note is that while a flavor constrains how much local
> > disk is used it does not constrain volume size at all. So a user can
> > specify an ephemeral/swap disk <= to what the flavor provides but can
> > have an arbitrary sized root disk if it's a remote volume.
>
> This kind of goes to the heart of the argument against flavors being the
> sole source of truth for a request. As cloud evolves, we keep packing more
> and more stuff into a concept that was originally meant to only divide up
> resources that came bundled together (CPU, RAM, and local disk). This
> hasn’t been a good solution for years, and the sooner we start accepting
> that a request can be much more complex than a flavor can adequately
> express, the better.
>
> If we have decided that remote volumes are a good thing (I don’t think
> there’s any argument there), then we should treat that part of the request
> as being as fundamental as a flavor. We need to make the scheduler smarter
> so that it doesn’t rely on flavor as being the only source of truth.
>
​+1​


>
> The first step to improving Nova is admitting we have a problem. :)
>
>
> -- Ed Leafe
>
>
>
>
>
>


Re: [openstack-dev] A daily life of a dev in ${name_your_fav_project}

2016-08-26 Thread Clark Boylan
On Fri, Aug 26, 2016, at 09:03 AM, Joshua Harlow wrote:
> Hi folks (dev and more!),
> 
> I was having a conversation with some folks at godaddy around our future
> plans for a developer lab (where we can have various setups of
> networking, compute, storage...) for 'exploring' purposes (testing out a
> new LBAAS for example, or ...) as well as for 'developer' purposes
> (aka, 'I need to reproduce a bug or work on a feature that requires
> a setup that more closely mimics what we have in staging or
> production').
> 
> And it got me thinking about how other developers (and other companies) 
> are doing this. Do various companies have shared labs that their 
> developers get partitions of for (periods of) usage (for example for a 
> volume vendor I would expect this) or if you are a networking company do 
> you hand out miniature networks (with associated gear) as needed (or do 
> you build out such labs via SDN and software only)?
> 
> Then of course there are the people developing libraries (somewhat of my
> territory). Part of that development can just be done locally by running
> tox and such, but oftentimes even that is not sufficient (for example,
> take oslo.messaging or oslo.db: altering them in ways that pass unit
> tests could still end up breaking their integration with other
> projects); the gate helps here, but the gate really is a 'last barrier'.
> So for folks who have been working on, say, zeromq or the newer amqp
> versions, what is the daily life of testing and exploring features and
> development like for you?
> 
> Are any of the environments people are using getting built out on
> demand (i.e. in a cloud-like manner)? For example, I could see how it could
> be pretty nifty to request an environment be built out with, say, something
> like the following as a descriptor language:
> 
> build_out:
> nova:
>git_url: git://git.openstack.org/openstack/nova
>git_ref: 
> neutron:
>git_url: 
>git_ref: my sha
> 
> topology:
>use_standard_config: true
>build_with_switch_type: XYZ...
> 
> I hope this info is not just useful to myself (and maybe it's been 
> talked about before, but nothing of recent that I can recall) and I'd be 
> very much interested in hearing what other companies (big and small) are 
> doing here (and also from folks that are not associated with any 
> company, which I guess brings in the question of the OSIC lab).

As someone that semi frequently has to reproduce gate results my current
setup involves semi frequently building the infra test images locally
using openstack-infra/project-config/tools/build-image.sh then booting
this image on my workstation using kvm. With that I can easily run a
devstack-gate reproduce.sh or tox -e whatever and have a high degree of
confidence that my setup mirrors the gate's.
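
For anyone who wants to try the same thing, here is a rough outline of that
workflow (the repository and script path are the ones described above; the
boot step, image filename and sizing are only illustrative):

# Build a local copy of the infra test image, then boot it with KVM.
git clone https://git.openstack.org/openstack-infra/project-config
cd project-config
./tools/build-image.sh            # produces a nodepool-style qcow2 image

# Boot the result with KVM (image filename and sizing are illustrative).
qemu-system-x86_64 -enable-kvm -m 8192 -smp 4 \
  -drive file=$PWD/image.qcow2,format=qcow2

# Inside the VM, run the job's reproduce.sh or `tox -e <env>` as usual.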

When I worked for HP I did similar on top of HPCloud. This actually
worked very well as I could much more easily share the results. If I had
my choice of setup I would just ask for a set of openstack cloud
credentials with a reasonable amount of quota. Dogfooding has been a
great way to understand how openstack works and I think it has produced
valuable feedback helping openstack improve.

Granted this is much harder to hand out when you need specific hardware
resources (network switches, server hardware, whatever), but with Ironic
most of that should be doable with the "give devs cloud credentials"
model.

As for requesting an environment with nova version foo and neutron
version bar and topology baz devstack does this a few thousand times per
day. It may not be everyone's preferred tool, but my suggestion would be
to not make another tool that does this and never gets tested. Instead
use what is being tested.

TL;DR dogfood and use openstack, it solves this problem well.

Clark




Re: [openstack-dev] [tripleo] collaboration request with vendors

2016-08-26 Thread Emilien Macchi
On Thu, Aug 25, 2016 at 5:53 PM, Qasim Sarfraz  wrote:
> Steven/Emilien,
>
> PLUMgrid will be happy to collaborate in the effort. A much needed effort
> for healthy integration of vendors with TripleO.
>
> What level of commitment would be expected from our side for this effort? As
> Steve mentioned each vendor will have some requirements like customizing the
> overcloud images so lets list them down to scope the effort.

Since Plumgrid is proprietary software I'm not sure how we could
test it, to be honest, but you might have some ideas on how to do it.
If it's something we can download from the internet and install
during a tripleo deployment, then we could use our current multinode
jobs that run in OpenStack Infra.
If it requires specific hardware or more resources than we can provide,
we might consider third-party CI using plumgrid servers if you're
willing to do it.

Let's first see how we could install your software and from there
investigate a first integration.

Thanks for your collaboration!

> Let me know if you want to discuss this in any TripleO meeting.
>
>
> On Thu, Aug 25, 2016 at 6:20 PM, Steven Hardy  wrote:
>>
>> On Wed, Aug 24, 2016 at 03:11:38PM -0400, Emilien Macchi wrote:
>> > TripleO does support multiple vendors for different type of backends.
>> > Here are some examples:
>> > Neutron networking: Cisco, Nuage, Opencontrail, Midonet, Plumgrid,
>> > Biswitch
>> > Cinder: Dell, Netapp, Ceph
>> >
>> > TripleO developers are struggling to maintain the environment files
>> > that allow to deploy those backends because it's very hard to test
>> > them:
>> > - not enough hardware
>> > - zero knowledge at how to deploy the actual backend system
>> > - no time to test all backends
>> >
>> > Recently, we made some changes in TripleO CI that will help us to
>> > scale the way we test TripleO in the future.
>> > One of those changes is that we can now deploy TripleO using nodepool
>> > instances like devstack jobs.
>> >
>> > I wrote a prototype of TripleO job scenario:
>> > https://review.openstack.org/#/c/360039/ that will allow us to have
>> > more CI jobs with less services installed on each, so we can save
>> > performances while increasing services coverage.
>> > I would like to re-use those bits to test our vendors backends.
>> >
>> > Here's the proposal:
>> > - for vendors backends that can be deployed using TripleO itself
>> > (open-source backend systems like OpenContrail, Midonet, etc): we
>> > could re-use the scenario approach by adding new scenarios for each
>> > backend.
>> > The jobs would only be triggered if we touch environment files related
>> > on the backend in THT or the puppet profiles for the backend in
>> > puppet-tripleo or the puppet backend class in puppet-neutron for the
>> > backend (all thanks to Zuul magic).
>>
>> This sounds good, my only concern is how we handle things breaking when
>> something outside of tripleo changes (e.g triage of bugs related to the
>> vendor backends).
>>
>> If we can get some commitment folks will show up to help with that then
>> definitely +1 on doing this.
>>
>> There are some additional complexities around images we'll need to
>> consider
>> too, as some (all?) of these backends require customization of the
>> overcloud images (e.g adding some additional pieces related to the enabled
>> vendor backend).
>>
>> > - for vendors backends that can't be deployed using TripleO itself
>> > (not implemented in the services and / or not open-source):
>> > Like most of you probably did for devstack jobs in neutron/cinder's
>> > gates, work with us to implement CI jobs that would deploy TripleO
>> > with your backend. I don't have the exact technical solution right
>> > now, but at least I would like to know who would be interested by this
>> > collaboration.
>>
>> This also sounds good, but it's unclear to me atm if we have any folks
>> willing to step up and do this work.  If people with bandwidth to do this
>> can be identified then it would be good investigate.
>>
>> Steve
>>
>
>
>
>
> --
> Regards,
> Qasim Sarfraz
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Ed Leafe
On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:

> One other thing to note is that while a flavor constrains how much local
> disk is used it does not constrain volume size at all. So a user can
> specify an ephemeral/swap disk <= to what the flavor provides but can
> have an arbitrary sized root disk if it's a remote volume.

This kind of goes to the heart of the argument against flavors being the sole 
source of truth for a request. As cloud evolves, we keep packing more and more 
stuff into a concept that was originally meant to only divide up resources that 
came bundled together (CPU, RAM, and local disk). This hasn’t been a good 
solution for years, and the sooner we start accepting that a request can be 
much more complex than a flavor can adequately express, the better.

If we have decided that remote volumes are a good thing (I don’t think there’s 
any argument there), then we should treat that part of the request as being as 
fundamental as a flavor. We need to make the scheduler smarter so that it 
doesn’t rely on flavor as being the only source of truth.

The first step to improving Nova is admitting we have a problem. :)


-- Ed Leafe








[openstack-dev] [tripleo][release] Plans re newton-3 release and feature freeze exceptions

2016-08-26 Thread Steven Hardy
Hi all,

There have been some discussions on $subject recently, so I wanted to give
a status update.

Next week we will tag our newton-3 release, and we're currently working to
either land or defer the remaining features tracked here:

https://launchpad.net/tripleo/+milestone/newton-3

We need to land as many of the "Needs Code Review" features as we can
before cutting the release next week, so please help by prioritizing your
reviews.

--
Feature Freeze
--

After newton-3 is released, we'll have reached Feature Freeze for Newton,
and any features landed after this point must be agreed as feature freeze
exceptions (everything else will be deferred to Ocata). The process is to
mail this list with a justification and details of which patches need to
land; we'll then reach consensus on whether it will be accepted based
on the level of risk and the status of the patches.
Currently there are three potential FFEs which I'm aware of:

1. Mistral API

We've made good progress on this over recent weeks, but several patches
remain - this is the umbrella BP, and it links several dependent BPs which
are mostly posted but need code reviews, please help by testing and
reviewing these:

https://blueprints.launchpad.net/tripleo/+spec/mistral-deployment-library

2. Composable Roles

There are two parts to this, some remaining cleanups related to per-service
configuration (such as bind_ip's) which need to land, and the related
custom-roles feature:

https://bugs.launchpad.net/tripleo/+bug/1604414

https://blueprints.launchpad.net/tripleo/+spec/custom-roles

Some patches still need to be fixed or written to enable custom-roles -
it's a stretch but I'd say a FFE may be reasonable provided we can get the
remaining patches functional in the next few days (I'm planning to do this)

3. Contrail integration

There are patches posted for this, but they need work - Carlos is helping
so I'd suggest it should be possible to land these as a FFE (should be low
risk as it's all disabled by default)

https://blueprints.launchpad.net/tripleo/+spec/contrail-services

These are the main features I'm aware of that are targetted to newton-3
but will probably slip, are there others folks want to raise?


Bugs


Any bugs not fixed by newton-3 will be deferred to an RC1 milestone I
created, so that we can track remaining release-blocker bugs in the weeks
leading to the final release.  Please ensure all bugs are targetted to this
milestone so we don't miss them.

https://launchpad.net/tripleo/+milestone/newton-rc1

Please let me know if there are any questions or concerns, and thanks to
everyone for all the help getting to this point, it's been a tough but
productive cycle, and I'm looking forward to reaching our final newton
release! :)

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Tim Bell

On 26 Aug 2016, at 17:44, Andrew Laski wrote:




On Fri, Aug 26, 2016, at 11:01 AM, John Griffith wrote:


On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski wrote:


On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com wrote:
> Hi,
> option 1 (=that's what patches suggest) sounds totally fine.
> Option 3 > Allow block device mappings, when present, to mostly determine
> instance  packing
> sounds like option 1+additional logic (=keyword 'mostly')
> I think I miss to understand the part of 'undermining the purpose of the
> flavor'
> Why new behavior might require one more parameter to limit number of
> instances of host?
> Isn't it that those VMs will be under control of other flavor
> constraints, such as CPU and RAM anyway and those will be the ones
> controlling 'instance packing'?

Yes it is possible that CPU and RAM could be controlling instance
packing. But my understanding is that since those are often
oversubscribed
I don't understand why the oversubscription ratio matters here?


My experience is with environments where the oversubscription was used to be a 
little loose with how many vCPUs were allocated or how much RAM was allocated 
but disk was strictly controlled.




while disk is not that it's actually the disk amounts
that control the packing on some environments.
Maybe an explanation of what you mean by "packing" here.  Customers that I've 
worked with over the years have used CPU and Mem as their levers and the main 
thing that they care about in terms of how many Instances go on a Node.  I'd 
like to learn more about why that's wrong and that disk space is the mechanism 
that deployers use for this.


By packing I just mean the various ways that different flavors fit on a host. A 
host may be designed to hold 1 xlarge, or 2 large, or 4 mediums, or 1 large and 
2 mediums, etc... The challenge I see here is that the constraint can be 
managed by using CPU or RAM or disk or some combination of the three. For 
deployers just using disk the above patches will change behavior for them.

It's not wrong to use CPU/RAM, but it's not what everyone is doing. One purpose 
of this email was to gauge if it would be acceptable to only use CPU/RAM for 
packing.




But that is a sub option
here, just document that disk amounts should not be used to determine
flavor packing on hosts and instead CPU and RAM must be used.

> Does option 3 covers In case someone relied on eg. flavor root disk for
> disk volume booted from volume - and now instance packing will change
> once patches are implemented?

That's the goal. In a simple case of having hosts with 16 CPUs, 128GB of
RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB, root_gb=500GB,
swap/ephemeral=0 the deployer is stating that they want only 4 instances
on that host.
How do you arrive at that logic?  What if they actually wanted a single 
VCPU=4,RAM=32GB,root_gb=500 but then they wanted the remaining resources split 
among Instances that were all 1 VCPU, 1 G ram and a 1 G root disk?

My example assumes the one stated flavor. But if they have a smaller flavor 
then more than 4 instances would fit.


If there is CPU and RAM oversubscription enabled then by
using volumes a user could end up with more than 4 instances on that
host. So a max_instances=4 setting could solve that. However I don't
like the idea of adding a new config, and I think it's too simplistic to
cover more complex use cases. But it's an option.

I would venture to guess that most Operators would be sad to read that.  So 
rather than give them an explicit lever that does exactly what they want 
clearly and explicitly we should make it as complex as possible and have it be 
the result of a 4 or 5 variable equation?  Not to mention it's completely 
dynamic (because it seems like
lots of clouds have more than one flavor).

Is that lever exactly what they want? That's part of what I'd like to find out 
here. But currently it's possible to setup a situation where 1 large flavor or 
4 small flavors fit on a host. So would the max_instances=4 setting be desired? 
Keeping in mind that if the above patches merged 4 large flavors could be put 
on that host if they only use remote volumes and aren't using proper CPU/RAM 
limits.

I probably was not clear enough in my original description or made some bad 
assumptions. The concern I have is that if someone is currently relying on disk 
sizes for their instance limits then the above patches change behavior for them 
and affect capacity limits and planning. Is this okay and if not what do we do?


From a single operator perspective, we’d prefer an option which would allow 
boot from volume with a larger size than the flavour. The quota for volumes 
would avoid abuse.

The use cases we encounter are a standard set of flavors with defined 
core/memory/disk ratios which correspond to the 

[openstack-dev] A daily life of a dev in ${name_your_fav_project}

2016-08-26 Thread Joshua Harlow

Hi folks (dev and more!),

I was having a conversation with some folks at godaddy around our future 
plans for a developer lab (where we can have various setups of 
networking, compute, storage...) for 'exploring' purposes (testing out a 
new LBAAS for example or ...) and as well as for 'developer' purposes 
(aka, the 'I need to reproduce a bug or work on a feature that requires 
having a setup mimicking closer to what we have in staging or production').


And it got me thinking about how other developers (and other companies) 
are doing this. Do various companies have shared labs that their 
developers get partitions of for (periods of) usage (for example for a 
volume vendor I would expect this) or if you are a networking company do 
you hand out miniature networks (with associated gear) as needed (or do 
you build out such labs via SDN and software only)?


Then of course there are the people developing libraries (somewhat of my 
territory), part of that development can just be done locally and 
running of tox and such via that, but often times even that is not 
sufficient (for example pick oslo.messaging or oslo.db, altering this in 
ways that could pass unittests could still end up breaking its 
integration with other projects); so the gate helps here (but the gate 
really is a 'last barrier') so have folks that have been working on say 
zeromq or the newer amqp versions, what is the daily life of testing and 
exploring features and development for you?


Are any of the environments that people may be getting build-out on 
demand (ie in a cloud-like manner)? For example I could see how it could 
be pretty nifty to request a environment be built out with say something 
like the following as a descriptor language:


build_out:
  nova:
    git_url: git://git.openstack.org/openstack/nova
    git_ref:
  neutron:
    git_url:
    git_ref: my sha

  topology:
    use_standard_config: true
    build_with_switch_type: XYZ...
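
For illustration only, here's a throwaway Python sketch of what a
consumer of such a descriptor might do (the schema, the file name and
the "just print it" behaviour are all made up for the example; it only
assumes PyYAML):

import yaml

with open('build_out.yaml') as f:
    spec = yaml.safe_load(f)['build_out']

topology = spec.pop('topology', {})
for project, source in sorted(spec.items()):
    # e.g. hand these off to a job that clones and checks out each repo
    print('%s: %s @ %s' % (project,
                           source.get('git_url'),
                           source.get('git_ref') or 'master'))
print('topology: %s' % topology)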

I hope this info is not just useful to myself (and maybe it's been 
talked about before, but nothing of recent that I can recall) and I'd be 
very much interested in hearing what other companies (big and small) are 
doing here (and also from folks that are not associated with any 
company, which I guess brings in the question of the OSIC lab).


-Josh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-26 Thread Mike Bayer



On 08/25/2016 01:13 PM, Steve Martinelli wrote:

The keystone team is pursuing a trigger-based approach to support
rolling, zero-downtime upgrades. The proposed operator experience is
documented here:

  http://docs.openstack.org/developer/keystone/upgrading.html

This differs from Nova and Neutron's approaches to solve for rolling
upgrades (which use oslo.versionedobjects), however Keystone is one of
the few services that doesn't need to manage communication between
multiple releases of multiple service components talking over the
message bus (which is the original use case for oslo.versionedobjects,
and for which it is aptly suited). Keystone simply scales horizontally
and every node talks directly to the database.



Hi Steve -

I'm a strong proponent of looking into the use of triggers to smooth 
upgrades between database versions.  Even in the case of projects 
using versioned objects, it still means a SQL layer has to include 
functionality for both versions of a particular schema change which 
itself is awkward.   I'm also still a little worried that not every case 
of this can be handled by orchestration at the API level, and not as a 
single SQL layer method that integrates both versions of a schema change.


Using triggers would resolve the issue of SQL-specific application code 
needing to refer to two versions of a schema at once, at least for those 
areas where triggers and SPs can handle it.   In the "ideal", it means 
all the Python code can just refer to one version of a schema, and nuts 
and bolts embedded into database migrations would handle all the 
movement between schema versions, including the phase between expand and 
contract.   Not that I think the "ideal" is ever going to be realized 
100%, but maybe in some / many places, this can work.
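
To make that concrete, here's a rough sketch of the sort of trigger I
have in mind for an "expand" phase (hypothetical table/column names,
MySQL syntax, and only the INSERT path shown - not taken from any real
project):

# Hypothetical Alembic expand migration: add the new column, then use a
# trigger so rows written by old-release code (which only sets "name")
# still get "display_name" populated.
from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column('person', sa.Column('display_name', sa.String(255)))
    op.execute("""
        CREATE TRIGGER person_sync_display_name
        BEFORE INSERT ON person
        FOR EACH ROW
        SET NEW.display_name = COALESCE(NEW.display_name, NEW.name)
    """)


def downgrade():
    op.execute("DROP TRIGGER IF EXISTS person_sync_display_name")
    op.drop_column('person', 'display_name')

A real version would also need the UPDATE path and the reverse-direction
trigger for the contract side, but that's the general shape.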


So if Keystone wants to be involved in paving the way for working with 
triggers, IMO this would benefit other projects in that they could 
leverage this kind of functionality in those places where it makes sense.


The problem of "zero downtime database migrations" is an incredibly 
ambitious goal and I think it would be wrong to exclude any one 
particular technique in pursuing this.  A real-world success story would 
likely integrate many different techniques as they apply to specific 
scenarios, and triggers and SPs IMO are a really major one which I 
believe can be supported.





Database triggers are obviously a new challenge for developers to write,
honestly challenging to debug (being side effects), and are made even
more difficult by having to hand write triggers for MySQL, PostgreSQL,
and SQLite independently (SQLAlchemy offers no assistance in this case),
as seen in this patch:


So I would also note that we've been working on the availability of 
triggers and stored functions elsewhere, a very raw patch that is to be 
largely rolled into oslo.db is here:


https://review.openstack.org/#/c/314054/

This patch makes use of an Alembic pattern called "replaceable object", 
which is intended specifically as a means of versioning things like 
triggers and stored procedures:


http://alembic.zzzcomputing.com/en/latest/cookbook.html#replaceable-objects
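
The heart of that recipe is small enough to paste here (condensed from
the cookbook, only the "create" half shown; treat it as a sketch rather
than a finished oslo.db API):

# Condensed from the Alembic "replaceable objects" cookbook recipe.
from alembic.operations import Operations, MigrateOperation


class ReplaceableObject(object):
    def __init__(self, name, sqltext):
        self.name = name
        self.sqltext = sqltext


@Operations.register_operation("create_trigger")
class CreateTriggerOp(MigrateOperation):
    def __init__(self, target):
        self.target = target

    @classmethod
    def create_trigger(cls, operations, target, **kw):
        return operations.invoke(cls(target))


@Operations.implementation_for(CreateTriggerOp)
def create_trigger(operations, operation):
    target = operation.target
    operations.execute(
        "CREATE TRIGGER %s %s" % (target.name, target.sqltext))

A migration script can then just call op.create_trigger(
ReplaceableObject("my_trigger", "BEFORE INSERT ON ...")) and the
definition lives with the migrations rather than with the models.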

Within the above Neutron patch, one thing I want to move towards is that 
things like triggers and SPs would only need to be specified once, in 
the migration layer, and not within the model.   To achieve this, tests 
that work against MySQL and Postgresql would need to ensure that the 
test schema is built up using migrations, and not create_all.  This is 
already the case in some places and not in others.  There is work 
ongoing in oslo.db to provide a modernized fixture system that supports 
enginefacade cleanly as well as allows for migrations to be used 
efficiently (read: once per many tests) for all MySQL/Postgresql test 
suites, at https://review.openstack.org/#/c/351411/ .


As far as SQLite, I have a simple opinion with SQLite which is that 
migrations, triggers, and SPs should not be anywhere near a SQLite 
database.   SQLite should be used strictly for simple model unit tests, 
the schema is created using create_all(), and that's it.   The test 
fixture system accommodates this as well.
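
i.e. for model unit tests nothing more than something along these
lines - plain SQLAlchemy, with the model name invented for the example:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Widget(Base):
    # stand-in for whatever model is under test
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))


engine = create_engine('sqlite://')   # throwaway in-memory database
Base.metadata.create_all(engine)      # schema from the models, no migrations
session = sessionmaker(bind=engine)()
session.add(Widget(name='x'))
session.commit()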




Our primary concern at this point are how to effectively test the
triggers we write against our supported database systems, and their
various deployment variations. We might be able to easily drop SQLite
support (as it's only supported for our own test suite), but should we
expect variation in support and/or actual behavior of triggers across
the MySQLs, MariaDBs, Perconas, etc, of the world that would make it
necessary to test each of them independently? If you have operational
experience working with triggers at scale: are there landmines that we
need to be aware of? What is it going to take for us to say we support
*zero* dowtime upgrades with confidence?


*zero* downtime is an extremely difficult goal.   I appreciate that 
people are generally nervous about making more use of 

Re: [openstack-dev] [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Andrew Laski



On Fri, Aug 26, 2016, at 11:01 AM, John Griffith wrote:
>
>
> On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski wrote:
>>
>>
>> On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com wrote:
>> > Hi,
>>  > option 1 (=that's what patches suggest) sounds totally fine.
>>  > Option 3 > Allow block device mappings, when present, to mostly
>>  > determine instance  packing sounds like option 1+additional logic
>>  > (=keyword 'mostly') I think I miss to understand the part of
>>  > 'undermining the purpose of the flavor' Why new behavior might
>>  > require one more parameter to limit number of instances of host?
>>  > Isn't it that those VMs will be under control of other flavor
>>  > constraints, such as CPU and RAM anyway and those will be the ones
>>  > controlling 'instance packing'?
>>
>> Yes it is possible that CPU and RAM could be controlling instance
>>  packing. But my understanding is that since those are often
>>  oversubscribed
> I don't understand why the oversubscription ratio matters here?
>

My experience is with environments where the oversubscription was used
to be a little loose with how many vCPUs were allocated or how much RAM
was allocated but disk was strictly controlled.

>
>
>
>> while disk is not that it's actually the disk amounts
>>  that control the packing on some environments.
> Maybe an explanation of what you mean by "packing" here.  Customers
> that I've worked with over the years have used CPU and Mem as their
> levers and the main thing that they care about in terms of how many
> Instances go on a Node.  I'd like to learn more about why that's wrong
> and that disk space is the mechanism that deployers use for this.
>

By packing I just mean the various ways that different flavors fit on a
host. A host may be designed to hold 1 xlarge, or 2 large, or 4 mediums,
or 1 large and 2 mediums, etc... The challenge I see here is that the
constraint can be managed by using CPU or RAM or disk or some
combination of the three. For deployers just using disk the above
patches will change behavior for them.

It's not wrong to use CPU/RAM, but it's not what everyone is doing. One
purpose of this email was to gauge if it would be acceptable to only use
CPU/RAM for packing.


>
>
>> But that is a sub option
>>  here, just document that disk amounts should not be used to
>>  determine
>>  flavor packing on hosts and instead CPU and RAM must be used.
>>
>>  > Does option 3 covers In case someone relied on eg. flavor root
>>  > disk for disk volume booted from volume - and now instance packing
>>  > will change once patches are implemented?
>>
>> That's the goal. In a simple case of having hosts with 16 CPUs,
>> 128GB of
>>  RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB,
>>  root_gb=500GB,
>>  swap/ephemeral=0 the deployer is stating that they want only 4
>>  instances
>>  on that host.
> How do you arrive at that logic?  What if they actually wanted a
> single VCPU=4,RAM=32GB,root_gb=500 but then they wanted the remaining
> resources split among Instances that were all 1 VCPU, 1 G ram and a 1
> G root disk?

My example assumes the one stated flavor. But if they have a smaller
flavor then more than 4 instances would fit.

>
>> If there is CPU and RAM oversubscription enabled then by
>>  using volumes a user could end up with more than 4 instances on that
>>  host. So a max_instances=4 setting could solve that. However I don't
>>  like the idea of adding a new config, and I think it's too
>>  simplistic to
>>  cover more complex use cases. But it's an option.
>
> I would venture to guess that most Operators would be sad to read
> that.  So rather than give them an explicit lever that does exactly
> what they want clearly and explicitly we should make it as complex as
> possible and have it be the result of a 4 or 5 variable equation?  Not
> to mention it's completely dynamic (because it seems like
> lots of clouds have more than one flavor).

Is that lever exactly what they want? That's part of what I'd like to
find out here. But currently it's possible to setup a situation where 1
large flavor or 4 small flavors fit on a host. So would the
max_instances=4 setting be desired? Keeping in mind that if the above
patches merged 4 large flavors could be put on that host if they only
use remote volumes and aren't using proper CPU/RAM limits.

I probably was not clear enough in my original description or made some
bad assumptions. The concern I have is that if someone is currently
relying on disk sizes for their instance limits then the above patches
change behavior for them and affect capacity limits and planning. Is
this okay and if not what do we do?


>
> All I know is that the current state is broken.  It's not just the
> scheduling problem, I could live with that probably since it's too
> hard to fix... but keep in mind that you're reporting the complete
> wrong information for the Instance in these cases.  My flavor says
> it's 5G, but in 

Re: [openstack-dev] [TripleO][CI] Need more undercloud resources

2016-08-26 Thread James Slagle
On Thu, Aug 25, 2016 at 9:49 AM, James Slagle  wrote:
> On Thu, Aug 25, 2016 at 5:40 AM, Derek Higgins  wrote:
>> On 25 August 2016 at 02:56, Paul Belanger  wrote:
>>> On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
 The latest recurring problem that is failing a lot of the nonha ssl
 jobs in tripleo-ci is:

 https://bugs.launchpad.net/tripleo/+bug/1616144
 tripleo-ci: nonha jobs failing with Unable to establish connection to
 https://192.0.2.2:13004/v1/a90407df1e7f4f80a38a1b1671ced2ff/stacks/overcloud/f9f6f712-8e89-4ea9-a34b-6084dc74b5c1

 This error happens while polling for events from the overcloud stack
 by tripleoclient.

 I can reproduce this error very easily locally by deploying with an
 ssl undercloud with 6GB ram and 2 vcpus. If I don't enable swap,
 something gets OOM killed. If I do enable swap, swap gets used (< 1GB)
 and then I hit this error almost every time.

 The stack keeps deploying but the client has died, so the job fails.
 My investigation so far has only pointed out that it's the swap
 allocation that is delaying things enough to cause the failure.

 We do not see this error in the ha job even though it deploys more
 nodes. As of now, my only suspect is that it's the overhead of the
 initial SSL connections causing the error.

 If I test with 6GB ram and 4 vcpus I can't reproduce the error,
 although much more swap is used due to the increased number of default
 workers for each API service.

 However, I suggest we just raise the undercloud specs in our jobs to
 8GB ram and 4 vcpus. These seem reasonable to me because those are the
 default specs used by infra in all of their devstack single and
 multinode jobs spawned on all their other cloud providers. Our own
 multinode job for the undercloud/overcloud and undercloud only job are
 running on instances of these sizes.

>>> Close, our current flavors are 8vCPU, 8GB RAM, 80GB HDD. I'd recommend doing
>>> that for the undercloud just to be consistent.
>>
>> The HD on most of the compute nodes are 200GB so we've been trying
>> really hard[1] to keep the disk usage for each instance down so that
>> we can fit as many instances onto each compute nodes as possible
>> without being restricted by the HD's. We've also allowed nova to
>> overcommit on storage by a factor of 3. The assumption is that all of
>> the instances are short lived and a most of them never fully exhaust
>> the storage allocated to them. Even the ones that do (the undercloud
>> being the one that does) hit peak at different times so everything is
>> tickety boo.
>>
>> I'd strongly encourage against using a flavor with a 80GB HDD, if we
>> increase the disk space available to the undercloud to 80GB then we
>> will eventually be using it in CI. And 3 undercloud on the same
>> compute node will end up filling up the disk on that host.
>
> I've gone ahead and made the changes to the undercloud flavor in rh1
> to use 8GB ram and 4 vcpus. I left the disk at 40. I'd like to see use
> the same flavor specs as the default infra flavor, but going up to
> 8vcpus would require configuring less workers per api service I think.
> That's something we can iterate towards I think.

It looks like this has had the desired positive effect in the nonha jobs.

Most of the failures now are due to timeouts. When we feel like CI is
stable enough and no adverse effects from the additional resource
usage have been found, it would be worth considering moving forward
with:

https://review.openstack.org/#/c/359481/

to help with the timeouts.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread John Griffith
On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski  wrote:

>
>
> On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com
> wrote:
> > Hi,
> > option 1 (=that's what patches suggest) sounds totally fine.
> > Option 3 > Allow block device mappings, when present, to mostly determine
> > instance  packing
> > sounds like option 1+additional logic (=keyword 'mostly')
> > I think I miss to understand the part of 'undermining the purpose of the
> > flavor'
> > Why new behavior might require one more parameter to limit number of
> > instances of host?
> > Isn't it that those VMs will be under control of other flavor
> > constraints, such as CPU and RAM anyway and those will be the ones
> > controlling 'instance packing'?
>
> Yes it is possible that CPU and RAM could be controlling instance
> packing. But my understanding is that since those are often
> oversubscribed

​I don't understand why the oversubscription ratio matters here?
​


> while disk is not that it's actually the disk amounts
> that control the packing on some environments.

​Maybe an explanation of what you mean by "packing" here.  Customers that
I've worked with over the years have used CPU and Mem as their levers and
the main thing that they care about in terms of how many Instances go on a
Node.  I'd like to learn more about why that's wrong and that disk space is
the mechanism that deployers use for this.
​


> But that is a sub option
> here, just document that disk amounts should not be used to determine
> flavor packing on hosts and instead CPU and RAM must be used.
>
> > Does option 3 covers In case someone relied on eg. flavor root disk for
> > disk volume booted from volume - and now instance packing will change
> > once patches are implemented?
>
> That's the goal. In a simple case of having hosts with 16 CPUs, 128GB of
> RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB, root_gb=500GB,
> swap/ephemeral=0 the deployer is stating that they want only 4 instances
> on that host.

​How do you arrive at that logic?  What if they actually wanted a single
VCPU=4,RAM=32GB,root_gb=500 but then they wanted the remaining resources
split among Instances that were all 1 VCPU, 1 G ram and a 1 G root disk?

If there is CPU and RAM oversubscription enabled then by
> using volumes a user could end up with more than 4 instances on that
> host. So a max_instances=4 setting could solve that. However I don't
> like the idea of adding a new config, and I think it's too simplistic to
> cover more complex use cases. But it's an option.
>
​
​I would venture to guess that most Operators would be sad to read that.
So rather than give them an explicit lever that does exactly what they want
clearly and explicitly we should make it as complex as possible and have it
be the result of a 4 or 5 variable equation?  Not to mention it's
completely dynamic (because it seems like lots of clouds have more than one
flavor).

All I know is that the current state is broken.  It's not just the
scheduling problem, I could live with that probably since it's too hard to
fix... but keep in mind that you're reporting the complete wrong
information for the Instance in these cases.  My flavor says it's 5G, but
in reality it's 200 or whatever.  Rather than make it perfect we should
just fix it.  Personally I thought the proposals for a scheduler check and
the addition of the Instances/Node option was a win win for everyone.  What
am I missing?  Would you rather a custom filter scheduler so it wasn't a
config option?
​

>
> >
> > BR,
> > Konstantin
> >
> > > -Original Message-
> > > From: Andrew Laski [mailto:and...@lascii.com]
> > > Sent: Thursday, August 25, 2016 10:20 PM
> > > To: openstack-dev@lists.openstack.org
> > > Cc: openstack-operat...@lists.openstack.org
> > > Subject: [Openstack-operators] [Nova] Reconciling flavors and block
> device
> > > mappings
> > >
> > > Cross posting to gather some operator feedback.
> > >
> > > There have been a couple of contentious patches gathering attention
> recently
> > > about how to handle the case where a block device mapping supersedes
> flavor
> > > information. Before moving forward on either of those I think we
> should have a
> > > discussion about how best to handle the general case, and how to
> handle any
> > > changes in behavior that results from that.
> > >
> > > There are two cases presented:
> > >
> > > 1. A user boots an instance using a Cinder volume as a root disk,
> however the
> > > flavor specifies root_gb = x where x > 0. The current behavior in Nova
> is that the
> > > scheduler is given the flavor root_gb info to take into account during
> scheduling.
> > > This may disqualify some hosts from receiving the instance even though
> that disk
> > > space  is not necessary because the root disk is a remote volume.
> > > https://review.openstack.org/#/c/200870/
> > >
> > > 2. A user boots an instance and uses the block device mapping
> parameters to
> > > specify a swap or 

Re: [openstack-dev] [all] versioning the api-ref?

2016-08-26 Thread Anne Gentle
On Fri, Aug 26, 2016 at 7:46 AM, Bashmakov, Alexander <
alexander.bashma...@intel.com> wrote:

> Any more feedback on this?
>

Hi, I've added a comment on the review. For now, inline text descriptions
are best for the context of what you're adding in that particular place.

Anne


>
> > On Aug 18, 2016, at 10:30 AM, Bashmakov, Alexander <
> alexander.bashma...@intel.com> wrote:
> >
> > Concrete example of an api-ref difference between Mitaka and Newton:
> > https://review.openstack.org/#/c/356693/1/api-ref/source/v2/
> images-parameters.yaml
> >
> > -Original Message-
> > From: Sean Dague [mailto:s...@dague.net]
> > Sent: Thursday, August 18, 2016 10:20 AM
> > To: Nikhil Komawar ; OpenStack Development
> Mailing List (not for usage questions) 
> > Subject: Re: [openstack-dev] [all] versioning the api-ref?
> >
> >> On 08/18/2016 11:57 AM, Nikhil Komawar wrote:
> >> I guess the intent was to indicate the need for indicating the micro
> >> or in case of Glance minor version bump when required.
> >>
> >> The API isn't drastically different, there are new and old elements as
> >> shown in the Nova api ref linked.
> >
> > Right, so the point is that it should all be describable in a single
> document. It's like the fact that when you go to python API docs you get
> things like - https://docs.python.org/2/library/wsgiref.html
> >
> > "New in version 2.5."
> >
> > Perhaps if there is a concrete example of the expected differences
> between what would be in the mitaka tree vs. newton tree was can figure out
> an appropriate way to express that in api-ref.
> >
> >-Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] versioning the api-ref?

2016-08-26 Thread Bashmakov, Alexander
Any more feedback on this?

> On Aug 18, 2016, at 10:30 AM, Bashmakov, Alexander 
>  wrote:
> 
> Concrete example of an api-ref difference between Mitaka and Newton:
> https://review.openstack.org/#/c/356693/1/api-ref/source/v2/images-parameters.yaml
> 
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net] 
> Sent: Thursday, August 18, 2016 10:20 AM
> To: Nikhil Komawar ; OpenStack Development Mailing 
> List (not for usage questions) 
> Subject: Re: [openstack-dev] [all] versioning the api-ref?
> 
>> On 08/18/2016 11:57 AM, Nikhil Komawar wrote:
>> I guess the intent was to indicate the need for indicating the micro 
>> or in case of Glance minor version bump when required.
>> 
>> The API isn't drastically different, there are new and old elements as 
>> shown in the Nova api ref linked.
> 
> Right, so the point is that it should all be describable in a single 
> document. It's like the fact that when you go to python API docs you get 
> things like - https://docs.python.org/2/library/wsgiref.html
> 
> "New in version 2.5."
> 
> Perhaps if there is a concrete example of the expected differences between 
> what would be in the mitaka tree vs. newton tree was can figure out an 
> appropriate way to express that in api-ref.
> 
>-Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] stable/newton branching schedule

2016-08-26 Thread Doug Hellmann
I plan to create the stable/newton branches for non-client libraries on
Monday based on the most recently tagged versions according to
the deliverable files in openstack/releases. If you *know* you are going
to need a bug fix release and want me to hold off, please speak up
before Monday morning US Eastern time.

We will be creating the stable branches for client libraries as we tag
them for the milestone-3 deadline next week. Again, if this poses any
known issues please let me know.

We will wait to create server branches until the RC1 tag, as usual.

Independent projects that want stable branches should request them based
on an existing version tag. Please either reply with a follow-up to this
message or find someone on IRC in #openstack-release.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Andrew Laski


On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com
wrote:
> Hi, 
> option 1 (=that's what patches suggest) sounds totally fine.
> Option 3 > Allow block device mappings, when present, to mostly determine
> instance  packing 
> sounds like option 1+additional logic (=keyword 'mostly') 
> I think I miss to understand the part of 'undermining the purpose of the
> flavor'
> Why new behavior might require one more parameter to limit number of
> instances of host? 
> Isn't it that those VMs will be under control of other flavor
> constraints, such as CPU and RAM anyway and those will be the ones
> controlling 'instance packing'?

Yes, it is possible that CPU and RAM could be controlling instance
packing. But my understanding is that, since those are often
oversubscribed while disk is not, it's actually the disk amounts
that control the packing in some environments. But there is a sub-option
here: just document that disk amounts should not be used to determine
flavor packing on hosts, and that CPU and RAM must be used instead.

> Does option 3 covers In case someone relied on eg. flavor root disk for
> disk volume booted from volume - and now instance packing will change
> once patches are implemented?

That's the goal. In a simple case of having hosts with 16 CPUs, 128GB of
RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB, root_gb=500GB,
swap/ephemeral=0 the deployer is stating that they want only 4 instances
on that host. If there is CPU and RAM oversubscription enabled then by
using volumes a user could end up with more than 4 instances on that
host. So a max_instances=4 setting could solve that. However I don't
like the idea of adding a new config, and I think it's too simplistic to
cover more complex use cases. But it's an option.
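
To put rough numbers on that (throwaway sketch; the 16.0 / 1.5
allocation ratios below are just assumed defaults, not anything from the
patches under review):

# Host: 16 VCPU, 128GB RAM, 2TB disk; flavor: 4 VCPU, 32GB RAM, 500GB root.
host = {'vcpus': 16, 'ram_gb': 128, 'disk_gb': 2000}
flavor = {'vcpus': 4, 'ram_gb': 32, 'root_gb': 500}
cpu_ratio, ram_ratio = 16.0, 1.5   # assumed oversubscription ratios


def instances_that_fit(local_disk_gb):
    by_cpu = host['vcpus'] * cpu_ratio / flavor['vcpus']
    by_ram = host['ram_gb'] * ram_ratio / flavor['ram_gb']
    by_disk = (host['disk_gb'] / local_disk_gb
               if local_disk_gb else float('inf'))
    return int(min(by_cpu, by_ram, by_disk))


print(instances_that_fit(flavor['root_gb']))  # root disk is local -> 4
print(instances_that_fit(0))                  # boot from volume -> 6 (RAM bound)

So with disk out of the picture the packing on that host quietly moves
from 4 instances to 6, which is exactly the sort of capacity planning
surprise I'm concerned about.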

> 
> BR, 
> Konstantin
> 
> > -Original Message-
> > From: Andrew Laski [mailto:and...@lascii.com]
> > Sent: Thursday, August 25, 2016 10:20 PM
> > To: openstack-dev@lists.openstack.org
> > Cc: openstack-operat...@lists.openstack.org
> > Subject: [Openstack-operators] [Nova] Reconciling flavors and block device
> > mappings
> > 
> > Cross posting to gather some operator feedback.
> > 
> > There have been a couple of contentious patches gathering attention recently
> > about how to handle the case where a block device mapping supersedes flavor
> > information. Before moving forward on either of those I think we should 
> > have a
> > discussion about how best to handle the general case, and how to handle any
> > changes in behavior that results from that.
> > 
> > There are two cases presented:
> > 
> > 1. A user boots an instance using a Cinder volume as a root disk, however 
> > the
> > flavor specifies root_gb = x where x > 0. The current behavior in Nova is 
> > that the
> > scheduler is given the flavor root_gb info to take into account during 
> > scheduling.
> > This may disqualify some hosts from receiving the instance even though that 
> > disk
> > space  is not necessary because the root disk is a remote volume.
> > https://review.openstack.org/#/c/200870/
> > 
> > 2. A user boots an instance and uses the block device mapping parameters to
> > specify a swap or ephemeral disk size that is less than specified on the 
> > flavor.
> > This leads to the same problem as above, the scheduler is provided 
> > information
> > that doesn't match the actual disk space to be consumed.
> > https://review.openstack.org/#/c/352522/
> > 
> > Now the issue: while it's easy enough to provide proper information to the
> > scheduler on what the actual disk consumption will be when using block 
> > device
> > mappings that undermines one of the purposes of flavors which is to control
> > instance packing on hosts. So the outstanding question is to what extent 
> > should
> > users have the ability to use block device mappings to bypass flavor 
> > constraints?
> > 
> > One other thing to note is that while a flavor constrains how much local 
> > disk is
> > used it does not constrain volume size at all. So a user can specify an
> > ephemeral/swap disk <= to what the flavor provides but can have an arbitrary
> > sized root disk if it's a remote volume.
> > 
> > Some possibilities:
> > 
> > Completely allow block device mappings, when present, to determine instance
> > packing. This is what the patches above propose and there's a strong desire 
> > for
> > this behavior from some folks. But changes how many instances may fit on a
> > host which could be undesirable to some.
> > 
> > Keep the status quo. It's clear that is undesirable based on the bug 
> > reports and
> > proposed patches above.
> > 
> > Allow block device mappings, when present, to mostly determine instance
> > packing. By that I mean that the scheduler only takes into account local 
> > disk that
> > would be consumed, but we add additional configuration to Nova which limits
> > the number of instance that can be placed on a host. This is a compromise
> > solution but I fear that a single int 

Re: [openstack-dev] [nova] VM console for VMware instances

2016-08-26 Thread Andrew Laski


On Fri, Aug 26, 2016, at 04:16 AM, Radoslav Gerganov wrote:
> On 25.08.2016 18:25, Andrew Laski wrote:
> > Is there a reason this has not been proposed to the Nova project, or
> > have I missed that? I looked for a proposal and did not see one.
> > 
> 
> The main reason I developed this out of tree is that reviewing patches
> in Nova takes forever.  For example this patch[1] that propose changes
> in this part of the code base has been in review for 2 years.

It's true that sometimes a patch gets overlooked for a long period of
time. If you notice that happening please bring it to our attention in
the #openstack-nova channel or bring it up in open discussion in a
weekly nova meeting.

> 
> > I see that there's support in Nova and python-novaclient for this
> > feature, but the actual proxy is not in the Nova tree. In situations
> > like this, where there's in-tree code to support an out of tree feature,
> > we typically deprecate and remove that code unless there's a plan to
> > move all of the components into the project. 
> 
> I don't think this is the case for console proxies.  The RDP console
> proxy is also developed out of tree, in C++[2].  We can't expect that
> all vendors will commit their proxy implementations in the Nova tree
> for various technical reasons.  In the VMware case, there are no
> technical reasons which prevent putting mksproxy in the Nova tree, I
> decided to start its development out of tree only because reviewing
> patches in Nova takes forever.

Okay. It's possible that there's a different policy in place for console
proxies that I am not aware of, and I don't see anything like that
documented in our docs.

A console proxy written in a different language is a special case where
we're not going to include that in tree. But as much as possible we like
to ensure that Nova services all adhere to the same guidelines and
standards and we've found that having these components in tree has
worked to allow us to do so. For example a major concern is upgrade
support and if we change the API that Nova uses to communicate with its
console proxies we can ensure compatibility with in-tree proxies but may
break a third party proxy and not be able to support those users.

> 
> > Is there a plan to move this proxy into Nova?
> > 
> 
> I will propose adding mksproxy in the Nova tree for the next release
> and if it is accepted, I will deprecate nova-mksproxy.

Thanks.

> 
> [1] https://review.openstack.org/#/c/115483/
> [2] https://cloudbase.it/freerdp-html5-proxy-windows/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-26 Thread Erlon Cruz
He is referring to the Cinder side: https://review.openstack.org/#/c/147186/

On Fri, Aug 26, 2016 at 7:01 AM, Jordan Pittier 
wrote:

>
>
> On Thu, Aug 25, 2016 at 7:06 PM, Ben Swartzlander 
> wrote:
>
>> Originally the NFS driver did support snapshots, but it was implemented
>> by just 'cp'ing the file containing the raw bits. This works fine (if
>> inefficiently) for unattached volumes, but if you do this on an attached
>> volume the snapshot won't be crash consistent at all.
>>
>> It was decided that we could do better for attached volumes by switching
>> to qcow2 and relying on nova to perform the snapshots. Based on this, the
>> bad snapshot implementation was removed.
>>
>> However, for a variety of reasons the nova-assisted snapshot
>> implementation has remained unmerged for 2+ years and the NFS driver has
>> been an exception to the rules for that whole time.
>>
> I am not sure to understand what you mean by "the nova-assisted snapshot
> implementation has remained unmerged for 2+ years". It looks merged to me
> [1] and several Cinder drivers dependent on it as far as I know.
>
> [1]: http://developer.openstack.org/api-ref-compute-
> v2.1.html#os-assisted-volume-snapshots-v2.1
>
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][keystone] pycadf 2.4.0 release (newton)

2016-08-26 Thread no-reply
We are mirthful to announce the release of:

pycadf 2.4.0: CADF Library

This release is part of the newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/pycadf

With package available at:

https://pypi.python.org/pypi/pycadf

Please report issues through launchpad:

https://bugs.launchpad.net/pycadf

For more details, please see below.

Changes in pycadf 2.3.0..2.4.0
--

d94f1b4 Updated from global requirements
e64911f Remove discover from test-requirements
92c134f Don't include openstack/common in flake8 exclude list
cad88bc Updated from global requirements
525bb1f Fix order of arguments in assertEqual
f98ce8e Updated from global requirements
cd574b6 Updated from global requirements
1794b9d Updated from global requirements
8ce2dff Updated from global requirements


Diffstat (except docs and test files)
-

requirements.txt   |  2 +-
test-requirements.txt  |  5 ++---
tox.ini|  2 +-
4 files changed, 26 insertions(+), 27 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 2518ec2..987b277 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4 +4 @@
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 7f52a2b..5b340e5 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9,2 +9 @@ coverage>=3.6 # Apache-2.0
-discover # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -18 +17 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Chris Dent

On Thu, 25 Aug 2016, Sylvain Bauza wrote:

Of course, long-term, we could try to see how to have composite flavors for 
helping users to not create a whole handful of flavors for quite the same 
user requests, but that would still be flavors (or the name for saying a 
flavor composition).


long-term flavors should be a piece of UI furniture that is present in a
human-oriented-non-nova UI/API that provides raw information to the
computers-talking-to-computers API that is provided by nova.

But that's very long term.

--
Chris Dent   ┬─┬ノ( º _ ºノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Chris Dent

On Thu, 25 Aug 2016, Andrew Laski wrote:


Allow block device mappings, when present, to mostly determine instance
packing. By that I mean that the scheduler only takes into account local
disk that would be consumed, but we add additional configuration to Nova
which limits the number of instance that can be placed on a host. This
is a compromise solution but I fear that a single int value does not
meet the needs of deployers wishing to limit instances on a host. They
want it to take into account cpu allocations and ram and disk, in short
a flavor :)


When you say "add additional configuration" do you mean "add more
things to nova.conf"? If so, then please don't do that. There is far
too much of that.

--
Chris Dent   ┬─┬ノ( º _ ºノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-26 Thread Ivan Berezovskiy
+1, great job!

2016-08-26 10:33 GMT+03:00 Bogdan Dobrelya :

> +1
>
> On 25.08.2016 21:08, Stanislaw Bogatkin wrote:
> > +1
> >
> > On Thu, Aug 25, 2016 at 12:08 PM, Aleksandr Didenko wrote:
> >
> > +1
> >
> > On Thu, Aug 25, 2016 at 9:35 AM, Sergey Vasilenko wrote:
> >
> > +1
> >
> >
> > /sv
> >
> > --
> > with best regards,
> > Stan.
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks, Ivan Berezovskiy
MOS Puppet Team Lead
at Mirantis 

slack: iberezovskiy
skype: bouhforever
phone: + 7-960-343-42-46
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-26 Thread Jordan Pittier
On Thu, Aug 25, 2016 at 7:06 PM, Ben Swartzlander 
wrote:

> Originally the NFS driver did support snapshots, but it was implemented by
> just 'cp'ing the file containing the raw bits. This works fine (if
> inefficiently) for unattached volumes, but if you do this on an attached
> volume the snapshot won't be crash consistent at all.
>
> It was decided that we could do better for attached volumes by switching
> to qcow2 and relying on nova to perform the snapshots. Based on this, the
> bad snapshot implementation was removed.
>
> However, for a variety of reasons the nova-assisted snapshot
> implementation has remained unmerged for 2+ years and the NFS driver has
> been an exception to the rules for that whole time.
>
I am not sure I understand what you mean by "the nova-assisted snapshot
implementation has remained unmerged for 2+ years". It looks merged to me
[1] and several Cinder drivers depend on it as far as I know.

[1]:
http://developer.openstack.org/api-ref-compute-v2.1.html#os-assisted-volume-snapshots-v2.1

-- 
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Meeting minutes for IRC meeting Aug. 26 2016

2016-08-26 Thread hu . zhijiang
Minutes:
http://eavesdrop.openstack.org/meetings/daisycloud/2016/daisycloud.2016-08-26-08.00.html
 

Minutes (text): 
http://eavesdrop.openstack.org/meetings/daisycloud/2016/daisycloud.2016-08-26-08.00.txt
 

Log:
http://eavesdrop.openstack.org/meetings/daisycloud/2016/daisycloud.2016-08-26-08.00.log.html
 






B.R.,
Zhijiang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] New bug tagging policy

2016-08-26 Thread Julie Pichon
Hi Steve,

On 25 August 2016 at 23:41, Steve Baker  wrote:
> On 25/08/16 22:30, Julie Pichon wrote:
>>
>> Hi folks,
>>
>> The bug tagging proposal has merged, behold the new policy:
>>
>>
>> http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html
>>
>> TL;DR The TripleO Launchpad tracker encompasses a lot of sub-projects,
>> let's use a consistent list of Launchpad tags where they make sense in
>> order to help understand which area(s) are affected. The tags get
>> autocompleted by Launchpad (or will be soon).
>>
>>
>> There is one remaining action to create the missing tags: I don't have
>> bug wrangling permissions on the TripleO project so, if someone with
>> the appropriate permissions could update the list [1] to match the
>> policy I would appreciate it. Should I be deemed trustworthy enough
>> I'm just as happy to do it myself and help out with the occasional
>> bout of triaging as well.
>>
>> Thanks,
>>
>> Julie
>>
>> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags
>>
> I'm not seeing any tag appropriate for the configuration agent projects
> os-collect-config, os-apply-config, os-refresh-config. Is it possible to add
> a tag like config-agent?

Totally! That list was a start, if you or anyone notices anything
missing feel free to propose a patch against
http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/policy/bug-tagging.rst
.

Thanks,

Julie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM console for VMware instances

2016-08-26 Thread Radoslav Gerganov
On 25.08.2016 18:25, Andrew Laski wrote:
> Is there a reason this has not been proposed to the Nova project, or
> have I missed that? I looked for a proposal and did not see one.
> 

The main reason I developed this out of tree is that reviewing patches
in Nova takes forever.  For example, this patch[1], which proposes changes
in this part of the code base, has been in review for 2 years.

> I see that there's support in Nova and python-novaclient for this
> feature, but the actual proxy is not in the Nova tree. In situations
> like this, where there's in-tree code to support an out of tree feature,
> we typically deprecate and remove that code unless there's a plan to
> move all of the components into the project. 

I don't think this is the case for console proxies.  The RDP console
proxy is also developed out of tree, in C++[2].  We can't expect that
all vendors will commit their proxy implementations in the Nova tree
for various technical reasons.  In the VMware case, there are no
technical reasons which prevent putting mksproxy in the Nova tree, I
decided to start its development out of tree only because reviewing
patches in Nova takes forever.

> Is there a plan to move this proxy into Nova?
> 

I will propose adding mksproxy in the Nova tree for the next release
and if it is accepted, I will deprecate nova-mksproxy.

[1] https://review.openstack.org/#/c/115483/
[2] https://cloudbase.it/freerdp-html5-proxy-windows/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Kostiantyn.Volenbovskyi
Hi, 
option 1 (=what the patches suggest) sounds totally fine.
Option 3 > Allow block device mappings, when present, to mostly determine 
instance packing 
sounds like option 1 plus additional logic (=the keyword 'mostly').
I think I'm missing the part about 'undermining the purpose of the flavor'.
Why would the new behavior require one more parameter to limit the number of 
instances on a host? 
Won't those VMs still be under the control of the other flavor constraints, such 
as CPU and RAM anyway, and won't those be the ones controlling 'instance packing'?
Does option 3 cover the case where someone relied on e.g. the flavor root disk 
for an instance booted from volume - and instance packing will now change once 
the patches are implemented?

BR, 
Konstantin

> -Original Message-
> From: Andrew Laski [mailto:and...@lascii.com]
> Sent: Thursday, August 25, 2016 10:20 PM
> To: openstack-dev@lists.openstack.org
> Cc: openstack-operat...@lists.openstack.org
> Subject: [Openstack-operators] [Nova] Reconciling flavors and block device
> mappings
> 
> Cross posting to gather some operator feedback.
> 
> There have been a couple of contentious patches gathering attention recently
> about how to handle the case where a block device mapping supersedes flavor
> information. Before moving forward on either of those I think we should have a
> discussion about how best to handle the general case, and how to handle any
> changes in behavior that result from that.
> 
> There are two cases presented:
> 
> 1. A user boots an instance using a Cinder volume as a root disk; however,
> the flavor specifies root_gb = x where x > 0. The current behavior in Nova
> is that the scheduler is given the flavor root_gb info to take into account
> during scheduling. This may disqualify some hosts from receiving the
> instance even though that disk space is not necessary, because the root
> disk is a remote volume.
> https://review.openstack.org/#/c/200870/
> 
> 2. A user boots an instance and uses the block device mapping parameters to
> specify a swap or ephemeral disk size that is smaller than what the flavor
> specifies. This leads to the same problem as above: the scheduler is given
> information that doesn't match the actual disk space to be consumed.
> https://review.openstack.org/#/c/352522/
> 
> Now the issue: while it's easy enough to provide proper information to the
> scheduler on what the actual disk consumption will be when using block device
> mappings, that undermines one of the purposes of flavors, which is to control
> instance packing on hosts. So the outstanding question is: to what extent
> should users have the ability to use block device mappings to bypass flavor
> constraints?
> 
> One other thing to note is that while a flavor constrains how much local
> disk is used, it does not constrain volume size at all. So a user can
> specify an ephemeral/swap disk <= what the flavor provides, but can have an
> arbitrarily sized root disk if it's a remote volume.
> 
> Some possibilities:
> 
> Completely allow block device mappings, when present, to determine instance
> packing. This is what the patches above propose, and there's a strong desire
> for this behavior from some folks. But it changes how many instances may fit
> on a host, which could be undesirable to some.
> 
> Keep the status quo. It's clear that this is undesirable based on the bug
> reports and proposed patches above.
> 
> Allow block device mappings, when present, to mostly determine instance
> packing. By that I mean that the scheduler only takes into account the local
> disk that would be consumed, but we add additional configuration to Nova
> which limits the number of instances that can be placed on a host. This is a
> compromise solution, but I fear that a single int value does not meet the
> needs of deployers wishing to limit instances on a host. They want it to
> take into account CPU allocations and RAM and disk, in short a flavor :)
> 
> And of course there may be some other unconsidered solution. That's where
> you, dear reader, come in.
> 
> Thoughts?
> 
> -Andrew
> 
> 
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-26 Thread Bogdan Dobrelya
+1

On 25.08.2016 21:08, Stanislaw Bogatkin wrote:
> +1
> 
> On Thu, Aug 25, 2016 at 12:08 PM, Aleksandr Didenko wrote:
> 
> +1
> 
> On Thu, Aug 25, 2016 at 9:35 AM, Sergey Vasilenko wrote:
> 
> +1
> 
> 
> /sv
> 
> -- 
> with best regards,
> Stan.
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev