Re: [openstack-dev] [ironic] Midcycle summary part 1/6

2016-02-16 Thread Kenny Johnston
On Feb 16, 2016 11:32 PM, "Jim Rollenhagen"  wrote:
>
> Hi all,
>
> As our midcycle is virtual and split into 6 "sessions" for the sake of
> timezones, we'll be sending a brief summary of each session so that
> folks can catch up before the next one. All of this info should be on
> the etherpad as well.
>
> Session 1/6 was February 16, 1500-2000 UTC.
>
> * We talked about Mitaka priorities and progress, and plans for the
>   remainder of the cycle. Also reminded of timelines.
>   * Feb 29 is the client library deadline; we'll need an ironicclient
> release before then.
>   * Priorities for the remainder of the cycle are laid out in the
> etherpad. Top priorities are network isolation work and manual
> cleaning.
>
> * We discussed the effectiveness of setting cycle priorities.
>   * Consensus was that it was effective overall.
>   * Difficult to keep priorities in mind for reviewers
>   * jroll is going to work on a priorities dashboard, listing progress,
> outstanding patches/topics in gerrit.
>   * Concern that there are too many specs for all of us to review
> properly. Deva is going to lead work on setting up a timebox in our
> meeting to triage a few specs and decide whether each is worth
> pursuing in the short term.
>   * Deva and jroll to chat with product working group to see if we can
> mutually help each other, in an attempt to make sure we're working
> on the right things.
+1

We are meeting this week for the PWG mid-cycle. Let us know how we can
help.

>   * We all need to try not to spread ourselves too thin, by assigning
> ourselves fewer chunks of work. We also decided that each priority
> should have a group of people invested in it, rather than one person,
> so they can work together to speed up progress on it.
>
> * We brainstormed on our plans for Newton, and made an exhaustive list
>   of things we'd like to do. We won't be able to do them all, but it's a
>   good start on planning our priorities for Newton and our summit
sessions.
>
> * We made a list of contentious topics that we absolutely want to make
>   sure get discussed during this midcycle. This includes the
>   claims/filter API, boot from volume, and VLAN aware baremetal
>   instances.
>   * jroll to lay out the schedule for these.
>
> * Last, we talked about our gate and how to make it less terrible.
>   * In general, the fewer voting jobs we have, the less opportunity for
> one to fail at random.
>   * Going to make postgres non-voting.
>   * Going to remove bash ramdisk support and kill that job.
>   * Still working on tinyipa to speed things up.
> * This also allows us to do more tests in parallel.
>   * Discussed writing more tempest deploy tests to test individual
> features, rather than separate jobs.
>   * Also discussed things we want to test better once our gate is
> healthier.
>
> I think that's it from session 1.
>
> As a note, the asterisk setup that Infra provided was *fantastic*, and
> the virtual-ness of this midcycle is going better than I ever expected.
> Thanks again to the infra team for all that you do for us. <3
>
> See you all in the rest of the midcycle! :)
>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Backup port info to restore the flow rules

2016-02-16 Thread Jian Wen
Hello,

If we restart OVS or the ovs-agent while any of Neutron, MySQL, or
RabbitMQ is unavailable, the flow rules in OVS are lost. If
Neutron/MySQL/RabbitMQ doesn't become available in time, the VMs
lose their network connections. It's not easy for an operations
engineer to restore the flow rules manually, and an engineer working
under pressure at 2 a.m. will make mistakes.

We could back up the port info to a local file so that, in an
emergency, the ovs-agent can use it to restore the flow rules.
What do you think of this feature?
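A minimal sketch of what such a backup could look like (the file layout and function names are my own assumption for illustration, not an existing Neutron API): the agent atomically persists its port view on every update, and reads it back on startup if the control plane is unreachable.

```python
import json
import os
import tempfile


def save_port_info(path, ports):
    """Atomically persist the agent's port view to a local file.

    `ports` maps port-id -> whatever details the agent needs to
    rebuild flows (local VLAN, segmentation id, network type, ...).
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(ports, f)
    # os.replace is atomic on POSIX, so a crash mid-write never
    # leaves a truncated backup behind.
    os.replace(tmp, path)


def load_port_info(path):
    """Return the last saved port view, or {} if no usable backup exists."""
    try:
        with open(path) as f:
            return json.load(f)
    except (IOError, ValueError):
        return {}
```

The atomic-rename step matters here: a backup that can itself be corrupted by a crash would defeat the purpose of an emergency restore path.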

Related bugs:
Restarting neutron openvswitch agent causes network hiccup by throwing
away all flows
https://bugs.launchpad.net/neutron/+bug/1383674

Restarting OVS agent drops VMs traffic when using VLAN provider bridges
https://bugs.launchpad.net/neutron/+bug/1514056

After restarting an ovs agent, it still drops useful flows if the
neutron server is busy/down
https://bugs.launchpad.net/neutron/+bug/1515075

Ovs agent loses OpenFlow rules if OVS gets restarted while Neutron is
disconnected from SQL
https://bugs.launchpad.net/neutron/+bug/1531210


-- 
Best,

Jian


Re: [openstack-dev] [neutron][dnsmasq]DNS redirection by dnsmasq

2016-02-16 Thread Zhi Chang
hi Carl. Thanks for your reply. 


DNS redirection is a requirement from our customer. The customer runs their
own CDN and wants to keep traffic within it to reduce cost, so they asked us
to redirect certain domain names. We used the dnsmasq "--cname" option to
satisfy this need, so I think we could add a "cnames" attribute to subnets.


BTW, I don't quite understand "--cname is limited to target names known
by dnsmasq itself". Could you explain that a bit more?
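To illustrate Carl's point about the limitation, here is a hypothetical helper (the function and parameter names are mine, not Neutron code) that would build `--cname` options for a proposed subnet "cnames" attribute. Per the dnsmasq man page, a CNAME target must be a name dnsmasq itself can answer for (e.g. a DHCP lease hostname), so unknown targets are rejected up front rather than silently failing at runtime:

```python
def build_cname_opts(cnames, local_hostnames):
    """Build dnsmasq --cname command-line options.

    `cnames` maps alias -> target; `local_hostnames` is the set of
    names dnsmasq can resolve itself (e.g. DHCP lease hostnames).
    dnsmasq only rewrites a CNAME whose target it knows, so anything
    else is rejected here instead of being ignored by dnsmasq.
    """
    opts = []
    for alias, target in sorted(cnames.items()):
        if target not in local_hostnames:
            raise ValueError("cname target %s is not known to dnsmasq" % target)
        opts.append("--cname=%s,%s" % (alias, target))
    return opts
```

In other words, "--cname" cannot point a name at an arbitrary external domain; that is why it may not fit a general CDN-redirection use case without extra machinery.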


Thanks 
Zhi Chang 
 
 
-- Original --
From:  "Carl Baldwin";
Date:  Wed, Feb 17, 2016 05:54 AM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [neutron][dnsmasq]DNS redirection by dnsmasq

 
What would be the motivation for this?  Could you give some examples
of what you'd use it for?  Keep in mind that --cname is limited to
target names known by dnsmasq itself.

Carl

On Mon, Feb 15, 2016 at 2:13 AM, Zhi Chang  wrote:
> hi, guys.
>  Most of us know about DNS redirection. I think that we can implement
> DNS redirection in Neutron. dnsmasq has an option named
> "--cname" (http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html); we
> can use this option to implement the feature.
>
>  What about your idea?
>
>
> Thanks
> Zhi Chang
>



Re: [openstack-dev] [ironic] Midcycle summary part 1/6

2016-02-16 Thread James E. Blair
Jim Rollenhagen  writes:

> As a note, the asterisk setup that Infra provided was *fantastic*, and
> the virtual-ness of this midcycle is going better than I ever expected.
> Thanks again to the infra team for all that you do for us. <3

That's great to hear!

I'm looking forward to hearing what worked for you and what could be
improved.  I'm becoming a big fan of virtual sprints, but we've only
done a few of them so far.

I like the way you've set up the schedule and discrete sessions.

-Jim



Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-16 Thread Cheng, Yingxin
To better illustrate the differences between the shared-state, resource-provider,
and legacy schedulers, I've drawn three simplified diagrams [1] emphasizing the
location of the resource view, the location of claims and resource consumption,
and the resource update/refresh pattern in each scheduler. I hope I've got the
"resource-provider scheduler" part right.


My analysis comparing the three schedulers (ahead of real experiments):
1. Performance: The performance bottleneck of the resource-provider and legacy
schedulers is the centralized DB and scheduler cache refreshing. It can be
alleviated by moving to a stand-alone high-performance database, and the cache
refreshing is designed to be replaced by direct SQL queries according to the
resource-provider scheduler spec [2]. The performance bottleneck of the
shared-state scheduler may come from the overwhelming volume of update messages;
it can be alleviated by moving to a stand-alone distributed message queue and by
using the "MessagePipe" to merge messages.
2. Final decision accuracy: I think the accuracy of the final decision is high
in all three schedulers, because in each case the consistent resource view and
the final resource consumption with claims live in the same place: the resource
trackers in the shared-state and legacy schedulers, and the resource-provider
DB in the resource-provider scheduler.
3. Scheduler decision accuracy: IMO the order of accuracy of a single
scheduling decision is resource-provider > shared-state >> legacy scheduler.
The resource-provider scheduler gets an accurate resource view directly from
the DB. The shared-state scheduler gets the next most accurate resource view by
constantly collecting updates from resource trackers and by tracking the claims
in flight from schedulers to RTs. The legacy scheduler's decision is the worst
because it doesn't track its claims and gets its resource views from
compute-node records, which are not as accurate.
4. Design goal difference:
The fundamental design goals of the two new schedulers differ. Copying my view
from [2], I think it is a choice between "loose distributed consistency with
retries" and "strict centralized consistency with locks".


As the illustrations [1] show, the main compatibility issue between the
shared-state and resource-provider schedulers is the different location of
claims/consumption and of the assumed consistent resource view. IMO, unless
claims are allowed to happen in both places (the resource tracker and the
resource-provider DB), it seems difficult to make the shared-state and
resource-provider schedulers work together.
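The "MessagePipe" merging mentioned above can be sketched as follows. This is purely my guess at the intent (class and field names are mine, not the prototype's code): many incremental updates from one compute node coalesce into the latest view, so a busy scheduler drains one merged message per node instead of the whole backlog.

```python
import collections


class MessagePipe:
    """Coalesce resource-update messages per compute node.

    Each push carries a monotonically increasing sequence number so
    stale or reordered updates never overwrite a newer view.
    """

    def __init__(self):
        self._pending = collections.OrderedDict()

    def push(self, node, seq, view):
        # Keep only the newest update seen for each node.
        last = self._pending.get(node)
        if last is None or seq > last[0]:
            self._pending[node] = (seq, view)

    def drain(self):
        # Hand the scheduler one merged view per node and reset.
        merged, self._pending = self._pending, collections.OrderedDict()
        return {node: view for node, (seq, view) in merged.items()}
```

Under this scheme the message load on the scheduler is bounded by the number of compute nodes per drain interval, not by the raw update rate.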


[1] 
https://docs.google.com/document/d/1iNUkmnfEH3bJHDSenmwE4A1Sgm3vnqa6oZJWtTumAkw/edit?usp=sharing
[2] https://review.openstack.org/#/c/271823/


Regards,
-Yingxin

From: Sylvain Bauza [mailto:sba...@redhat.com]
Sent: Monday, February 15, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [nova] A prototype implementation towards the 
"shared state scheduler"


Le 15/02/2016 10:48, Cheng, Yingxin a écrit :
Thanks Sylvain,

1. The below ideas will be extended to a spec ASAP.

Nice, looking forward to it then :-)


2. Thanks for providing concerns I've not thought it yet, they will be in the 
spec soon.

3. Let me copy my thoughts from another thread about the integration with 
resource-provider:
The idea is that "only the compute node knows its own final compute-node
resource view", or "the accurate resource view only exists at the place where
it is actually consumed". I.e., incremental updates can only come from the
actual "consumption" action, no matter where it happens (e.g. compute node,
storage service, network service, etc.). Borrowing the terms from
resource-provider: each compute node can maintain an accurate version of its
"compute-node-inventory" cache and send incremental updates because it actually
consumes compute resources; likewise, a storage service can maintain an
accurate "storage-inventory" cache and send incremental updates if it consumes
storage resources. If central services are in charge of consuming all the
resources, the accurate cache and updates must come from them.


That is one of the things I'd like to see in your spec, and how you could 
interact with the new model.
Thanks,
-Sylvain




Regards,
-Yingxin

From: Sylvain Bauza [mailto:sba...@redhat.com]
Sent: Monday, February 15, 2016 5:28 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [nova] A prototype implementation towards the 
"shared state scheduler"


Le 15/02/2016 06:21, Cheng, Yingxin a écrit :
Hi,

I've uploaded a prototype https://review.openstack.org/#/c/280047/ to verify 
its design goals of accuracy, performance, reliability and compatibility 
[openstack-dev] [ironic] Midcycle summary part 2/6

2016-02-16 Thread Jim Rollenhagen
Hi all,

As our midcycle is virtual and split into 6 "sessions" for the sake of
timezones, we'll be sending a brief summary of each session so that
folks can catch up before the next one. All of this info should be on
the etherpad as well.

Session 2/6 was February 17, -0400 UTC.

* We briefly went over what happened in session 1, to make sure everyone
  was up to speed and could ask questions.

* mrda graciously showed us the work he's been doing to fit ironic into
  the openstack-ansible work. We're very excited about it, of course. :)
  Work going forward includes integrating better with bifrost to share
  common code, as well as getting good CI going on it.

* The rest of the time was spent on group review on this patch for the
  Neutron integration work: https://review.openstack.org/#/c/206244/
  A few bugs were found, one of them being pretty concerning:
  https://bugs.launchpad.net/ironic/+bug/1546371
  That bug is related to both code in that patch and code that has
  already landed in tree, so there will be some work to do to unwind
  that.

Thanks to all that came to this session. See everyone at 1500 for the
next one :)

// jim



Re: [openstack-dev] [Congress] Nominating Masahito for core

2016-02-16 Thread Anusha Ramineni
+1

Best Regards,
Anusha

On 17 February 2016 at 00:59, Peter Balland  wrote:

> +1
>
> From: Tim Hinrichs 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, February 16, 2016 at 11:15 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Congress] Nominating Masahito for core
>
> Hi all,
>
> I'm writing to nominate Masahito Muroi for the Congress core team.  He's
> been a consistent contributor for the entirety of Liberty and Mitaka, both
> in terms of code contributions and reviews.  In addition to volunteering
> for bug fixes and blueprints, he initiated and carried out the design and
> implementation of a new class of datasource driver that allows external
> datasources to push data into Congress.  He has also been instrumental in
> migrating Congress to its new distributed architecture.
>
> Tim
>
>


Re: [openstack-dev] [qa] deprecating Tempest stress framework

2016-02-16 Thread GHANSHYAM MANN
+1 Sounds good to separate those to different repo.


Regards
Ghanshyam Mann


On Wed, Feb 17, 2016 at 4:55 AM, Matthew Treinish  wrote:
> On Fri, Feb 12, 2016 at 10:21:53AM +0100, Koderer, Marc wrote:
>> I know that there are folks around that are using it.
>> +1 to move it to a separate repo.
>
> That sounds fine to me, let's mark the stress runner as deprecated and before
> we remove it someone can spin up a separate repo that owns the stress runner
> moving forward. But, I don't think we should hold up marking the deprecation
> until that exists. Let's move forward on the deprecation to give users enough
> time to prepare that the in-tree version is going away.
>
> -Matt Treinish
>
>>
>> Regards
>> Marc
>>
>> > On 11 Feb 2016, at 13:59, Daniel Mellado  
>> > wrote:
>> >
>> > +1 to that, it was my 2nd to-be-deprecated after javelin ;)
>> >
>> > El 11/02/16 a las 12:47, Sean Dague escribió:
>> >> In order to keep Tempest healthy I feel like it's time to prune things
>> >> that are outside of the core mission, especially when there are other
>> >> options out there.
>> >>
>> >> The stress test framework in tempest is one of those. It builds on other
>> >> things in Tempest, but isn't core to it.
>> >>
>> >> I'd propose that becomes deprecated now, and removed in Newton. If there
>> >> are folks that would like to carry it on from there, I think we should
>> >> spin it into a dedicated repository and just have it require tempest.
>> >>
>> >>-Sean
>> >>
>


[openstack-dev] [ironic] Midcycle summary part 1/6

2016-02-16 Thread Jim Rollenhagen
Hi all,

As our midcycle is virtual and split into 6 "sessions" for the sake of
timezones, we'll be sending a brief summary of each session so that
folks can catch up before the next one. All of this info should be on
the etherpad as well.

Session 1/6 was February 16, 1500-2000 UTC.

* We talked about Mitaka priorities and progress, and plans for the
  remainder of the cycle. Also reminded of timelines.
  * Feb 29 is the client library deadline; we'll need an ironicclient
release before then.
  * Priorities for the remainder of the cycle are laid out in the
etherpad. Top priorities are network isolation work and manual
cleaning.

* We discussed the effectiveness of setting cycle priorities.
  * Consensus was that it was effective overall.
  * Difficult to keep priorities in mind for reviewers
  * jroll is going to work on a priorities dashboard, listing progress,
outstanding patches/topics in gerrit.
  * Concern that there are too many specs for all of us to review
properly. Deva is going to lead work on setting up a timebox in our
meeting to triage a few specs and decide whether each is worth
pursuing in the short term.
  * Deva and jroll to chat with product working group to see if we can
mutually help each other, in an attempt to make sure we're working
on the right things.
  * We all need to try not to spread ourselves too thin, by assigning
ourselves fewer chunks of work. We also decided that each priority
should have a group of people invested in it, rather than one person,
so they can work together to speed up progress on it.

* We brainstormed on our plans for Newton, and made an exhaustive list
  of things we'd like to do. We won't be able to do them all, but it's a
  good start on planning our priorities for Newton and our summit sessions.

* We made a list of contentious topics that we absolutely want to make
  sure get discussed during this midcycle. This includes the
  claims/filter API, boot from volume, and VLAN aware baremetal
  instances.
  * jroll to lay out the schedule for these.

* Last, we talked about our gate and how to make it less terrible.
  * In general, the fewer voting jobs we have, the less opportunity for
one to fail at random.
  * Going to make postgres non-voting.
  * Going to remove bash ramdisk support and kill that job.
  * Still working on tinyipa to speed things up.
* This also allows us to do more tests in parallel.
  * Discussed writing more tempest deploy tests to test individual
features, rather than separate jobs.
  * Also discussed things we want to test better once our gate is
healthier.

I think that's it from session 1.

As a note, the asterisk setup that Infra provided was *fantastic*, and
the virtual-ness of this midcycle is going better than I ever expected.
Thanks again to the infra team for all that you do for us. <3

See you all in the rest of the midcycle! :)

// jim



Re: [openstack-dev] [puppet] Push Mitaka beta tag

2016-02-16 Thread Emilien Macchi


On 02/16/2016 05:09 PM, Hunter Haugen wrote:
> 
> 
> 
> 
> -Hunter
> 
> On Tue, Feb 16, 2016 at 11:44 AM, David Moreau Simard  > wrote:
> 
> On Tue, Feb 16, 2016 at 2:18 PM, Hunter Haugen
> > wrote:
> > The forge does verify that versions match semver 1.0.0, which states "a
> > pre-release version number MAY be denoted by appending an arbitrary 
> string
> > immediately following the patch version and a dash. The string MUST be
> > comprised of only alphanumerics plus dash [0-9A-Za-z-]" thus this tag 
> would
> > more appropriately be 8.0.0-beta1 (as per the given example on
> > http://semver.org/spec/v1.0.0.html).
> >
> > The forge doesn't care about tags, but if you have 8.0.0b1 in the 
> metadata
> > it will probably deny the publish.
> 
> Can we double check if that is indeed the case ?
> I feel it's important to keep the same version format as the rest of
> OpenStack.
> 
> 
>  ae% grep version metadata.json
>   "version": "8.0.0b1",
> ae% puppet module build
> Error: Invalid 'version' field in metadata.json: version string cannot
> be parsed as a valid Semantic Version
> Error: Try 'puppet help module build' for usage
> 
> So OpenStack doesn't follow semver? Afaik 8.0.0b1 doesn't even follow
> semver 2 http://semver.org/#spec-item-9


You can look http://releases.openstack.org/mitaka/index.html

We'll try to find a semver that works for Puppet, obviously.
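For illustration, a small helper (my own hypothetical sketch, not part of any release tooling) that maps an OpenStack-style pre-release version such as 8.0.0b1 to the dash-separated form semver 1.0.0 expects, e.g. 8.0.0-beta1:

```python
import re

# OpenStack pre-release suffixes -> semver 1.0.0 pre-release strings
_PRE = {"a": "alpha", "b": "beta", "rc": "rc"}


def to_forge_version(version):
    """Map e.g. '8.0.0b1' -> '8.0.0-beta1' for the Puppet Forge.

    semver 1.0.0 says a pre-release is appended after a dash and may
    contain only [0-9A-Za-z-], so '8.0.0b1' is rejected by the Forge
    while '8.0.0-beta1' is accepted.
    """
    m = re.match(r"^(\d+\.\d+\.\d+)(?:(a|b|rc)(\d+))?$", version)
    if not m:
        raise ValueError("not an OpenStack-style version: %s" % version)
    base, pre, num = m.groups()
    if pre is None:
        return base
    return "%s-%s%s" % (base, _PRE[pre], num)
```

This keeps the OpenStack version in git tags while emitting a Forge-compatible string in metadata.json.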

> 
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
> 
> dmsimard = [irc, github, twitter]
> 

-- 
Emilien Macchi





Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-16 Thread Robert Collins
I suggest:
 - pin anything that moves
 - start being strict ourselves to prepare for moving
 - work with paramiko to help them move

Sadly Python doesn't have either-or dependencies as yet, so we're
going to be in the position of having to override pip for some time
during the migration process.
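One coping pattern during that migration window — purely my illustrative sketch (the helper name is mine, and this is not an API from either library) — is to normalize key material to bytes before calling into Crypto, since pycryptodome is strict about receiving bytes where pycrypto tolerated str:

```python
def ensure_bytes(key, encoding="utf-8"):
    """Normalize key material so calls work under either library.

    pycryptodome raises on str where pycrypto silently accepted it,
    so encoding up front keeps code working against both during the
    transition.
    """
    if isinstance(key, str):
        return key.encode(encoding)
    return key
```

A shim like this does not solve the deeper packaging conflict (both distributions install the same `Crypto` package), but it removes one class of runtime breakage from callers' code.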

-Rob

On 15 February 2016 at 11:16, Davanum Srinivas  wrote:
> Hi,
>
> Short Story:
> pycryptodome if installed inadvertently will break several projects:
> Example : https://review.openstack.org/#/c/279926/
>
> Long Story:
> There's a new kid in town pycryptodome:
> https://github.com/Legrandin/pycryptodome
>
> Because pycrypto itself has not been maintained for a while:
> https://github.com/dlitz/pycrypto
>
> So folks like pysaml2 and paramiko are trying to switch over:
> https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
> https://github.com/paramiko/paramiko/issues/637
>
> In fact pysaml2===4.0.3 has already switched over. So the requirements
> bot/script has been trying to alert us to this new dependency, you can
> see Nova fail.
> https://review.openstack.org/#/c/279926/
>
> Why does it fail? For example, the new library is strict about getting
> bytes for keys and has dropped some parameters in methods. for
> example:
> https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
> https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499
>
> Another problem, if pycrypto gets installed last then things will
> work, if it pycryptodome gets installed last, things will fail. So we
> definitely cannot allow both in our global-requirements and
> upper-constraints. We can always try to pin stuff, but things will
> fail as there are a lot of jobs that do not honor upper-constraints.
> And things will fail in the field for Mitaka.
>
> Action:
> So what can we do? One possibility is to pin requirements and hope for
> the best. Another is to tolerate the install of either pycrypto or
> pycryptodome and test both combinations so we don't have to fight this
> battle.
>
> Example for Nova : https://review.openstack.org/#/c/279909/
> Example for Glance : https://review.openstack.org/#/c/280008/
> Example for Barbican : https://review.openstack.org/#/c/280014/
>
> What do you think?
>
> Thanks,
> Dims
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [mistral] More attention to PostgreSQL

2016-02-16 Thread Elisha, Moshe (Nokia - IL)
Hi,

We have more and more customers who want to run Mistral on top of a PostgreSQL 
database (instead of MySQL).
I also know that PostgreSQL is important to some of our active contributors.

Can we give PostgreSQL more attention? For example, by adding gates (like 
gate-rally-dsvm-mistral-task and gate-mistral-devstack-dsvm) that also run on 
top of PostgreSQL.

What do you think?

Thanks.


Re: [openstack-dev] [puppet] Push Mitaka beta tag

2016-02-16 Thread Hunter Haugen
-Hunter

On Tue, Feb 16, 2016 at 11:44 AM, David Moreau Simard 
wrote:

> On Tue, Feb 16, 2016 at 2:18 PM, Hunter Haugen 
> wrote:
> > The forge does verify that versions match semver 1.0.0, which states "a
> > pre-release version number MAY be denoted by appending an arbitrary
> string
> > immediately following the patch version and a dash. The string MUST be
> > comprised of only alphanumerics plus dash [0-9A-Za-z-]" thus this tag
> would
> > more appropriately be 8.0.0-beta1 (as per the given example on
> > http://semver.org/spec/v1.0.0.html).
> >
> > The forge doesn't care about tags, but if you have 8.0.0b1 in the
> metadata
> > it will probably deny the publish.
>
> Can we double check if that is indeed the case ?
> I feel it's important to keep the same version format as the rest of
> OpenStack.
>

 ae% grep version metadata.json
  "version": "8.0.0b1",
ae% puppet module build
Error: Invalid 'version' field in metadata.json: version string cannot be
parsed as a valid Semantic Version
Error: Try 'puppet help module build' for usage

So OpenStack doesn't follow semver? Afaik 8.0.0b1 doesn't even follow
semver 2 http://semver.org/#spec-item-9


> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>


Re: [openstack-dev] [infra][neutron] publish and update Gerrit dashboard link automatically

2016-02-16 Thread Carl Baldwin
Could this be done by creating a project dashboard [1]?  I think the
one thing that prevents using such a dashboard is that your script
generates a dashboard that crosses multiple projects.  So, we'd be
stuck with multiple dashboards, one per project.

The nature of your script is to create a new URL reflecting the
current state of things each time it is run.  But, it would be nice if
it were bookmark-able.  These seem to conflict.

Would it be possible to have a permanent URL that returns a "307 Temporary
Redirect" to the URL of the day? It could be bookmarked and would redirect
to the latest generated dashboard.
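A sketch of that redirect idea (handler and constant names are mine; the URL constant would be rewritten each time the dashboard script runs — here I reuse the short link from Rossella's mail purely as a stand-in):

```python
from http.server import BaseHTTPRequestHandler

# Rewritten daily by the dashboard-generating script (illustrative value).
LATEST_DASHBOARD_URL = "https://goo.gl/Hb3vKu"


def redirect_headers(latest_url):
    """Status code and headers for a bookmarkable 307 redirect."""
    return 307, [("Location", latest_url), ("Content-Length", "0")]


class DashboardRedirect(BaseHTTPRequestHandler):
    """Answer every GET with a 307 pointing at today's dashboard URL."""

    def do_GET(self):
        status, headers = redirect_headers(LATEST_DASHBOARD_URL)
        self.send_response(status)
        for name, value in headers:
            self.send_header(name, value)
        self.end_headers()
```

307 (rather than 301) is deliberate: clients must not cache it, so tomorrow's dashboard URL is picked up as soon as the constant is regenerated.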

Another idea is a page with a frame or something so that the permanent
URL stays in the browser bar.  I think I've seen web pages redirect
this way before.

Or, we could not do the fancy stuff and just have a link on a wiki or something.

No matter how it is done, there is the problem of where to host such a
page which can be automatically updated daily (or more often) by this
script.

Any thoughts from infra on this?

Carl

[1] 
https://gerrit-review.googlesource.com/Documentation/user-dashboards.html#project-dashboards

On Fri, Feb 12, 2016 at 10:12 AM, Rossella Sblendido
 wrote:
>
>
> On 02/12/2016 12:25 PM, Rossella Sblendido wrote:
>>
>> Hi all,
>>
>> it's hard sometimes for reviewers to filter reviews that are high
>> priority. In Neutron in this mail thread [1] we had the idea to create a
>> script for that. The script is now available in the Neutron repository
>> [2].
>> The script queries Launchpad and creates a file that can be used by
>> gerrit-dash-creator to display a dashboard listing patches that fix
>> critical/high bugs, that implement approved blueprint or feature
>> requests. This is how it looks like today [3].
>> For it to be really useful the dashboard link needs to be updated once a
>> day at least. Here I need your help. I'd like to publish the URL in a
>> public place and update it every day in an automated way. How can I do
>> that?
>>
>> thanks,
>>
>> Rossella
>>
>> [1]
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-November/079816.html
>>
>> [2]
>>
>> https://github.com/openstack/neutron/blob/master/tools/milestone-review-dash.py
>>
>> [3] https://goo.gl/FSKTj9
>
>
> This last link is wrong, this is the right one [1] sorry.
>
> [1] https://goo.gl/Hb3vKu
>
>
>>


Re: [openstack-dev] [neutron][dnsmasq]DNS redirection by dnsmasq

2016-02-16 Thread Carl Baldwin
What would be the motivation for this?  Could you give some examples
of what you'd use it for?  Keep in mind that --cname is limited to
target names known by dnsmasq itself.
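For reference, the dnsmasq configuration in question is a one-line alias (a sketch; the hostnames are illustrative, and per the man page the target must be a name dnsmasq itself knows, e.g. from a DHCP host entry):

```
# dnsmasq.conf sketch: answer queries for the alias with the target's record
cname=alias.example.org,target.example.org
```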

Carl

On Mon, Feb 15, 2016 at 2:13 AM, Zhi Chang  wrote:
> hi, guys.
>  Most of us know about DNS redirection. I think that we can implement
> DNS redirection in Neutron. In dnsmasq, there is an option named
> "--cname" (http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html); we
> can use this option to implement this function.
>
>  What about your idea?
>
>
> Thanks
> Zhi Chang
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-16 Thread Beliveau, Ludovic
I'm getting these nova tox errors now (from pip-missing-reqs):
Missing requirements:
nova/crypto.py:29 dist=pycrypto module=Crypto.PublicKey.RSA

I think requirements.txt should now include pycrypto?  Or am I missing
something?

Thanks,
/ludovic

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: Sunday, February 14, 2016 5:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto 
vs pycryptodome

Hi,

Short Story:
pycryptodome, if installed inadvertently, will break several projects.
Example: https://review.openstack.org/#/c/279926/

Long Story:
There's a new kid in town, pycryptodome:
https://github.com/Legrandin/pycryptodome

Because pycrypto itself has not been maintained for a while:
https://github.com/dlitz/pycrypto

So folks like pysaml2 and paramiko are trying to switch over:
https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
https://github.com/paramiko/paramiko/issues/637

In fact, pysaml2===4.0.3 has already switched over. So the requirements
bot/script has been trying to alert us to this new dependency; you can see Nova
fail:
https://review.openstack.org/#/c/279926/

Why does it fail? The new library is strict about getting bytes for keys
and has dropped some parameters in methods, for
example:
https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499
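As an illustration of the bytes-strictness problem: code that passed native str passphrases or key material to pycrypto needs to normalize to bytes to keep working when pycryptodome is installed. A minimal sketch (`ensure_bytes` is a hypothetical helper, not an existing OpenStack utility):

```python
def ensure_bytes(value, encoding='utf-8'):
    """Normalize key material or passphrases to bytes.

    pycrypto tolerated native str in several places; pycryptodome
    rejects anything that is not bytes. Encoding defensively keeps
    the same call sites working with either library installed.
    """
    if isinstance(value, str):
        return value.encode(encoding)
    return value

# A caller would then write, e.g.:
#   key.export_key(passphrase=ensure_bytes(passphrase))
```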

Another problem: if pycrypto gets installed last, things will work; if
pycryptodome gets installed last, things will fail. So we definitely cannot 
allow both in our global-requirements and upper-constraints. We can always try 
to pin stuff, but things will fail as there are a lot of jobs that do not honor 
upper-constraints.
And things will fail in the field for Mitaka.

Action:
So what can we do? One possibility is to pin requirements and hope for the 
best. Another is to tolerate the install of either pycrypto or pycryptodome and 
test both combinations so we don't have to fight this battle.

Example for Nova : https://review.openstack.org/#/c/279909/
Example for Glance : https://review.openstack.org/#/c/280008/
Example for Barbican : https://review.openstack.org/#/c/280014/

What do you think?

Thanks,
Dims


--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-16 Thread Carl Baldwin
On Mon, Feb 15, 2016 at 9:26 AM, Pavel Bondar  wrote:
> Your idea sounds workable to me. However I think a simpler way exists.

I'll be happy to review your migration for Mitaka which should be
totally out-of-band and leave both implementations intact.  As for the
details of the switch-over in Newton, I'll be happy to leave it up to
you.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral team meeting minutes

2016-02-16 Thread Renat Akhmerov
I will come :)

Renat Akhmerov
@ Mirantis Inc.



> On 15 Feb 2016, at 08:42, Nikolay Makhotkin  wrote:
> 
> Thank you for attending the meeting today! 
> 
> Next meeting is scheduled on 22 Feb. It is a non-working day in Russia, so 
> Renat, Anastasia and I very likely won't come to the meeting.  
> 
> Minutes:
> http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-02-15-16.00.html
>  
> 
>  Log:
> http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-02-15-16.00.log.html
>  
> 
> 
> 
> -- 
> Best Regards,
> Nikolay
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][glance] glance_store 0.11.0 release (mitaka)

2016-02-16 Thread no-reply
We are thrilled to announce the release of:

glance_store 0.11.0: OpenStack Image Service Store Library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/glance_store

With package available at:

https://pypi.python.org/pypi/glance_store

Please report issues through launchpad:

http://bugs.launchpad.net/glance-store

For more details, please see below.

0.11.0
^^^^^^

glance_store._drivers.gridfs


Deprecation Notes
*****************

* The gridfs driver has been removed from the tree. The environments
  using this driver that were not migrated will stop working after the
  upgrade.


Other Notes
***********

* For years, */var/lib/glance/images* has been presented as the
  default dir for the filesystem store. It was not part of the default
  value until now. New deployments and people overriding config files
  should watch for this.

* Start using reno to manage release notes.
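Given the filesystem-store note above, deployments relying on the old implicit behaviour may want to set the directory explicitly. A sketch of the relevant glance-api.conf fragment (option names per glance_store's filesystem driver; store list is illustrative):

```ini
[glance_store]
stores = file,http
default_store = file
# Now also the library default; setting it explicitly is safest across
# upgrades.
filesystem_store_datadir = /var/lib/glance/images
```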

Changes in glance_store 0.10.0..0.11.0
--

17b7c9a Remove unused parameters from swift connection init
4c4f129 Sheepdog: fix image-download failure
d413f0d LOG.warn is deprecated in python3
d5c5f42 Updated from global requirements
36f770a Updated from global requirements
8677f92 Use url_for from keystoneclient in swift store
d2efe20 Remove deprecated  datastore_name, datacenter_path
5faadde Add backend tests from glance
df24e6d Fix some inconsistency in docstrings
730d363 Updated from global requirements
16f1b70 Change Swift zero-size chunk behaviour
563536d Sheepdog: fix upload failure in API v2
a174cdd Remove unnecessary re-raise of NotFound exception
9683e3b Updated from global requirements
819c7f9 Add signature verifier to backend drivers
fc876c9 Use oslo_utils.encodeutils.exception_to_unicode()
caed3b1 Fix default mutables for set_acls
0d9ef9f Deprecate unused Exceptions
3331ac6 Remove unnecessary auth module
f39cd49 Updated from global requirements
1bbb971 Deprecate the S3 driver
a3b15e6 Document supported drivers and maintainers
d34a6e7 Remove the gridfs driver
f5fd1bc Set documented default directory for filesystem
42697d8 Imported Translations from Zanata
3b97ab9 Updated from global requirements
399aec1 Swift store: do not send a 0 byte chunk
a0620e1 Store.get_size: handle HTTPException
120f25e Replace deprecated library function os.popen() with subprocess
4340317 Updated from global requirements
f5b323f Deprecated tox -downloadcache option removed
ec02a14 Add docs section to tox.ini
64c746d Replace assertEqual(None, *) with assertIsNone in tests
d0ba6dd Updated from global requirements
fe2812c Remove duplicate keys from dictionary
38e2e0c Remove unreachable code
a4ec8a2 Sheepdog: Change storelocation format
adafb67 Updated from global requirements
36469dd Add reno for release notes management in glance_store
e3f6e23 Put py34 first in the env order of tox
ab8af2d Updated from global requirements
815e463 Add list of supported stores to help
2660ea9 Add functional testing devstack gate hooks
5b2eec7 rbd driver cannot delete residual image from ceph in some cases

Diffstat (except docs and test files)
-

.gitignore |   3 +
glance_store/_drivers/cinder.py|   4 +-
glance_store/_drivers/filesystem.py|  64 +++--
glance_store/_drivers/gridfs.py| 225 
glance_store/_drivers/http.py  |  16 +-
glance_store/_drivers/rbd.py   |  70 +++--
glance_store/_drivers/s3.py|  49 ++--
glance_store/_drivers/sheepdog.py  |  97 ---
glance_store/_drivers/swift/store.py   |  63 +++--
glance_store/_drivers/vmware_datastore.py  |  77 ++
glance_store/backend.py|  28 +-
glance_store/capabilities.py   |   4 +-
glance_store/common/auth.py| 293 -
glance_store/common/utils.py   |  16 --
glance_store/driver.py |  38 +--
glance_store/exceptions.py |  34 ++-
.../es/LC_MESSAGES/glance_store-log-warning.po |  14 +-
.../fr/LC_MESSAGES/glance_store-log-warning.po |  14 +-
glance_store/locale/glance_store.pot   | 228 
releasenotes/notes/.placeholder|   0
.../remove-gridfs-driver-09286e27613b4353.yaml |   7 +
...-directory-for-filesystem-9b417a29416d3a94.yaml |   5 +
.../notes/start-using-reno-73ef709807e37b74.yaml   |   3 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 278 +++
releasenotes/source/index.rst  |   9 +
releasenotes/source/liberty.rst|   6 +
releasenotes/source/unreleased.rst |   

Re: [openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Assaf Muller
On Tue, Feb 16, 2016 at 2:52 PM, Matthew Treinish  wrote:
> On Tue, Feb 16, 2016 at 10:07:19AM +0100, Jordan Pittier wrote:
>> Hi list,
>> I understood we need to limit the number of tests and jobs that are run for
>> each Tempest patch because our resources are not unlimited.
>>
>> In Tempest, we have 5 multinode experimental jobs:
>>
>> experimental-tempest-dsvm-multinode-full-dibtest
>> gate-tempest-dsvm-multinode-full
>> gate-tempest-dsvm-multinode-live-migration
>> gate-tempest-dsvm-neutron-multinode-full
>> gate-tempest-dsvm-neutron-dvr-multinode-full
>>
>> These jobs largely overlap with the non-multinode jobs. What about tagging
>> (with a python decorator) each test that really requires multiple nodes and
>> only run those tests as part of the multinode jobs ?
>
> So I don't think this is wise. I'm fine with adding a tag (or more
> realistically a new decorator that sets the attr and bakes in the skip checks)
> to mark tests that require more than 1 node to work. But, limiting all the
> multinode jobs to just that set doesn't make too much sense to me. For most of
> those jobs you listed the point is to verify that things work the
> same at >1 node, not just features that require more than 1 node. (with likely
> the exception of the live-migration job which I assume just runs live 
> migration
> tests)
>
> What is probably a better question to ask is why we need 5 different 
> multi-node
> jobs in the tempest experimental queue? Tempest will always have a higher than
> average number of tempest-dsvm jobs running because so much of the code is 
> self
> verifying. But, do all of those jobs really improve our coverage of tempest
> code? Like what does the dibtest job buy us? Or why do we need 2 different
> types of neutron deployments running?

I can't speak for the other three jobs, but these two:
gate-tempest-dsvm-neutron-multinode-full
gate-tempest-dsvm-neutron-dvr-multinode-full

Are both in the check queue and are non-voting. Both have been hovering
around a 50% failure rate for a while now. Ihar and Sean (CC'd) are
working on the non-DVR job, solving issues around MTU.

>
> -Matt Treinish
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO]: CI outage yesterday/today

2016-02-16 Thread Dan Prince
Just a quick update about the CI outage today and yesterday. Turns out
our jobs weren't running due to a bad Keystone URL (it was pointing to
localhost:5000 instead of our public SSL endpoint).

We've now fixed that issue and I'm told that as soon as Infra restarts
nodepool (they cache the keystone endpoints) we should start processing
jobs again.

Wait on it...

http://status.openstack.org/zuul/

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest removal of tempest.api.compute.test_authorization

2016-02-16 Thread Matthew Treinish
On Tue, Feb 09, 2016 at 07:41:18PM -0500, Sean Dague wrote:
> As proposed in this patch - https://review.openstack.org/#/c/271467/
> 
> These tests do some manipulation of the request URI to attempt to
> request Nova resources owned by a different project_id. However they all
> basically test the same 7 lines of Nova, and now that Nova doesn't
> require project_id in the url it's actively harmful to moving forward.
> 

Looking at the review it looks like people working on defcore chimed in and said
that they are ok with these tests being removed. So my primary concern with
removing the tests seems to be addressed. It looks like all 3 criteria from:

https://wiki.openstack.org/wiki/QA/Tempest-test-removal

have been addressed. (since there is equivalent coverage elsewhere and the
tests haven't ever really failed according to [1]) But, before I'm +2 on the
change I had a couple of questions. Do you think there is a potential exposure
being opened up here across release boundaries? Is there still value in running
the tests on either stable releases or existing deployments? Tempest is used for
more than just gate testing master and I just want to make sure we're not
opening a hole in our coverage to make some nova changes on master easier.

-Matt Treinish

[1] 
http://status.openstack.org/openstack-health/#/tests?groupKey=project=hour=AuthorizationTestJSON




Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-16 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2016-02-16 19:47:11 +:
> On Tue, 16 Feb 2016, Doug Hellmann wrote:
> 
> [lots of reasonable stuff snipped]
> 
> > I think we should be looking for
> > ways to say "yes" to new projects, rather than "no."
> 
> I think the opposite is worth thinking about. Maybe we should be
> defaulting to "no". Not because candidates are bad, but because
> unconstrained growth is. OpenStack is already big enough and resources
> are limited. Maybe we should only add stuff that explicitly adds value
> to OpenStack.

If we want to do that, we should change the rules because we put
the current set of rules in place specifically to encourage more
project teams to join officially. We can do that, but that discussion
deserves its own thread.

> 
> For the example of Poppy, there is nothing that requires it be a part
> of OpenStack for it to be useful to OpenStack nor for it to exist as
> a valuable part of the open source world.

Nor is there for lots of our existing official projects. Which ones
should we remove?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Doug Hellmann
Excerpts from Edward Leafe's message of 2016-02-16 13:46:50 -0600:
> On Feb 16, 2016, at 1:30 PM, Doug Hellmann  wrote:
> 
> > So I think the project team is doing everything we've asked.  We
> > changed our policies around new projects to emphasize the social
> > aspects of projects, and community interactions. Telling a bunch
> > of folks that they "are not OpenStack" even though they follow those
> > policies is rather distressing.  I think we should be looking for
> > ways to say "yes" to new projects, rather than "no."
> 
> So if some group creates a 100% open project, and follows all of the Opens, 
> at what point do we consider relevance to OpenStack itself? How about a 100% 
> open text editor? Can we add that, since after all OpenStack code is written 
> with text editors?

We do have a relevance clause. Whether or not Poppy is relevant wasn't
part of the discussion, up to this point.

> 
> CDNs are not part of OpenStack, even if some parts of some projects may use 
> one from time to time. A common interface to CDNs seems to address a problem 
> that is not really part of what OpenStack does.

Can you explain why? Because I see cloud deployments with CDN APIs,
and I think that makes it relevant to clouds. I also see it as
relevant to deployers of OpenStack who want to support interoperable
APIs while still retaining a choice about what backend they implement,
which is exactly what the other OpenStack services do with their
drivers.

> 
> It's great to want to include people, but I think that there is more to it 
> than just whether their practices mirror ours.

OK. Are we changing what the argument related to Poppy is about, though,
to find a different reason to exclude them?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] End of the openstack-qa mailing list

2016-02-16 Thread Matthew Treinish
Hi everyone,

I just wanted to send an announcement to the ML that we've officially shut down
the openstack-qa ML. We had basically stopped using this list completely 2-3
years ago, but kept it alive because the periodic job results needed a place to
be sent. However, with the introduction of openstack-health last cycle that need
has been addressed. Now we can track periodic job test results in a single place at:

http://status.openstack.org/openstack-health/#/g/build_queue/periodic

and we can evolve the UI to suit our needs and address any gaps or limitations.

So last week I changed the ML settings to stop advertising that this list
exists in the list of available MLs at http://lists.openstack.org . I also made
it so any messages sent to the list are just dropped. I'm not able to
completely delete this ML because doing so would mean we lose the archives, and
since this was once an active ML I wasn't comfortable doing that. The archives can
be found here:

http://lists.openstack.org/pipermail/openstack-qa/

-Matt Treinish




Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Dean Troyer
On Tue, Feb 16, 2016 at 1:16 PM, Brant Knudson  wrote:

> This is pretty much the same as your example of specifying a different
> version for the nova API on different requests (except they have a lot more
> than just 2 and 3). We also keep adding routes to v3 each release, so what
> operations are supported keeps changing.
>

The point was more about the client/consumer of the API needing to do it
all at once, which seems to be the perception among the server projects.
If the new features had been incremental that might have been a different
outcome.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Decision of how to manage stable/liberty from Kolla Midcycle

2016-02-16 Thread Steven Dake (stdake)
Comments inline


From: Sam Yaple
Reply-To: "s...@yaple.net", "OpenStack Development Mailing List
(not for usage questions)"
Date: Tuesday, February 16, 2016 at 11:42 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [kolla] Decision of how to manage stable/liberty 
from Kolla Midcycle

On Tue, Feb 16, 2016 at 6:15 PM, Steven Dake (stdake) wrote:
Hey folks,

We held a midcycle Feb 9th and 10th.  The full notes of the midcycle are here:
https://etherpad.openstack.org/p/kolla-mitaka-midcycle

We had 3 separate ~40 minute sessions on making stable stable.  The reason for 
so many sessions on this topic was that it took a long time to come to an 
agreement about the problem and solution.

There are two major problems with stable:
Stable is hard-pinned to Docker 1.8.2.  Ansible 1.9.4 is the last version of 
Ansible in the 1.x series, and its docker module is totally busted with Docker 
1.8.3 and later.

Stable uses data containers.  Data containers used with Ansible can, in some 
very limited instances, such as an upgrade of the data container image, result 
in data loss.  We didn't really recognize this until recently.  We can't really 
fix Ansible to behave correctly with the data containers.

This point is not correct. This is not an issue with Ansible, but rather with 
Docker and persistent data. The solution to this problem is named volumes, 
which Docker has been moving toward and which were released in Docker 1.9.

Agreed.



The solution:
Use the kolla-docker.py module to replace Ansible's built-in docker module.  
This is not a fork of that module from Ansible's upstream, so it has no GPLv3 
licensing concerns.  Instead it's freshly written code in master.  This allows 
the Kolla upstream to implement support for any version of docker we prefer.

We will be making Docker 1.9 (and possibly 1.10, depending on the outcome of a 
thin containers vote) the minimum version required to run stable/liberty.

We will be replacing the data containers with named volumes.  Named volumes 
offer similar functionality (persistent data containment) with a different 
implementation.  They were introduced in Docker 1.9 because data containers 
have many shortcomings.
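For illustration, the difference between the two patterns looks roughly like this (a sketch with a hypothetical MariaDB image and container names; not Kolla's actual playbook code):

```shell
# Old pattern: a data container whose volumes other containers join.
# Recreating or upgrading the data container can orphan the data.
docker create -v /var/lib/mysql --name mariadb_data mariadb_image true
docker run -d --volumes-from mariadb_data --name mariadb mariadb_image

# Named-volume pattern (Docker >= 1.9): the volume exists independently
# of any container's lifecycle, so an image upgrade cannot take it away.
docker volume create --name mariadb_data
docker run -d -v mariadb_data:/var/lib/mysql --name mariadb mariadb_image
```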

This will require some rework on the playbooks.  Rather than backport the 900+ 
patches that have entered master since liberty, we are going to surgically 
correct the problems with named volumes.  We suspect this work will take 4-6 
weeks to complete and will be fewer than 15 patches on top of stable/liberty.  
The numbers here are just estimates; it could be more or less, but on that 
order of magnitude.

The above solution is what we decided we would go with, after nearly 3 hours of 
debate ;)  If I got any of that wrong, please feel free to chime in for folks 
that were there.  Note there was a majority of core reviewers present, and 
nobody raised an objection to this plan of activity, so I'd consider it voted and 
approved :)  There was not a majority approval for another proposal to backport 
thin containers for neutron which I will handle in a separate email.

Going forward, my personal preference is that we make stable branches 
low-rate-of-change branches, rather than, as the name is misused to imply, 
branches with a high rate of backports to fix problems.  We will have further 
design sessions about stable branch maintenance at the Austin ODS.

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


To add to this, this would be a y change to Kolla. So this release would be a 
1.1.0 release rather than a 1.0.1 release. y releases are not desired, but in 
this case would be needed to do the changes we propose.

Thanks Sam, I definitely left this key information out unintentionally and it 
is important; we will tag 1.1.0 when this work is completed and that will not 
present a data loss scenario.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] deprecating Tempest stress framework

2016-02-16 Thread Matthew Treinish
On Fri, Feb 12, 2016 at 10:21:53AM +0100, Koderer, Marc wrote:
> I know that there are folks around that are using it.
> +1 to move it to a separate repo.

That sounds fine to me, let's mark the stress runner as deprecated and before
we remove it someone can spin up a separate repo that owns the stress runner
moving forward. But, I don't think we should hold up marking the deprecation
until that exists. Let's move forward on the deprecation to give users enough
time to prepare that the in-tree version is going away.

-Matt Treinish

> 
> Regards
> Marc
> 
> > On 11 Feb 2016, at 13:59, Daniel Mellado  wrote:
> > 
> > +1 to that, it was my 2nd to-be-deprecated after javelin ;)
> > 
> > El 11/02/16 a las 12:47, Sean Dague escribió:
> >> In order to keep Tempest healthy I feel like it's time to prune things
> >> that are outside of the core mission, especially when there are other
> >> options out there.
> >> 
> >> The stress test framework in tempest is one of those. It builds on other
> >> things in Tempest, but isn't core to it.
> >> 
> >> I'd propose that becomes deprecated now, and removed in Newton. If there
> >> are folks that would like to carry it on from there, I think we should
> >> spin it into a dedicated repository and just have it require tempest.
> >> 
> >>-Sean
> >> 




Re: [openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Matthew Treinish
On Tue, Feb 16, 2016 at 10:07:19AM +0100, Jordan Pittier wrote:
> Hi list,
> I understood we need to limit the number of tests and jobs that are run for
> each Tempest patch because our resources are not unlimited.
> 
> In Tempest, we have 5 multinode experimental jobs:
> 
> experimental-tempest-dsvm-multinode-full-dibtest
> gate-tempest-dsvm-multinode-full
> gate-tempest-dsvm-multinode-live-migration
> gate-tempest-dsvm-neutron-multinode-full
> gate-tempest-dsvm-neutron-dvr-multinode-full
> 
> These jobs largely overlap with the non-multinode jobs. What about tagging
> (with a python decorator) each test that really requires multiple nodes and
> only run those tests as part of the multinode jobs ?

So I don't think this is wise. I'm fine with adding a tag (or more
realistically a new decorator that sets the attr and bakes in the skip checks)
to mark tests that require more than 1 node to work. But, limiting all the
multinode jobs to just that set doesn't make too much sense to me. For most of
those jobs you listed the point is to verify that things work the
same at >1 node, not just features that require more than 1 node. (with likely
the exception of the live-migration job which I assume just runs live migration
tests)

What is probably a better question to ask is why we need 5 different multi-node
jobs in the tempest experimental queue? Tempest will always have a higher than
average number of tempest-dsvm jobs running because so much of the code is self
verifying. But, do all of those jobs really improve our coverage of tempest
code? Like what does the dibtest job buy us? Or why do we need 2 different
types of neutron deployments running?

-Matt Treinish






[openstack-dev] [tacker][networking-sfc] Tacker VNFFG - SFC integration updates

2016-02-16 Thread Sridhar Ramaswamy
Hi folks,

Based on the recent discussions in [1] & [2], we are proposing to rearrange
our tasks related to integrating Tacker's VNFFG component with the lower-level
SFC APIs. We now plan to integrate with the networking-sfc APIs first.

Our original plan, or rather sequence of tasks, was:

1) Tacker VNFFG plugin (trozet)
2) Tacker VNFFG plugin --> ODL/netvirtsfc driver backend (trozet)
3) Tacker VNFFG plugin --> networking-sfc driver backend (s3wong)
4) networking-sfc --> ODL/netvirtsfc driver backend (TBD)

We now propose to alter the sequence of tasks to something like this,

1) Tacker VNFFG plugin (trozet)
2) Tacker VNFFG plugin --> networking-sfc driver backend (s3wong) - *that
is, introduce this as the first driver backend for Tacker VNFFG instead of
direct ODL/netvirtsfc driver*
3) Use the code written by Tim (trozet) for Tacker's ODL/netvirtsfc driver
backend and help to further networking-sfc's ODL integration efforts.

Quick note on (3) above: networking-sfc already has a driver for the ONOS SDN
Controller [3], so it should be reasonably easy to bring in an ODL driver for
networking-sfc. This might take slightly longer than what we ideally wanted
for some short-term PoCs, but it positions us to reach the eventual end goal
faster.

Comments, suggestions welcome!

thanks,
Sridhar

[1] https://review.openstack.org/#/c/276417
[2]
http://eavesdrop.openstack.org/meetings/tacker/2016/tacker.2016-02-16-17.00.log.html
[3]
https://github.com/openstack/networking-onos/blob/master/networking_onos/services/sfc/driver.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-16 Thread Chris Dent

On Tue, 16 Feb 2016, Doug Hellmann wrote:

[lots of reasonable stuff snipped]


I think we should be looking for
ways to say "yes" to new projects, rather than "no."


I think the opposite is worth thinking about. Maybe we should be
defaulting to "no". Not because candidates are bad, but because
unconstrained growth is. OpenStack is already big enough and resources
are limited. Maybe we should only add stuff that explicitly adds value
to OpenStack.

For the example of Poppy, there is nothing that requires it be a part
of OpenStack for it to be useful to OpenStack nor for it to exist as
a valuable part of the open source world.

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [neutron] [networking-sfc] networking-sfc project IRC meeting

2016-02-16 Thread Cathy Zhang
Hi,

I am at a container conference and won't be able to chair this week's IRC 
meeting.
Paul Carver has volunteered to chair this week's meeting. Thanks Paul!
If there are any topics you would like to discuss, you can post them on the 
meeting wiki page

https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting

Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Edward Leafe
On Feb 16, 2016, at 1:30 PM, Doug Hellmann  wrote:

> So I think the project team is doing everything we've asked.  We
> changed our policies around new projects to emphasize the social
> aspects of projects, and community interactions. Telling a bunch
> of folks that they "are not OpenStack" even though they follow those
> policies is rather distressing.  I think we should be looking for
> ways to say "yes" to new projects, rather than "no."

So if some group creates a 100% open project and follows all of the Opens, at
what point do we consider relevance to OpenStack itself? How about a 100% open
text editor? Can we add that, since after all OpenStack code is written with
text editors?

CDNs are not part of OpenStack, even if some parts of some projects may use one 
from time to time. A common interface to CDNs seems to address a problem that 
is not really part of what OpenStack does.

It's great to want to include people, but I think that there is more to it than 
just whether their practices mirror ours.

-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Fox, Kevin M
Most of the "open core" issues we've run into time and time again were because 
one company had control of both an open and a closed edition and used their 
sole control to prevent features from entering the open edition because they 
wanted to make money off of it in the closed edition.

This most certainly does not apply to Poppy. It has both of the things that make
an open source project truly open: an open license, and open governance. It's the
latter that prevents vendor lock-in. The irony is, Poppy wants to join
OpenStack to cement that governance and prevent vendor control, which is
being used to try and exclude it.

The nice thing about standardizing on Poppy would be it might lower the bar for 
a truly open CDN to be created because there might finally be visible enough 
demand for it. "My users want the api, and there isn't an open backend.. let me 
go write one"

The open source world steps up with an implementation of something when it
finally decides to scratch the itch. An open CDN has not been a significant
enough itch to scratch so far.

An open API for CDNs, on the other hand, does seem like a useful thing to me,
separate from an open CDN to back it. I'd greatly prefer to write apps against
such a system instead of targeting the proprietary APIs directly. It makes the
open source code I write more open.

Yes, this is a sucky position to be in, but I think in this one case, it's the
lesser of two evils.

Thanks,
Kevin

From: Dean Troyer [dtro...@gmail.com]
Sent: Tuesday, February 16, 2016 10:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

On Tue, Feb 16, 2016 at 11:51 AM, Amit Gandhi wrote:
Poppy intends to be an abstraction API over the various CDNs available.  We do 
not want to be in the business of building a CDN itself.

Specific to the Poppy discussion, I think this is another point that makes 
Poppy not part of OpenStack: none of the services it wants to abstract are part 
of OpenStack.  I came up with another example of a project that takes the 
Compute API and translates it to gce or aws calls but we actually have this 
situation with the ESX hypervisor driver and multiple Cinder backends to 
commercial products.  But in those cases, the project is otherwise useful to an 
OpenStack deployment that does not include those products.

Now for the actual thread topic of open core and non-free backends.  We seem to 
be haggling over the declaration of one or more specific projects as 'open 
core-like', where the real discussion is in defining the general terms.  I have 
always thought of 'open core' in the Eucalyptus sense like what partially 
spurred the creation of Nova in the first place.  The same entity controlled 
both an 'open source' project and a commercial product based on it, where 
certain features/capabilities were only available in the non-free version.

What I think is important here is 'the same entity'.  This is where we need to 
be concerned, specifically with organizations that attempt to extract value 
from OpenStack yet hinder our advancement in return by holding back 
contributions.  And this may be the one place where the 'viral' nature of the 
GPL would have been useful to us in a business relationship. [[NO I do not 
intend to open that can'o'worms, only to illustrate our lack of that built-in 
mechanism to lessen the impact of open core]]

dt


--

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Doug Hellmann
Excerpts from Dean Troyer's message of 2016-02-16 12:57:58 -0600:
> On Tue, Feb 16, 2016 at 11:51 AM, Amit Gandhi 
> wrote:
> 
> > Poppy intends to be an abstraction API over the various CDNs available.
> > We do not want to be in the business of building a CDN itself.
> >
> 
> Specific to the Poppy discussion, I think this is another point that makes
> Poppy not part of OpenStack: none of the services it wants to abstract are
> part of OpenStack.  I came up with another example of a project that takes
> the Compute API and translates it to gce or aws calls but we actually have
> this situation with the ESX hypervisor driver and multiple Cinder backends
> to commercial products.  But in those cases, the project is otherwise
> useful to an OpenStack deployment that does not include those products.

I'm not sure why it should be abstracting anything that's already part
of OpenStack. That only makes sense for some types of projects, and
while "CDN-as-a-service" might be interesting that's not what the Poppy
team is building.

> Now for the actual thread topic of open core and non-free backends.  We
> seem to be haggling over the declaration of one or more specific projects
> as 'open core-like', where the real discussion is in defining the general
> terms.  I have always thought of 'open core' in the Eucalyptus sense like
> what partially spurred the creation of Nova in the first place.  The same
> entity controlled both an 'open source' project and a commercial product
> based on it, where certain features/capabilities were only available in the
> non-free version.

I brought up Poppy to illustrate the point that definitions that
seem clear in the abstract can be less clear in the concrete. All
of the Poppy code itself is open. There is no more advanced or more
complete version available elsewhere. Yes, it requires some other
service to run.  None of the service providers control the project,
which as you point out below is the source of concern with "open
core".

So I think the project team is doing everything we've asked.  We
changed our policies around new projects to emphasize the social
aspects of projects, and community interactions. Telling a bunch
of folks that they "are not OpenStack" even though they follow those
policies is rather distressing.  I think we should be looking for
ways to say "yes" to new projects, rather than "no."

Doug

> What I think is important here is 'the same entity'.  This is where we need
> to be concerned, specifically with organizations that attempt to extract
> value from OpenStack yet hinder our advancement in return by holding back
> contributions.  And this may be the one place where the 'viral' nature of
> the GPL would have been useful to us in a business relationship. [[NO I do
> not intend to open that can'o'worms, only to illustrate our lack of that
> built-in mechanism to lessen the impact of open core]]
> 
> dt
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Nominating Masahito for core

2016-02-16 Thread Peter Balland
+1

From: Tim Hinrichs
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, February 16, 2016 at 11:15 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Congress] Nominating Masahito for core

Hi all,

I'm writing to nominate Masahito Muroi for the Congress core team.  He's been a 
consistent contributor for the entirety of Liberty and Mitaka, both in terms of 
code contributions and reviews.  In addition to volunteering for bug fixes and 
blueprints, he initiated and carried out the design and implementation of a new 
class of datasource driver that allows external datasources to push data into 
Congress.  He has also been instrumental in migrating Congress to its new 
distributed architecture.

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Clark Boylan
On Tue, Feb 16, 2016, at 10:42 AM, Assaf Muller wrote:
> On Tue, Feb 16, 2016 at 12:26 PM, Clark Boylan 
> wrote:
> > On Tue, Feb 16, 2016, at 01:07 AM, Jordan Pittier wrote:
> >> Hi list,
> >> I understood we need to limit the number of tests and jobs that are run
> >> for
> >> each Tempest patch because our resources are not unlimited.
> >>
> >> In Tempest, we have 5 multinode experimental jobs:
> >>
> >> experimental-tempest-dsvm-multinode-full-dibtest
> >> gate-tempest-dsvm-multinode-full
> >> gate-tempest-dsvm-multinode-live-migration
> >> gate-tempest-dsvm-neutron-multinode-full
> >> gate-tempest-dsvm-neutron-dvr-multinode-full
> >>
> >> These jobs largely overlap with the non-multinode jobs. What about
> >> tagging
> >> (with a python decorator) each test that really requires multiple nodes
> >> and
> >> only run those tests as part of the multinode jobs ?
> >
> > One of the goals I had was to hopefully replace the single node jobs
> > with the multinode jobs because as you point out there is a lot of
> > redundancy and 2 VMs < 3 VMs. One of the prerequisites for this to
> > happen is to have an easy way to reproduce the multinode test envs using
> > something like vagrant. I have been meaning to work on that this cycle
> > but adding new cloud resources (and keeping existing resources happy)
> > have taken priority.
> 
> These are not conflicting efforts, are they? We could attack it on
> both fronts: Send a patch that tags a dozen (?) or so tests with
> 'multinode' run only those in the multinode jobs. You could accomplish
> that almost immediately. In parallel work on replacing the single node
> jobs with multinode (And then change their tests regex from
> 'multinode' back to full).
> 

If you do this then you no longer know if multinode works completely so
you can regress while you only run some subset of tests. If that happens
(and history has shown the likelihood is high) then you have to debug
multinode all over again to get it working.

Clark
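For what it's worth, the tagging Jordan proposes could be sketched as a plain
attribute decorator plus a selector that a multinode job would apply. The
decorator name and selection mechanism below are illustrative only, not the
actual Tempest API:

```python
# Sketch: tag tests that genuinely need more than one node, then let a
# job select only those. Illustrative names, not real Tempest code.

def multinode(func):
    """Mark a test as requiring more than one node."""
    func._test_tags = getattr(func, '_test_tags', set()) | {'multinode'}
    return func

class TestLiveMigration:
    @multinode
    def test_block_migration(self):
        pass

    def test_boot_server(self):  # runs fine on a single node
        pass

def select_tests(cls, tag):
    """Return the test method names carrying the given tag."""
    return sorted(
        name for name in dir(cls)
        if name.startswith('test_')
        and tag in getattr(getattr(cls, name), '_test_tags', set())
    )

print(select_tests(TestLiveMigration, 'multinode'))
# -> ['test_block_migration']
```

A multinode job would then run only the selected subset while the regular jobs
keep running everything; the trade-off is exactly the regression risk Clark
describes above.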

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] New BP for live migration with direct pci passthru

2016-02-16 Thread Ian Wells
In general, while you've applied this to networking (and it's not the first
time I've seen this proposal), the same technique will work with any device
- PF or VF, networking or other:

- notify the VM via an accepted channel that a device is going to be
temporarily removed
- remove the device
- migrate the VM
- notify the VM that the device is going to be returned
- reattach the device

Note that, in the above, I've not used the terms 'PF', 'VF', 'NIC' or 'qemu'.

You would need to document what assumptions the guest is going to make (the
reason I mention this is I think it's safe to assume the device has been
recently reset here, but for a network device you might want to consider
whether the device will have the same MAC address or number of tx and rx
buffers, for instance).

The method of notification I've deliberately skipped here; you have an
answer for qemu, qemu is not the only hypervisor in the world so this will
clearly be variable.  A metadata server mechanism is another possibility.

Half of what you've described is one model of how the VM might choose to
deal with that (and a suggestion that's come up before, in fact). That's a
model we would absolutely want OpenStack to support (and I think the above
is sufficient to support it), but we can't easily mandate how VMs behave,
so from the OpenStack perspective it's more a recommendation than anything
we can code up.
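For illustration, the five steps above can be sketched hypervisor-agnostically.
The notify/detach/attach primitives here are hypothetical stand-ins for
whatever the virt layer actually provides (QGA for qemu, a metadata-server
poke, etc.); nothing below is a real Nova or libvirt API:

```python
# Sketch of the detach / migrate / reattach flow for a passthru device.
# All interfaces are hypothetical placeholders for the virt driver.

def migrate_with_passthru(vm, device, target, notify, hypervisor):
    notify(vm, "device-removal-pending", device)   # 1. warn the guest
    hypervisor.detach(vm, device)                  # 2. hot-unplug
    hypervisor.migrate(vm, target)                 # 3. live-migrate
    notify(vm, "device-return-pending", device)    # 4. warn the guest again
    hypervisor.attach(vm, device)                  # 5. hot-plug on target

class FakeHypervisor:
    """Records calls so the ordering can be inspected."""
    def __init__(self, trace):
        self.trace = trace
    def detach(self, vm, device):
        self.trace.append(("detach", device))
    def migrate(self, vm, target):
        self.trace.append(("migrate", target))
    def attach(self, vm, device):
        self.trace.append(("attach", device))

trace = []
migrate_with_passthru(
    "vm-1", "vf-0000:04:10.1", "host-2",
    lambda vm, event, dev: trace.append(("notify", event)),
    FakeHypervisor(trace))
print(trace)
```

The point of keeping the primitives abstract is the one made above: the same
sequence works for any device, and only the notification channel is
hypervisor-specific.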


On 15 February 2016 at 23:25, Xie, Xianshan  wrote:

> Hi, Fawad,
>
>
>
>
>
> > Can you please share the link?
>
>
> https://blueprints.launchpad.net/nova/+spec/direct-pci-passthrough-live-migration
>
>
>
> Thanks in advance.
>
>
>
>
>
> Best regards,
>
> xiexs
>
>
>
> *From:* Fawad Khaliq [mailto:fa...@plumgrid.com]
> *Sent:* Tuesday, February 16, 2016 1:19 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [nova][neutron] New BP for live migration
> with direct pci passthru
>
>
>
> On Mon, Feb 1, 2016 at 3:25 PM, Xie, Xianshan 
> wrote:
>
> Hi, all,
>   I have registered a new BP about the live migration with a direct pci
> passthru device.
>   Could you please help me to review it? Thanks in advance.
>
>
>
> Can you please share the link?
>
>
>
>
> The following is the details:
>
> --
> SR-IOV has been supported for a long while, in the community's point of
> view,
> the pci passthru with Macvtap can be live migrated possibly, but the
> direct pci passthru
> seems hard to implement the migration as the passthru VF is totally
> controlled by
> the VMs so that some internal states may be unknown by the hypervisor.
>
> But we think the direct pci passthru model can also be live migrated with
> the
> following combination of a series of technology/operation based on the
> enhanced
> Qemu-Geust-Agent(QGA) which has already been supported by nova.
>1)Bond the direct pci passthru NIC with a virtual NIC.
>  This will keep the network connectivity during the live migration.
>2)Unenslave the direct pci passthru NIC
>3)Hot-unplug the direct pci passthru NIC
>4)Live-migrate guest with the virtual NIC
>5)Hot-plug the direct pci passthru NIC on the target host
>6)Enslave the direct pci passthru NIC
>
> More information about this concept can be found in [1].
> [1]https://www.kernel.org/doc/ols/2008/ols2008v2-pages-261-267.pdf
>
> --
>
> Best regards,
> Xiexs
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Brant Knudson
On Tue, Feb 16, 2016 at 1:04 PM, Dean Troyer  wrote:

>
>
> On Tue, Feb 16, 2016 at 12:17 PM, Sean Dague  wrote:
>
>> Honestly, doing per API call version switching is probably going to end
>> in tears. HTTP is stateless, so it's allowed, but it will end in tears
>> of complexity as you need to self modify resources before passing them
>> back. Or follow links that don't exist.
>>
>
> Maybe.  But any code that knows it is using a specific version already is
> handling the special cases.  We are trading the speed of implementation for
> the juggling of multiple versions, no matter if they are labelled
> sequentially or via semver.  I've seen places that mixed Identity v2 and v3
> because of the way the APIs worked and were more convenient.  Same problem,
> bigger delta per API rev vs only one rev.
>
> It's also worth looking at the changes we've actually been making here
>> instead of theoretical examples. The amount of effort to make an
>> application use 2.20 instead of 2.1 is pretty minimal.
>
>
> Sure, but there is nothing in place to prevent that contrived example. The
> first time an addition is made of what would have been an extension in the
> past we will be there.  [and no, this is in no way a defense of extensions
> ;).
>
> If Identity had micro-versioned their way between V2 and v3 would the
> other projects been able to convert faster due to not having to do it all
> at once?
>
>
You didn't have to convert from V2 to V3 all at once, and we didn't. I
assume there is still some use of V2 in gate runs in addition to using V3.

Keystone supports the V2 API on /v2.0 and the V3 API on /v3. So you could
make some requests to V2 and other requests to V3. You could get a token
using V2 and use it on V3, or get a token using V3 and use it to do V2
operations. You can't use new V3 features when accessing the V2 API (like
domains).

This is pretty much the same as your example of specifying a different
version for the nova API on different requests (except they have a lot more
than just 2 and 3). We also keep adding routes to v3 each release, so what
operations are supported keeps changing.

 - Brant
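The per-request version switching being debated is easy to see with Nova's
microversion header. A minimal sketch follows; the token and version values
are placeholders, though `X-OpenStack-Nova-API-Version` is the header Nova
honours for microversion selection:

```python
# Sketch: building per-request headers with an optional Nova
# microversion pin. Token value is a placeholder.

def nova_headers(token, microversion=None):
    """Build request headers, optionally pinning a Nova microversion.

    HTTP being stateless, nothing stops a client sending a different
    microversion on every call -- which is exactly the per-call
    switching discussed above, tears and all.
    """
    headers = {"X-Auth-Token": token, "Accept": "application/json"}
    if microversion:
        headers["X-OpenStack-Nova-API-Version"] = microversion
    return headers

# One request at the 2.1 baseline, the next at 2.20 -- legal, but the
# caller now owns the complexity of two response schemas.
print(nova_headers("tok")["Accept"])
print(nova_headers("tok", "2.20")["X-OpenStack-Nova-API-Version"])
```

Omitting the header gets the server's minimum version, which is why mixing
pinned and unpinned calls in one client is where the complexity creeps in.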

dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Sean M. Collins
Doug Hellmann wrote:
> Is there? I thought the point was OpenCDN isn't actually usable. Maybe
> someone from the Poppy team can provide more details about that.

That is certainly a problem. However I think I would lean on Sean
Dague's argument about how Neutron had an open source solution that
needed a lot of TLC. The point being that at least they had 1 option.
Not zero options.

And Dean's point about gce and aws API translation into OpenStack
Compute is also very relevant. We have precedent for doing API
translation layers that take some foreign API and translate it into
"openstackanese".

I think Poppy would have a lot easier time getting into OpenStack were
it to take the steps to build a back-end that would do the required
operations to create a CDN - using a multi-region OpenStack cloud. Or
even adopting an open source CDN. Something! Anything really!

Yes, it's a lot of work, but without that, as I think others have
stated - where's the OpenStack part?

Like that Wendy's commercial from way back: "Where's the beef?"

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Nominating Masahito for core

2016-02-16 Thread Tim Hinrichs
Hi all,

I'm writing to nominate Masahito Muroi for the Congress core team.  He's
been a consistent contributor for the entirety of Liberty and Mitaka, both
in terms of code contributions and reviews.  In addition to volunteering
for bug fixes and blueprints, he initiated and carried out the design and
implementation of a new class of datasource driver that allows external
datasources to push data into Congress.  He has also been instrumental in
migrating Congress to its new distributed architecture.

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Sean M. Collins
Thomas Goirand wrote:
> s/I dislike/is not free software/ [*]
> 
> It's not a mater of taste. Having Poppy requiring a non-free component,
> even indirectly (ie: the Oracle JVM that CassandraDB needs), makes it
> non-free.

Your definition of non-free versus free, if I am not mistaken, is
based on GPLv3. OpenStack is not GPL licensed

I understand and respect the point of view of the Debian project on
this, however OpenStack is an Apache licensed project. So, this is
entirely your bikeshed.

> Ensuring we really only accept free software is not a bikeshed color
> discussion, it is really important. And that's the same topic as using
> non-free CDN solution (see below).

It is a bikeshed, because you are injecting a debate over the freedoms of
Apache license vs. GPLv3 into this discussion. Which you do, on many
occasions. I respect this, but at some point it does hijack the original
intent of the thread. Which is now happening.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Dean Troyer
On Tue, Feb 16, 2016 at 12:17 PM, Sean Dague  wrote:

> Honestly, doing per API call version switching is probably going to end
> in tears. HTTP is stateless, so it's allowed, but it will end in tears
> of complexity as you need to self modify resources before passing them
> back. Or follow links that don't exist.
>

Maybe.  But any code that knows it is using a specific version already is
handling the special cases.  We are trading the speed of implementation for
the juggling of multiple versions, no matter if they are labelled
sequentially or via semver.  I've seen places that mixed Identity v2 and v3
because of the way the APIs worked and were more convenient.  Same problem,
bigger delta per API rev vs only one rev.

It's also worth looking at the changes we've actually been making here
> instead of theoretical examples. The amount of effort to make an
> application use 2.20 instead of 2.1 is pretty minimal.


Sure, but there is nothing in place to prevent that contrived example. The
first time an addition is made of what would have been an extension in the
past we will be there.  [and no, this is in no way a defense of extensions
;).

If Identity had micro-versioned their way between v2 and v3, would the other
projects have been able to convert faster due to not having to do it all at once?

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Dean Troyer
On Tue, Feb 16, 2016 at 11:51 AM, Amit Gandhi 
wrote:

> Poppy intends to be an abstraction API over the various CDNs available.
> We do not want to be in the business of building a CDN itself.
>

Specific to the Poppy discussion, I think this is another point that makes
Poppy not part of OpenStack: none of the services it wants to abstract are
part of OpenStack.  I came up with another example of a project that takes
the Compute API and translates it to gce or aws calls but we actually have
this situation with the ESX hypervisor driver and multiple Cinder backends
to commercial products.  But in those cases, the project is otherwise
useful to an OpenStack deployment that does not include those products.

Now for the actual thread topic of open core and non-free backends.  We
seem to be haggling over the declaration of one or more specific projects
as 'open core-like', where the real discussion is in defining the general
terms.  I have always thought of 'open core' in the Eucalyptus sense like
what partially spurred the creation of Nova in the first place.  The same
entity controlled both an 'open source' project and a commercial product
based on it, where certain features/capabilities were only available in the
non-free version.

What I think is important here is 'the same entity'.  This is where we need
to be concerned, specifically with organizations that attempt to extract
value from OpenStack yet hinder our advancement in return by holding back
contributions.  And this may be the one place where the 'viral' nature of
the GPL would have been useful to us in a business relationship. [[NO I do
not intend to open that can'o'worms, only to illustrate our lack of that
built-in mechanism to lessen the impact of open core]]

dt


-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] How are consumers/operators new to openstack supposed to know about upper-constraints?

2016-02-16 Thread Ian Cordasco
On Feb 16, 2016 11:04 AM, "Sean Dague"  wrote:
>
> On 02/16/2016 11:48 AM, Ian Cordasco wrote:
> >
> >
> > -Original Message-
> > From: Matt Riedemann 
> > Reply: Matt Riedemann 
> > Date: February 16, 2016 at 09:30:49
> > To: OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>, openstack-operat...@lists.openstack.org

> > Subject:  [Openstack-operators] How are consumers/operators new to
openstack supposed to know about upper-constraints?
> >
> >> We have a team just upgrading to Liberty and they are having problems.
> >> While running down their list of packages they are using, I noticed
they
> >> have os-brick 0.8.0 which is the latest version (from mitaka).
> >>
> >> However, os-brick in stable/liberty upper-constraints is at 0.6.0 [1].
> >>
> >> So while I don't think their immediate problems are due to using an
> >> untested version of os-brick on stable/liberty, they are obviously just
> >> picking up the latest versions of dependencies because they aren't
> >> capped in requirements. That could eventually bite them because there
> >> are things that don't work together in liberty depending on what
> >> versions you have [2].
> >>
> >> My main question is, how are we expecting consumers/deployers of
> >> openstack to know about the upper-constraints file? Where is that
> >> advertised in the manuals?
> >>
> >> There is nothing in the Liberty release notes [3].
> >>
> >> I'm sure there is probably something in the openstack/requirements repo
> >> devref, but I wouldn't expect a deployer to know that repo exists let
> >> alone to go off and read it's docs and understand how it applies to
them
> >> (a lot of openstack developers probably don't know about the reqs repo
> >> or what it does).
> >>
> >> Does the operator community have any tips or know something that I
> >> don't? I think ops people that have been around awhile are just aware
> >> because it's been coming for a few releases now so they are aware of
the
> >> magical unicorn and have sought out info on it, but what about new
> >> deployments?
> >>
> >> [1]
> >>
https://github.com/openstack/requirements/blob/0e8a4136b4e9e91293d46b99879c966e3bddd9bd/upper-constraints.txt#L181
> >> [2] https://bugs.launchpad.net/oslo.service/+bug/1529594
> >> [3] https://wiki.openstack.org/wiki/ReleaseNotes/Liberty
> >
> > This is actually a good question. I think some assumptions were made
about how people are deploying OpenStack. I think those assumptions are
along the lines of:
> >
> > - Operators are deploying with downstream packages (from Ubuntu, Red
Hat, etc.)
> > - Operators are using something like the Chef Cookbooks, Puppet
Modules, or the Ansible Playbooks that ideally handle all of this for them.
> >
> > I know OpenStack Ansible takes upper-constraints into consideration
when it's building wheel repositories for dependencies. I would guess that
the other deployment projects do something similar or also rest upon the
usage of downstream packages. I think we (the developers) tend to think
that anyone not using downstream packages is doing it wrong since they
handle dependency management for us (as presented to the end user).
> >
> > I'm not sure what the right solution will be because I would be
surprised if some more explicit form of upper-constraints present in
requirements of each project would be argued against as too much
work/specificity.
>
> Also, churn. We got rid of this model of narrow ranges in repositories
> for a reason, because it very quickly turns into an incompatible
> collision between projects / libraries. Maybe if this only applied to
> top level projects which could never be imported by others it would be
> ok, I don't know.
>
> > I also think this clearly demonstrates why upper caps (although
painful) are at least informational even when we have upper-constraints
protecting the gates.
>
> I think due to the limitations on pip and python packaging we've largely
> said (maybe too implicitly) that you have to have an installer layer in
> OpenStack, and it has to understand requirements at the install layer.
>
> That might be distro packages. That might be any of the install projects
> in the big tent (chef, puppet, ansible, fuel). That might be devstack.
> But it's definitely something between you and 'pip install nova'.

Ignoring, of course, the fact that unless you're taking the ansible
approach "pip install nova" wouldn't ever work. ;-)
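For operators who do install from source, the mechanism under discussion is
pip's constraints support, e.g. `pip install -c upper-constraints.txt os-brick`
against a checkout of the stable branch of openstack/requirements. A toy model
of what the constraints file does (package versions illustrative):

```python
# Toy model of "pip install -c upper-constraints.txt <pkg>": the
# constraints file caps the version pip may select without itself
# being a requirement. Versions below are illustrative.

def resolve(package, available, constraints):
    """Pick the newest available version not above the constraint."""
    cap = constraints.get(package)
    candidates = [v for v in available
                  if cap is None or v <= cap]   # tuple compare, e.g. (0, 6, 0)
    return max(candidates) if candidates else None

upper_constraints = {"os-brick": (0, 6, 0)}     # the stable/liberty pin
on_pypi = [(0, 4, 0), (0, 6, 0), (0, 8, 0)]     # 0.8.0 is from mitaka

print(resolve("os-brick", on_pypi, upper_constraints))  # -> (0, 6, 0)
print(resolve("os-brick", on_pypi, {}))                 # -> (0, 8, 0)
```

This is exactly the gap Matt describes: without the `-c` file in hand, a
deployer installing from PyPI silently gets the second answer, not the first.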
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Nominating Fernando Diaz for Barbican Core

2016-02-16 Thread Ade Lee
+1

On Mon, 2016-02-15 at 11:45 -0600, Douglas Mendizábal wrote:
> Hi All,
> 
> I would like to nominate Fernando Diaz for the Barbican Core team.
> Fernando has been an enthusiastic contributor since joining the
> Barbican team.  He is currently the most active non-core reviewer on
> Barbican projects for the last 90 days. [1]  He’s got an excellent
> eye
> for review and I think he would make an excellent addition to the
> team.
> 
> As a reminder to our current core reviewers, our Core Team policy is
> documented in the wiki. [2]  So please reply to this thread with your
> votes.
> 
> Thanks,
> - Douglas Mendizábal
> 
> [1] http://stackalytics.com/report/contribution/barbican-group/90
> [2] https://wiki.openstack.org/wiki/Barbican/CoreTeam
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Assaf Muller
On Tue, Feb 16, 2016 at 12:26 PM, Clark Boylan  wrote:
> On Tue, Feb 16, 2016, at 01:07 AM, Jordan Pittier wrote:
>> Hi list,
>> I understood we need to limit the number of tests and jobs that are run
>> for
>> each Tempest patch because our resources are not unlimited.
>>
>> In Tempest, we have 5 multinode experimental jobs:
>>
>> experimental-tempest-dsvm-multinode-full-dibtest
>> gate-tempest-dsvm-multinode-full
>> gate-tempest-dsvm-multinode-live-migration
>> gate-tempest-dsvm-neutron-multinode-full
>> gate-tempest-dsvm-neutron-dvr-multinode-full
>>
>> These jobs largely overlap with the non-multinode jobs. What about
>> tagging (with a python decorator) each test that really requires
>> multiple nodes, and only running those tests as part of the multinode
>> jobs?
>
> One of the goals I had was to hopefully replace the single node jobs
> with the multinode jobs because as you point out there is a lot of
> redundancy and 2 VMs < 3 VMs. One of the prerequisites for this to
> happen is to have an easy way to reproduce the multinode test envs using
> something like vagrant. I have been meaning to work on that this cycle
> but adding new cloud resources (and keeping existing resources happy)
> has taken priority.

These are not conflicting efforts, are they? We could attack it on
both fronts: send a patch that tags a dozen (?) or so tests with
'multinode' and run only those in the multinode jobs. You could
accomplish that almost immediately. In parallel, work on replacing the
single node jobs with multinode (and then change their test regex from
'multinode' back to full).
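As a sketch of the tagging idea: a decorator in the style of testtools/Tempest attributes could record a 'multinode' tag that a job's attribute-based test filter then matches, the same way Tempest already tags 'smoke' tests. The decorator name and test class below are illustrative assumptions, not actual Tempest code.

```python
# Illustrative sketch: record a 'multinode' attribute on a test so that
# attribute-based filtering can select it. A multinode job would select
# on this tag; single-node jobs would exclude it.

def multinode(func):
    """Tag a test so attribute-based filtering can select it."""
    attrs = getattr(func, '__testtools_attrs', set())
    func.__testtools_attrs = attrs | {'multinode'}
    return func

class TestLiveMigration:
    @multinode
    def test_live_migrate_server(self):
        pass

print(TestLiveMigration.test_live_migrate_server.__testtools_attrs)
```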

>
> Clark
>



Re: [openstack-dev] [kolla] Decision of how to manage stable/liberty from Kolla Midcycle

2016-02-16 Thread Sam Yaple
On Tue, Feb 16, 2016 at 6:15 PM, Steven Dake (stdake) wrote:

> Hey folks,
>
> We held a midcycle Feb 9th and 10th.  The full notes of the midcycle are
> here:
> https://etherpad.openstack.org/p/kolla-mitaka-midcycle
>
> We had 3 separate ~40 minute sessions on making stable stable.  The reason
> for so many sessions on this topic was that it took a long time to come to
> an agreement about the problem and solution.
>
> There are two major problems with stable:
> Stable is hard-pinned to 1.8.2 of docker.  Ansible 1.9.4 is the last
> version of Ansible in the 1 z series coming from Ansible.  Ansible 1.9.4
> docker module is totally busted with Docker 1.8.3 and later.
>
> Stable uses data containers.  Data containers used with Ansible can
> result, in some very limited instances (such as an upgrade of the data
> container image), in *data loss*.  We didn't really recognize this until
> recently.  We can't really fix Ansible to behave correctly with the data
> containers.
>

This point is not correct. This is not an issue with Ansible, but rather
with Docker and persistent data. The solution to this problem is Docker's
named volumes, which Docker has been moving toward and which were
released in Docker 1.9.


>
> The solution:
> Use the kolla-docker.py module to replace ansible's built in docker
> module.  This is not a fork of that module from Ansible's upstream so it
> has no GPLv3 licensing concerns.  Instead its freshly written code in
> master.  This allows the Kolla upstream to implement support for any
> version of docker we prefer.
>
> We will be making Docker 1.9 (and possibly 1.10, depending on the outcome
> of a thin-containers vote) the minimum version required to run
> stable/liberty.
>
> We will be replacing the data containers with named volumes.  Named
> volumes offer similar functionality (persistent data containment) via a
> different implementation.  They were introduced in Docker 1.9 because
> data containers have many shortcomings.
>
> This will require some rework on the playbooks.  Rather than backporting
> the 900+ patches that have entered master since liberty, we are going to
> surgically correct the problems with named volumes.  We suspect this work
> will take 4-6 weeks to complete and will be fewer than 15 patches on top
> of stable/liberty.  The numbers here are just estimates, it could be more
> or less, but on that order of magnitude.
>
> The above solution is what we decided we would go with, after nearly 3
> hours of debate ;)  If I got any of that wrong, please feel free to chime
> in for folks that were there.  Note there was a majority of core reviewers
> present, and nobody raised objection to this plan of activity, so I'd
> consider it voted and approved :)  There was not a majority approval for
> another proposal to backport thin containers for neutron which I will
> handle in a separate email.
>
> Going forward, my personal preference is that we make stable branches
> low-rate-of-change branches, rather than, as the name may imply, branches
> with a high rate of backports to fix problems.  We will have further
> design sessions about stable branch maintenance at the Austin ODS.
>
> Regards
> -steve
>
>
>
>
To add to this, this would be a y change to Kolla. So this release would be
a 1.1.0 release rather than a 1.0.1 release. y releases are not desired,
but in this case one would be needed to make the changes we propose.


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Sean Dague
On 02/16/2016 01:13 PM, Dean Troyer wrote:
> On Tue, Feb 16, 2016 at 8:34 AM, Andrew Laski  wrote:
> ...
>
>> It's easy enough to think that users will just read the docs and
>> carefully consider every version increment that they want to consume
>> but when they've been on version 2.7 for a while and a new thing
>> comes out in 2.35 that they want they need to fully digest the
>> implications of all 27 intervening versions purely through docs and
>> with the understanding that literally almost anything about the
>> semantics can have changed. So while I love the freedom that it
>> provides to developers I think it would be useful to have a small
>> set of constraints in place that helps users. Of course all of my
>> ideas have been duds so far and perhaps that's because I'm imagining
>> future scenarios that won't come to pass or that we don't care
>> about. But something has me concerned and I can't quite get my
>> finger on it.
> 
> 
> Contrived example alert:
> 
> API version 2.10 adds a pile of parameters to POST /foo/.
> API version 2.35 fixes a problem in GET /bar/details.
> 
> As a client, I might need the 2.35 fix before I have finished
> implementing the big new feature in 2.10 (and intervening) changes.  The
> clients will then use the GLOBAL_API_VERSION=2.7 as a default and use a
> local 2.35 version for the /bar requests.
> 
> This may get fun to manage over time, but allows the client to opt-in to
> new API features without having to swallow the entire thing.  It also
> increases the pressure to make sure the docs are up to snuff for each
> specific API bump.

Honestly, doing per-API-call version switching is probably going to end
in tears. HTTP is stateless, so it's allowed, but it will end in tears
of complexity as you need to self-modify resources before passing them
back, or follow links that don't exist.

It's also worth looking at the changes we've actually been making here
instead of theoretical examples. The amount of effort to make an
application use 2.20 instead of 2.1 is pretty minimal.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [kolla] Decision of how to manage stable/liberty from Kolla Midcycle

2016-02-16 Thread Steven Dake (stdake)
Hey folks,

We held a midcycle Feb 9th and 10th.  The full notes of the midcycle are here:
https://etherpad.openstack.org/p/kolla-mitaka-midcycle

We had 3 separate ~40 minute sessions on making stable stable.  The reason for 
so many sessions on this topic was that it took a long time to come to an 
agreement about the problem and solution.

There are two major problems with stable:
Stable is hard-pinned to 1.8.2 of docker.  Ansible 1.9.4 is the last version of 
Ansible in the 1 z series coming from Ansible.  Ansible 1.9.4 docker module is 
totally busted with Docker 1.8.3 and later.

Stable uses data containers.  Data containers used with Ansible can result, in 
some very limited instances (such as an upgrade of the data container image), 
in data loss.  We didn't really recognize this until recently.  We can't really 
fix Ansible to behave correctly with the data containers.

The solution:
Use the kolla-docker.py module to replace ansible's built in docker module.  
This is not a fork of that module from Ansible's upstream, so it has no GPLv3 
licensing concerns.  Instead it's freshly written code in master.  This allows 
the Kolla upstream to implement support for any version of docker we prefer.

We will be making Docker 1.9 (and possibly 1.10, depending on the outcome of a 
thin-containers vote) the minimum version required to run stable/liberty.

We will be replacing the data containers with named volumes.  Named volumes 
offer similar functionality (persistent data containment) via a different 
implementation.  They were introduced in Docker 1.9 because data containers 
have many shortcomings.
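As a rough illustration of the data-container-to-named-volume change in a playbook: instead of a data container mounted via --volumes-from, the service container mounts a named volume directly. The module and parameter names below are guesses for illustration, not the actual kolla-docker.py interface.

```yaml
# Illustrative only: mount a named volume (Docker >= 1.9) instead of
# sharing volumes from a separate data container.
- name: Start mariadb with a named volume
  kolla_docker:
    action: start_container
    name: mariadb
    image: "kollaglue/centos-binary-mariadb:liberty"
    volumes:
      - "mariadb_data:/var/lib/mysql"
```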

This will require some rework on the playbooks.  Rather than backporting the 
900+ patches that have entered master since liberty, we are going to surgically 
correct the problems with named volumes.  We suspect this work will take 4-6 
weeks to complete and will be fewer than 15 patches on top of stable/liberty.  
The numbers here are just estimates, it could be more or less, but on that 
order of magnitude.

The above solution is what we decided we would go with, after nearly 3 hours of 
debate ;)  If I got any of that wrong, please feel free to chime in for folks 
that were there.  Note there was a majority of core reviewers present, and 
nobody raised objection to this plan of activity, so I'd consider it voted and 
approved :)  There was not a majority approval for another proposal to backport 
thin containers for neutron which I will handle in a separate email.

Going forward, my personal preference is that we make stable branches 
low-rate-of-change branches, rather than, as the name may imply, branches with 
a high rate of backports to fix problems.  We will have further design sessions 
about stable branch maintenance at the Austin ODS.

Regards
-steve



Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Dean Troyer
On Tue, Feb 16, 2016 at 8:34 AM, Andrew Laski  wrote:
...

>  It's easy enough to think that users will just read the docs and
> carefully consider every version increment that they want to consume but
> when they've been on version 2.7 for a while and a new thing comes out in
> 2.35 that they want they need to fully digest the implications of all 27
> intervening versions purely through docs and with the understanding that
> literally almost anything about the semantics can have changed. So while I
> love the freedom that it provides to developers I think it would be useful
> to have a small set of constraints in place that helps users. Of course all
> of my ideas have been duds so far and perhaps that's because I'm imagining
> future scenarios that won't come to pass or that we don't care about. But
> something has me concerned and I can't quite get my finger on it.
>

Contrived example alert:

API version 2.10 adds a pile of parameters to POST /foo/.
API version 2.35 fixes a problem in GET /bar/details.

As a client, I might need the 2.35 fix before I have finished implementing
the big new feature in 2.10 (and intervening) changes.  The clients will
then use the GLOBAL_API_VERSION=2.7 as a default and use a local 2.35
version for the /bar requests.

This may get fun to manage over time, but allows the client to opt-in to
new API features without having to swallow the entire thing.  It also
increases the pressure to make sure the docs are up to snuff for each
specific API bump.
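Mechanically, the per-call opt-in described above comes down to choosing which microversion header to send on each request. A sketch (the header name is Nova's actual microversion header; the paths and versions are the contrived ones from the example, and the helper itself is hypothetical):

```python
# Sketch of per-request microversion selection: a global default, plus
# explicit opt-ins for endpoints that need a later fix.

DEFAULT_VERSION = "2.7"
PER_PATH_OVERRIDES = {"/bar/details": "2.35"}  # opt in to the 2.35 fix only here

def headers_for(path):
    """Build request headers, choosing the microversion for this path."""
    version = PER_PATH_OVERRIDES.get(path, DEFAULT_VERSION)
    return {"X-OpenStack-Nova-API-Version": version}

print(headers_for("/foo"))          # uses the global default, 2.7
print(headers_for("/bar/details"))  # opts in to the 2.35 fix
```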

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [puppet] Push Mitaka beta tag

2016-02-16 Thread Thomas Goirand
I wrote it on IRC, but I want to write it on this list too, since I am
one of the requesters.

On 02/16/2016 02:16 AM, David Moreau Simard wrote:
> So is it implied that a version "8.0.0b1" of a puppet module works
> with the "8.0.0b1" of its parent project?

If it were possible to somehow have the tags correlated to the release
cycle of server projects, it'd be perfect. But I personally don't care
*that* much for *this* release, since these tags will be a first. Let's
get this done gradually.

So let's say b1 tags are cut tomorrow, matching what's in trunk right
now; that'd be awesome already.

It'd be even nicer if we had that, and then a b3 tag as soon as possible
after the b3 of the server components. Then no b2 tags? Who cares, as
long as all parties involved are aware of what's happening and what
these tags mean.

On 02/16/2016 02:16 AM, David Moreau Simard wrote:
> Would there be times where the puppet module's tagged release would
> lag behind upstream (say, a couple days because we need to merge new
> things)

This is expected, and perfectly fine!

> and if so, does tagging lose some of its value?

Absolutely not. They would have value for everyone: Debian, Mirantis
OpenStack and RDO, as well as Puppet people, to whom we will be able to
report failures earlier with a reference point in mind (ie: reporting
that we had an issue with the Mitaka b3 version of puppet-FOO).

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Amit Gandhi
OpenCDN is an abandoned project, although there have been a few attempts at 
creating one under that name.

As far as the Poppy team can tell, there is currently no viable open source 
CDN available.  If there were, we would be happy to add it as a supported 
driver.

Also, even if the CDN software itself were open, the true value of a CDN comes 
from the distributed global network of servers it offers and its performance in 
serving requests, not just the features/API offered.

Poppy intends to be an abstraction API over the various CDNs available.  We do 
not want to be in the business of building a CDN itself.


Amit.



On Feb 16, 2016, at 12:22 PM, Doug Hellmann wrote:

> Excerpts from Sean M. Collins's message of 2016-02-16 07:15:34 +:
>> Thomas Goirand wrote:
>>> Oh, that, and ... not using CassandraDB. And yes, this thread is a good
>>> place to have this topic. I'm not sure who replied to me this thread
>>> wasn't the place to discuss it: I respectfully disagree, since it's
>>> another major blocker, IMO as important, if not more, as using a free
>>> software CDN solution.
>>
>> Let's handle the policy implications discussed in this thread before we
>> dive into the "don't use this component that I dislike" bikeshed.
>> Reading the thread, it appears that we've made good progress on building
>> consensus towards having Poppy consider an open source CDN as the
>> "reference implementation" (to use some Neutron parlance).
>>
>> Then we can bikeshed about how good/bad the components used in the
>> reference implementation are. Later. The point being, there is an open
>> source solution that will be used to flesh out a true vendor-neutral API
>> (as I understand Mike Perez's position, and agree with!).
>
> Is there? I thought the point was OpenCDN isn't actually usable. Maybe
> someone from the Poppy team can provide more details about that.
>
> Doug




[openstack-dev] Call for papers talk gone astray?

2016-02-16 Thread Bailey, Darragh
Hi,


I submitted a talk for the upcoming summit (or at least I thought I did
everything required for it to be included); however, it appears not to
have shown up in the voting process. The title was "Practical Ansible
hacks used to deploy an OpenStack solution".

Looking around, I've discovered that there is a problem with my profile
on the OpenStack site, in that the public profile link,
https://www.openstack.org/community/members/profile/867, returns a 404
for me. Occam's razor suggests the two are related.


Is anyone able to help locate where the talk disappeared to, as well as
fix my profile, before the voting closes?

-- 
Regards,
Darragh Bailey
IRC: electrofelix
"Nothing is foolproof to a sufficiently talented fool" - Unknown






Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Thomas Goirand
On 02/16/2016 03:15 PM, Sean M. Collins wrote:
> Thomas Goirand wrote:
>> Oh, that, and ... not using CassandraDB. And yes, this thread is a good
>> place to have this topic. I'm not sure who replied to me this thread
>> wasn't the place to discuss it: I respectfully disagree, since it's
>> another major blocker, IMO as important, if not more, as using a free
>> software CDN solution.
> 
> Let's handle the policy implications discussed in this thread before we
> dive into the "don't use this component that I dislike" bikeshed.

s/I dislike/is not free software/ [*]

It's not a matter of taste. Having Poppy require a non-free component,
even indirectly (ie: the Oracle JVM that CassandraDB needs), makes it
non-free.

Ensuring we really only accept free software is not a bikeshed color
discussion, it is really important. And that's the same topic as using
non-free CDN solution (see below).

On 02/11/2016 08:03 PM, Flavio Percoco wrote:
> On 11/02/16 17:31 +0800, Thomas Goirand wrote:
>> I'm not sure who replied to me this thread
>> wasn't the place to discuss it: I respectfully disagree
>
> This thread is to talk about the *open core* issue.

Yeah, correct: open core. So we're discussing the freeness of software
we can accept. That's right on topic to me. The dependencies, whether
remote (ie: commercial CDNs) *OR* local (CassandraDB, and therefore the
Oracle JVM), are both non-free. And in fact, local dependencies, which
someone would actually need to install on their computer, are even more
important than the remote ones.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Clark Boylan
On Tue, Feb 16, 2016, at 01:07 AM, Jordan Pittier wrote:
> Hi list,
> I understood we need to limit the number of tests and jobs that are run
> for
> each Tempest patch because our resources are not unlimited.
> 
> In Tempest, we have 5 multinode experimental jobs:
> 
> experimental-tempest-dsvm-multinode-full-dibtest
> gate-tempest-dsvm-multinode-full
> gate-tempest-dsvm-multinode-live-migration
> gate-tempest-dsvm-neutron-multinode-full
> gate-tempest-dsvm-neutron-dvr-multinode-full
> 
> These jobs largely overlap with the non-multinode jobs. What about
> tagging (with a python decorator) each test that really requires
> multiple nodes, and only running those tests as part of the multinode
> jobs?

One of the goals I had was to hopefully replace the single node jobs
with the multinode jobs because as you point out there is a lot of
redundancy and 2 VMs < 3 VMs. One of the prerequisites for this to
happen is to have an easy way to reproduce the multinode test envs using
something like vagrant. I have been meaning to work on that this cycle
but adding new cloud resources (and keeping existing resources happy)
has taken priority.
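For what Clark describes, a reproduction environment could start from something as small as the following Vagrantfile. The box name, IPs, and sizing are pure assumptions for illustration; this is a sketch, not an existing infra tool.

```ruby
# Illustrative two-node Vagrant environment approximating a multinode
# test setup: one primary and one subnode on a private network.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  { "primary" => "192.168.33.10", "subnode" => "192.168.33.11" }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 8192
      end
    end
  end
end
```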

Clark



Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-16 Thread Doug Hellmann
Excerpts from Sean M. Collins's message of 2016-02-16 07:15:34 +:
> Thomas Goirand wrote:
> > Oh, that, and ... not using CassandraDB. And yes, this thread is a good
> > place to have this topic. I'm not sure who replied to me this thread
> > wasn't the place to discuss it: I respectfully disagree, since it's
> > another major blocker, IMO as important, if not more, as using a free
> > software CDN solution.
> 
> Let's handle the policy implications discussed in this thread before we
> dive into the "don't use this component that I dislike" bikeshed.
> Reading the thread, it appears that we've made good progress on building
> consensus towards having Poppy consider an open source CDN as the
> "reference implementation" (to use some Neutron parlance).
> 
> Then we can bikeshed about how good/bad the components used in the
> reference implementation are. Later. The point being, there is an open
> source solution that will be used to flesh out a true vendor-neutral API
> (as I understand Mike Perez's position, and agree with!).

Is there? I thought the point was OpenCDN isn't actually usable. Maybe
someone from the Poppy team can provide more details about that.

Doug



Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-16 Thread Walter A. Boring IV

On 02/12/2016 04:35 PM, John Griffith wrote:
> On Thu, Feb 11, 2016 at 10:31 AM, Walter A. Boring IV wrote:
>
>> There seem to be a few discussions going on here wrt detaches.  One is
>> what to do on the Nova side with calling os-brick's disconnect_volume,
>> and also when to or not to call Cinder's terminate_connection and
>> detach.
>>
>> My original post was simply to discuss a mechanism to try and figure
>> out the first problem.  When should nova call brick to remove the
>> local volume, prior to calling Cinder to do something.
>>
>> Nova needs to know if it's safe to call disconnect_volume or not.
>> Cinder already tracks each attachment, and it can return the
>> connection_info for each attachment with a call to
>> initialize_connection.  If 2 of those connection_info dicts are the
>> same, it's a shared volume/target.  Don't call disconnect_volume if
>> there are any more of those left.
>>
>> On the Cinder side of things, if terminate_connection, detach is
>> called, the volume manager can find the list of attachments for a
>> volume, and compare that to the attachments on a host.  The problem
>> is, Cinder doesn't track the host along with the instance_uuid in the
>> attachments table.  I plan on allowing that as an API change after
>> microversions lands, so we know how many times a volume is
>> attached/used on a particular host.  The driver can decide what to do
>> with it at terminate_connection, detach time.  This helps account for
>> the differences in each of the Cinder backends, which we will never
>> get all aligned to the same model.  Each array/backend handles
>> attachments differently and only the driver knows if it's safe to
>> remove the target or not, depending on how many attachments/usages it
>> has on the host itself.  This is the same thing as a reference
>> counter, which we don't need, because we have the count in the
>> attachments table, once we allow setting the host and the
>> instance_uuid at the same time.
>
> Not trying to drag this out or be difficult, I promise. But this seems
> like it is in fact the same problem, and I'm not exactly following; if
> you store the info on the compute side during the attach phase, why
> would you need/want to then create a split brain scenario and have
> Cinder do any sort of tracking on the detach side of things?
>
> Like the earlier posts said, just don't call terminate_connection if
> you don't want to really terminate the connection?  I'm sorry, I'm
> just not following the logic of why Cinder should track this and
> interfere with things?  It's supposed to be providing a service to
> consumers and "do what it's told" even if it's told to do the wrong
> thing.

The only reason to store the connector information on the Cinder
attachments side is for the few use cases where there is no way to get
that connector any more, such as nova evacuate and force detach, where
nova has no information about the original attachment because the
instance is gone.  Cinder backends still need the connector at
terminate_connection time, to find the right exports/targets to remove.
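The check Walt describes, only calling os-brick's disconnect_volume when no other attachment on the host shares the same target, can be sketched as follows. The function and field names are illustrative, not the real Nova/Cinder code.

```python
# Sketch: before calling os-brick's disconnect_volume, make sure no
# other attachment on this host shares the same connection_info
# (i.e. the same target/export).

def safe_to_disconnect(detaching, remaining):
    """True when no remaining attachment on the host shares the target."""
    return all(info != detaching for info in remaining)

shared = {"target_iqn": "iqn.2016-02.org.example:vol1", "target_lun": 0}
print(safe_to_disconnect(shared, []))        # last user of the target: True
print(safe_to_disconnect(shared, [shared]))  # target still shared: False
```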


Walt


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Clint Byrum
Excerpts from Andrew Laski's message of 2016-02-16 06:34:53 -0800:
> 
> On Tue, Feb 16, 2016, at 07:54 AM, Alex Xu wrote:
> >
> >
> > 2016-02-16 19:53 GMT+08:00 Sean Dague :
> >> On 02/12/2016 03:55 PM, Andrew Laski wrote:
> >>> Starting a new thread to continue a thought that came up in
> >>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html.
> >>> The Nova API microversion framework allows for backwards compatible and
> >>> backwards incompatible changes but there is no way to programmatically
> >>> distinguish the two. This means that as a user of the API I need to
> >>> understand every change between the version I'm using now and a new
> >>> version I would like to move to in case an intermediate version changes
> >>> default behaviors or removes something I'm currently using.
> >>>
> >>> I would suggest that a more user friendly approach would be to
> >>> distinguish the two types of changes. Perhaps something like 2.x.y where
> >>> x is bumped for a backwards incompatible change and y is still
> >>> monotonically increasing regardless of bumps to x. So if the current
> >>> version is 2.2.7 a new backwards compatible change would bump to 2.2.8
> >>> or a new backwards incompatible change would bump to 2.3.8. As a user
> >>> this would allow me to fairly freely bump the version I'm consuming
> >>> until x changes at which point I need to take more care in moving to a
> >>> new version.
> >>>
> >>> Just wanted to throw the idea out to get some feedback. Or perhaps this
> >>> was already discussed and dismissed when microversions were added and I
> >>> just missed it.
> >>
> >> Please no.
> >>
> >> We specifically stated many times that microversions aren't semver. Each
> >> version is just that.
> >>
> >> Semver only makes sense when you are always talking to one installation,
> >> and the version numbers can only increase. When your code retargets to
> >> multiple installations version numbers can very easily go backwards. So
> >> unless a change is compatible forward and backwards, it's a breaking
> >> change for someone.
> >
> > indeed, learned this point.
> 
> Fair enough, I wasn't thinking a lot about moving between installations
> just that we've hidden information within one installation.
> 
> Since any change except one that is backwards and forwards compatible is
> a breaking change for users of multiple clouds what is essentially being
> said is that we have a new API with every microversion. Given that I
> wonder if we shouldn't make a stronger statement that the API differs,
> as in why even have a 2. prefix which implies that 2.x has some relation
> to 2.x+1 when it doesn't.
> 
> It was mentioned elsewhere in the thread that we have a hard time
> knowing what's going to end up being compatible or not before it's
> released. This seems like something we should be able to determine and
> indicate somehow, even just through docs, otherwise we're passing that
> burden on to users to determine for themselves.
> 
> I very much like that microversions have enabled development to move
> forward on the API without the mess of extensions that we had
> previously. I fear that we have no real measurement of the cost of
> consuming the API under this new scheme. It's easy enough to think that
> users will just read the docs and carefully consider every version
> increment that they want to consume but when they've been on version 2.7
> for a while and a new thing comes out in 2.35 that they want they need
> to fully digest the implications of all 27 intervening versions purely
> through docs and with the understanding that literally almost anything
> about the semantics can have changed. So while I love the freedom that
> it provides to developers I think it would be useful to have a small set
> of constraints in place that helps users. Of course all of my ideas have
> been duds so far and perhaps that's because I'm imagining future
> scenarios that won't come to pass or that we don't care about. But
> something has me concerned and I can't quite get my finger on it.
> 

It's a trade-off I've wrestled with before, but I think Sean is right,
and the reasons for not doing semver and making compatibility judgements
are still valid. As a user, I would like to have a definitive "this is
how I talk to Nova" version, and that is provided. What it means for
long term viability of that version, I don't know.

What I do think one could _absolutely_ do to build on top of the current
microversion guarantee is write a more pedantic test suite that tries
to use the latest microversion with the previous working microversion's
semantics. On break, that would flag that version for a doc change
and a new version fork of said test suite. Doing that would provide a
monotonically advancing series of microversion windows that represent
compatibility with each other for at least 
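That pedantic-suite idea could be mechanized. As a rough illustration (all names invented, not an existing Tempest or Nova test): replay the same request at adjacent microversions and split the version history into windows whose responses stay mutually compatible.

```python
# Sketch of a "pedantic" compatibility check: a new microversion stays in
# the current window only if every field of the previous version's response
# survives with the same type. Entirely illustrative.

def semantics_preserved(old_resp, new_resp):
    """True if every field of the old response survives, with the same type,
    in the new response (extra fields in the new response are allowed)."""
    for key, value in old_resp.items():
        if key not in new_resp:
            return False          # field removed -> incompatible
        if type(new_resp[key]) is not type(value):
            return False          # type changed -> incompatible
    return True

def compatibility_windows(responses_by_version):
    """Split an ordered {version: response} map into runs of versions that
    are mutually compatible: the 'monotonically advancing windows'.
    Assumes the dict is ordered oldest-to-newest (Python 3.7+ dicts)."""
    windows, current, prev = [], [], None
    for version, resp in responses_by_version.items():
        if prev is not None and not semantics_preserved(prev, resp):
            windows.append(current)   # break detected: close the window
            current = []
        current.append(version)
        prev = resp
    if current:
        windows.append(current)
    return windows
```

On a break, the suite would flag that version for a doc change and fork, exactly as described above.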

Re: [openstack-dev] [Openstack-operators] How are consumers/operators new to openstack supposed to know about upper-constraints?

2016-02-16 Thread Sean Dague
On 02/16/2016 11:48 AM, Ian Cordasco wrote:
>  
> 
> -Original Message-
> From: Matt Riedemann 
> Reply: Matt Riedemann 
> Date: February 16, 2016 at 09:30:49
> To: OpenStack Development Mailing List (not for usage questions) 
> , openstack-operat...@lists.openstack.org 
> 
> Subject:  [Openstack-operators] How are consumers/operators new to openstack 
> supposed to know about upper-constraints?
> 
>> We have a team just upgrading to Liberty and they are having problems.
>> While running down their list of packages they are using, I noticed they
>> have os-brick 0.8.0 which is the latest version (from mitaka).
>>  
>> However, os-brick in stable/liberty upper-constraints is at 0.6.0 [1].
>>  
>> So while I don't think their immediate problems are due to using an
>> untested version of os-brick on stable/liberty, they are obviously just
>> picking up the latest versions of dependencies because they aren't
>> capped in requirements. That could eventually bite them because there
>> are things that don't work together in liberty depending on what
>> versions you have [2].
>>  
>> My main question is, how are we expecting consumers/deployers of
>> openstack to know about the upper-constraints file? Where is that
>> advertised in the manuals?
>>  
>> There is nothing in the Liberty release notes [3].
>>  
>> I'm sure there is probably something in the openstack/requirements repo
>> devref, but I wouldn't expect a deployer to know that repo exists let
>> alone to go off and read its docs and understand how it applies to them
>> (a lot of openstack developers probably don't know about the reqs repo
>> or what it does).
>>  
>> Does the operator community have any tips or know something that I
>> don't? I think ops people who have been around a while are just aware
>> because it's been coming for a few releases now, so they are aware of the
>> magical unicorn and have sought out info on it, but what about new
>> deployments?
>>  
>> [1]
>> https://github.com/openstack/requirements/blob/0e8a4136b4e9e91293d46b99879c966e3bddd9bd/upper-constraints.txt#L181
>>   
>> [2] https://bugs.launchpad.net/oslo.service/+bug/1529594
>> [3] https://wiki.openstack.org/wiki/ReleaseNotes/Liberty
> 
> This is actually a good question. I think some assumptions were made about 
> how people are deploying OpenStack. I think those assumptions are along the 
> lines of:
> 
> - Operators are deploying with downstream packages (from Ubuntu, Red Hat, 
> etc.)
> - Operators are using something like the Chef Cookbooks, Puppet Modules, or 
> the Ansible Playbooks that ideally handle all of this for them.
> 
> I know OpenStack Ansible takes upper-constraints into consideration when it's 
> building wheel repositories for dependencies. I would guess that the other 
> deployment projects do something similar or also rest upon the usage of 
> downstream packages. I think we (the developers) tend to think that anyone 
> not using downstream packages is doing it wrong since they handle dependency 
> management for us (as presented to the end user).
> 
> I'm not sure what the right solution will be because I would be surprised if 
> some more explicit form of upper-constraints present in requirements of each 
> project would be argued against as too much work/specificity.

Also, churn. We got rid of this model of narrow ranges in repositories
for a reason, because it very quickly turns into an incompatible
collision between projects / libraries. Maybe if this only applied to
top level projects which could never be imported by others it would be
ok, I don't know.

> I also think this clearly demonstrates why upper caps (although painful) are 
> at least informational even when we have upper-constraints protecting the 
> gates.

I think due to the limitations on pip and python packaging we've largely
said (maybe too implicitly) that you have to have an installer layer in
OpenStack, and it has to understand requirements at the install layer.

That might be distro packages. That might be any of the install projects
in the big tent (chef, puppet, ansible, fuel). That might be devstack.
But it's definitely something between you and 'pip install nova'.
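As a rough illustration of what that installer-layer responsibility amounts to (a sketch, not any real tool; upper-constraints entries are `name===version` pins, possibly with environment markers), compare what is installed against the constraints file for the release:

```python
# Sketch: parse upper-constraints.txt and flag installed packages that are
# newer than the version the release was tested with (like os-brick 0.8.0
# vs the liberty pin of 0.6.0 in this thread).

def parse_constraints(text):
    """Return {name: version} from upper-constraints-style 'name===version'
    lines; comments and environment markers (';...') are dropped."""
    pins = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        name, _, version = line.partition("===")
        if not version:
            name, _, version = line.partition("==")
        if version:
            pins[name.strip().lower()] = version.split(";")[0].strip()
    return pins

def violations(pins, installed):
    """Return {name: (installed, pinned)} for packages newer than the pin.
    Versions compare as dotted integer tuples, enough for this sketch."""
    def key(v):
        return tuple(int(p) for p in v.split(".") if p.isdigit())
    return {name: (have, pins[name])
            for name, have in installed.items()
            if name in pins and key(have) > key(pins[name])}
```

pip itself can enforce the same thing at install time with `pip install -c upper-constraints.txt <package>`, which is essentially what devstack and the deployment projects do on the consumer's behalf.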

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Alarm generation from a sequence of events

2016-02-16 Thread Pradip Mukhopadhyay
Thanks, Roland. Makes sense. I agree.


Just tossing out an idea: is there a way to validate a temporal causal
relationship among events, such that e1, followed by e2, followed by e3,
*implies* that a condition C passes or fails, and generates an alarm?


Thanks,
Pradip
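For what it's worth, the e1 → e2 → e3 idea can be sketched as a small state machine over the event stream (purely illustrative, not Monasca code; the event names, window, and condition C are all made up, and the sketch assumes distinct event names in the pattern):

```python
# Sketch: alarm when events e1, e2, e3 arrive in order within a time
# window and a condition C holds over the matched events.

class SequenceAlarm:
    def __init__(self, pattern, window_seconds, condition):
        self.pattern = pattern        # e.g. ["e1", "e2", "e3"]
        self.window = window_seconds
        self.condition = condition    # callable(matched_events) -> bool
        self._matched = []            # events matched so far

    def feed(self, name, timestamp, payload=None):
        """Feed one event; return True when the full pattern has matched
        within the window and the condition C holds."""
        expected = self.pattern[len(self._matched)]
        if name != expected:
            return False
        if self._matched and timestamp - self._matched[0][1] > self.window:
            self._matched = []        # window expired, start over
            if name != self.pattern[0]:
                return False
        self._matched.append((name, timestamp, payload))
        if len(self._matched) == len(self.pattern):
            events, self._matched = self._matched, []
            return self.condition(events)
        return False
```

Complex-event-processing engines do this much more robustly, but the shape of the problem is just ordered matching within a time window.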



On Tue, Feb 16, 2016 at 8:52 PM, Hochmuth, Roland M  wrote:

> Hi Pradip, The focus of that blueprint is to create metrics from logs in
> the LogStash component and then publish them to the Kafka metrics topic.
> The Monasca Threshold Engine can then be used to alarm on the metrics. For
> example, if the number of errors in a log file exceeds some amount, alarm
> and send a notification.
>
> Regards --Roland
>
> From: Pradip Mukhopadhyay
> Reply-To: OpenStack List
> Date: Monday, February 15, 2016 at 9:58 AM
> To: OpenStack List
> Subject: [openstack-dev] [Monasca] Alarm generation from a sequence of
> events
>
> Hello,
>
>
> We come across the following interesting BP in the last weekly meeting:
> https://blueprints.launchpad.net/monasca/+spec/alarmsonlogs
>
>
> Understood how the non-periodic nature of log events is to be taken care of
> (by introducing period = -1 in value-meta).
>
>
> Just wondering: would it be possible to use this to satisfy the following
> use case?
> Given a causal or non-causal sequence of events a, b, c, can we determine
> that a condition 'p' has occurred and alarm on that?
>
>
>
> Thanks,
> Pradip
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Glance Image signing and verification

2016-02-16 Thread Benjamin, Bruce P.
All,
Here are the operations guide instructions, currently in review, for adding
signed images and configuring Nova to automatically verify the signature
before booting an image: https://review.openstack.org/#/c/245886/. These
instructions are more up to date than the ones on the etherpad. Please take
a look. Thanks.

Bruce

> On 2/11/16 13:51:08 UTC 2016 Nikhil Komawar wrote:
> Hi Pankaj,
>
> Here's a example instruction set for that feature.
>
> https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
>
> Hope it helps.
>
>>On 2/11/16 8:45 AM, Pankaj Mishra wrote:
>>
>> Hi,
>>
>>
>>
>> I am new to OpenStack and I want to create an image through the glance
>> CLI. I am referring to the blueprint
>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>> and I am using the command below to create the image. What are the steps
>> for Glance image signing and verification using the glance CLI?
>>
>>
>>
>> glance --os-image-api-version 2 image-create [--architecture <ARCHITECTURE>]
>> [--protected [True|False]] [--name <NAME>]
>> [--instance-uuid <INSTANCE_UUID>]
>> [--min-disk <MIN_DISK>] [--visibility <VISIBILITY>]
>> [--kernel-id <KERNEL_ID>]
>> [--tags <TAGS> [<TAGS> ...]]
>> [--os-version <OS_VERSION>]
>> [--disk-format <DISK_FORMAT>] [--self <SELF>]
>> [--os-distro <OS_DISTRO>] [--id <ID>]
>> [--owner <OWNER>] [--ramdisk-id <RAMDISK_ID>]
>> [--min-ram <MIN_RAM>]
>> [--container-format <CONTAINER_FORMAT>]
>> [--property 
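For readers trying this out, the signing half of the workflow can be sketched with the `cryptography` library. This is a sketch under assumptions: RSA-PSS with SHA-256 as in the spec, and a made-up certificate UUID standing in for the Barbican reference. Note that the Liberty-era implementation signed the image checksum rather than the image data, so check the instructions for your release.

```python
import base64

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def sign_image(image_data, private_key):
    """Return the base64 signature expected in the img_signature property."""
    return base64.b64encode(
        private_key.sign(image_data, PSS, hashes.SHA256())).decode()

# Demo key pair; a real deployment keeps the certificate in the key manager
# (Barbican) and passes its UUID below.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"fake image payload"
properties = {
    "img_signature": sign_image(data, key),
    "img_signature_hash_method": "SHA-256",
    "img_signature_key_type": "RSA-PSS",
    "img_signature_certificate_uuid": "11111111-2222-3333-4444-555555555555",
}
```

These properties are what gets passed via the repeated `--property` arguments of the image-create command above; with verification enabled, Nova checks them against the image before boot.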

[openstack-dev] How are consumers/operators new to openstack supposed to know about upper-constraints?

2016-02-16 Thread Matt Riedemann
We have a team just upgrading to Liberty and they are having problems. 
While running down their list of packages they are using, I noticed they 
have os-brick 0.8.0 which is the latest version (from mitaka).


However, os-brick in stable/liberty upper-constraints is at 0.6.0 [1].

So while I don't think their immediate problems are due to using an 
untested version of os-brick on stable/liberty, they are obviously just 
picking up the latest versions of dependencies because they aren't 
capped in requirements.  That could eventually bite them because there
are things that don't work together in liberty depending on what 
versions you have [2].


My main question is, how are we expecting consumers/deployers of 
openstack to know about the upper-constraints file?  Where is that 
advertised in the manuals?


There is nothing in the Liberty release notes [3].

I'm sure there is probably something in the openstack/requirements repo 
devref, but I wouldn't expect a deployer to know that repo exists let 
alone to go off and read its docs and understand how it applies to them 
(a lot of openstack developers probably don't know about the reqs repo 
or what it does).


Does the operator community have any tips or know something that I
don't? I think ops people who have been around a while are just aware
because it's been coming for a few releases now, so they are aware of the
magical unicorn and have sought out info on it, but what about new 
deployments?


[1] 
https://github.com/openstack/requirements/blob/0e8a4136b4e9e91293d46b99879c966e3bddd9bd/upper-constraints.txt#L181

[2] https://bugs.launchpad.net/oslo.service/+bug/1529594
[3] https://wiki.openstack.org/wiki/ReleaseNotes/Liberty

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Alarm generation from a sequence of events

2016-02-16 Thread Hochmuth, Roland M
Hi Pradip, The focus of that blueprint is to create metrics from logs in the 
LogStash component and then publish them to the Kafka metrics topic. The 
Monasca Threshold Engine can then be used to alarm on the metrics. For example, 
if the number of errors in a log file exceeds some amount, alarm and send a 
notification.

Regards --Roland

From: Pradip Mukhopadhyay
Reply-To: OpenStack List
Date: Monday, February 15, 2016 at 9:58 AM
To: OpenStack List
Subject: [openstack-dev] [Monasca] Alarm generation from a sequence of events

Hello,


We come across the following interesting BP in the last weekly meeting: 
https://blueprints.launchpad.net/monasca/+spec/alarmsonlogs


Understood how the non-periodic nature of log events is to be taken care of
(by introducing period = -1 in value-meta).


Just wondering: would it be possible to use this to satisfy the following
use case?
Given a causal or non-causal sequence of events a, b, c, can we determine
that a condition 'p' has occurred and alarm on that?



Thanks,
Pradip



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Magnum] Operation for COE

2016-02-16 Thread Hongbin Lu
Wanghua,

Please add your requests to the midcycle agenda [1], or bring them up in the
team meeting under open discussion. We can discuss them if the agenda allows.

[1] https://etherpad.openstack.org/p/magnum-mitaka-midcycle-topics

Best regards,
Hongbin

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: February-16-16 1:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack][Magnum] Operation for COE

Hi all,

Should we add some operational functions for COEs in Magnum? For example,
collecting logs, upgrading a COE, and modifying COE configuration. I think
these features are very important in production.

Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Sean Dague
On 02/16/2016 09:34 AM, Andrew Laski wrote:
>  
>  
>  
> On Tue, Feb 16, 2016, at 07:54 AM, Alex Xu wrote:
>>  
>>  
>> 2016-02-16 19:53 GMT+08:00 Sean Dague:
>>
>> On 02/12/2016 03:55 PM, Andrew Laski wrote:
>> > Starting a new thread to continue a thought that came up in
>> > 
>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html.
>> > The Nova API microversion framework allows for backwards compatible and
>> > backwards incompatible changes but there is no way to programmatically
>> > distinguish the two. This means that as a user of the API I need to
>> > understand every change between the version I'm using now and a new
>> > version I would like to move to in case an intermediate version changes
>> > default behaviors or removes something I'm currently using.
>> >
>> > I would suggest that a more user friendly approach would be to
>> > distinguish the two types of changes. Perhaps something like 2.x.y where
>> > x is bumped for a backwards incompatible change and y is still
>> > monotonically increasing regardless of bumps to x. So if the current
>> > version is 2.2.7 a new backwards compatible change would bump to 2.2.8
>> > or a new backwards incompatible change would bump to 2.3.8. As a user
>> > this would allow me to fairly freely bump the version I'm consuming
>> > until x changes at which point I need to take more care in moving to a
>> > new version.
>> >
>> > Just wanted to throw the idea out to get some feedback. Or perhaps this
>> > was already discussed and dismissed when microversions were added and I
>> > just missed it.
>>
>> Please no.
>>  
>> We specifically stated many times that microversions aren't
>> semver. Each
>> version is just that.
>>  
>> Semver only makes sense when you are always talking to one
>> installation,
>> and the version numbers can only increase. When your code retargets to
>> multiple installations version numbers can very easily go
>> backwards. So
>> unless a change is compatible forward and backwards, it's a breaking
>> change for someone.
>>
>>  
>> indeed, learned this point.
>  
> Fair enough, I wasn't thinking a lot about moving between installations
> just that we've hidden information within one installation.
>  
> Since any change except one that is backwards and forwards compatible is
> a breaking change for users of multiple clouds what is essentially being
> said is that we have a new API with every microversion. Given that I
> wonder if we shouldn't make a stronger statement that the API differs,
> as in why even have a 2. prefix which implies that 2.x has some relation
> to 2.x+1 when it doesn't.

That was in the original conversations, to make it a monotonically
increasing single number. It got shot down somewhere in the middle of it
all, and I can't remember why now.

> It was mentioned elsewhere in the thread that we have a hard time
> knowing what's going to end up being compatible or not before it's
> released. This seems like something we should be able to determine and
> indicate somehow, even just through docs, otherwise we're passing that
> burden on to users to determine for themselves.
>  
> I very much like that microversions have enabled development to move
> forward on the API without the mess of extensions that we had
> previously. I fear that we have no real measurement of the cost of
> consuming the API under this new scheme. It's easy enough to think that
> users will just read the docs and carefully consider every version
> increment that they want to consume but when they've been on version 2.7
> for a while and a new thing comes out in 2.35 that they want, they need
> to fully digest the implications of all 27 intervening versions purely
> through docs and with the understanding that literally almost anything
> about the semantics can have changed. So while I love the freedom that
> it provides to developers I think it would be useful to have a small set
> of constraints in place that helps users. Of course all of my ideas have
> been duds so far and perhaps that's because I'm imagining future
> scenarios that won't come to pass or that we don't care about. But
> something has me concerned and I can't quite get my finger on it.

I definitely understand that concern. And I also don't think we've
really come up with a good way of getting docs to users yet (which is
not hugely different than the extension problem before).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] More midcycle details

2016-02-16 Thread Jim Rollenhagen
We are live! Please do join :)

// jim

On Thu, Feb 11, 2016 at 05:20:27AM -0800, Jim Rollenhagen wrote:
> Hi all,
> 
> Our midcycle is next week! Here's everything you need to know.
> 
> First and foremost, please RSVP on the etherpad, and add any topics
> (with your name!) that you'd like to discuss.
> https://etherpad.openstack.org/p/ironic-mitaka-midcycle
> 
> Secondly, here are the time slots we'll be meeting at. All times UTC.
> February 16 15:00-20:00
> February 17 00:00-04:00
> February 17 15:00-20:00
> February 18 00:00-04:00
> February 18 15:00-20:00
> February 19 00:00-04:00
> 
> Our regular weekly meeting for February 15 is cancelled.
> 
> Communications: we'll be using VOIP, IRC, and etherpad.
> 
> The VOIP system is provided by the infra team. We'll be in room .
> More details: https://wiki.openstack.org/wiki/Infrastructure/Conferencing
> You may use a telephone or any SIP client to connect.
> We will not be officially recording the audio; however do note that I
> can't stop anyone from doing so.
> 
> We'll be using #openstack-sprint on Freenode for the main IRC channel
> for the meetup. Most of us will also be in #openstack-ironic.
> Please note that these channels are publicly logged.
> 
> The main etherpad for the meetup is here:
> https://etherpad.openstack.org/p/ironic-mitaka-midcycle
> It also contains all of these details.
> We may start additional etherpads for some topics during the meeting;
> those will be linked from the main etherpad.
> 
> If you need help during the midcycle, here's a good list of people to
> ping in IRC, who will be at most of the time slots:
> * jroll (me)
> * devananda
> * jlvillal
> * TheJulia
> 
> If you have any questions or comments, please reply to this email or
> ping me directly in IRC.
> 
> Hope to see you all there! :)
> 
> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Andrew Laski



On Tue, Feb 16, 2016, at 07:54 AM, Alex Xu wrote:
>
> 2016-02-16 19:53 GMT+08:00 Sean Dague:
>> On 02/12/2016 03:55 PM, Andrew Laski wrote:
>> > Starting a new thread to continue a thought that came up in
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html.
>> > The Nova API microversion framework allows for backwards compatible and
>> > backwards incompatible changes but there is no way to programmatically
>> > distinguish the two. This means that as a user of the API I need to
>> > understand every change between the version I'm using now and a new
>> > version I would like to move to in case an intermediate version changes
>> > default behaviors or removes something I'm currently using.
>> >
>> > I would suggest that a more user friendly approach would be to
>> > distinguish the two types of changes. Perhaps something like 2.x.y where
>> > x is bumped for a backwards incompatible change and y is still
>> > monotonically increasing regardless of bumps to x. So if the current
>> > version is 2.2.7 a new backwards compatible change would bump to 2.2.8
>> > or a new backwards incompatible change would bump to 2.3.8. As a user
>> > this would allow me to fairly freely bump the version I'm consuming
>> > until x changes at which point I need to take more care in moving to a
>> > new version.
>> >
>> > Just wanted to throw the idea out to get some feedback. Or perhaps this
>> > was already discussed and dismissed when microversions were added and I
>> > just missed it.
>>
>> Please no.
>>
>> We specifically stated many times that microversions aren't semver. Each
>> version is just that.
>>
>> Semver only makes sense when you are always talking to one installation,
>> and the version numbers can only increase. When your code retargets to
>> multiple installations version numbers can very easily go backwards. So
>> unless a change is compatible forward and backwards, it's a breaking
>> change for someone.
>
> indeed, learned this point.

Fair enough, I wasn't thinking a lot about moving between installations
just that we've hidden information within one installation.

Since any change except one that is backwards and forwards compatible is
a breaking change for users of multiple clouds what is essentially being
said is that we have a new API with every microversion. Given that I
wonder if we shouldn't make a stronger statement that the API differs,
as in why even have a 2. prefix which implies that 2.x has some relation
to 2.x+1 when it doesn't.

It was mentioned elsewhere in the thread that we have a hard time
knowing what's going to end up being compatible or not before it's
released. This seems like something we should be able to determine and
indicate somehow, even just through docs, otherwise we're passing that
burden on to users to determine for themselves.

I very much like that microversions have enabled development to move
forward on the API without the mess of extensions that we had
previously. I fear that we have no real measurement of the cost of
consuming the API under this new scheme. It's easy enough to think that
users will just read the docs and carefully consider every version
increment that they want to consume but when they've been on version 2.7
for a while and a new thing comes out in 2.35 that they want, they need
to fully digest the implications of all 27 intervening versions purely
through docs and with the understanding that literally almost anything
about the semantics can have changed. So while I love the freedom that
it provides to developers I think it would be useful to have a small set
of constraints in place that helps users. Of course all of my ideas have
been duds so far and perhaps that's because I'm imagining future
scenarios that won't come to pass or that we don't care about. But
something has me concerned and I can't quite get my finger on it.
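One concrete (and entirely hypothetical) way to ease that burden would be a machine-readable changelog that marks each microversion as compatible or not, so a client pinned at 2.7 could list exactly which of the intervening versions actually need digesting before moving to 2.35:

```python
# Invented data: what a machine-readable microversion changelog could carry.
CHANGELOG = {
    8: {"compatible": True,  "summary": "adds a field to a response"},
    9: {"compatible": False, "summary": "changes a default behavior"},
    10: {"compatible": True, "summary": "accepts a new optional parameter"},
}

def must_review(current_minor, target_minor, changelog=CHANGELOG):
    """Microversions between the current pin and the target that are marked
    incompatible; versions absent from the changelog count as compatible."""
    return [minor for minor in range(current_minor + 1, target_minor + 1)
            if not changelog.get(minor, {"compatible": True})["compatible"]]
```

Even just a per-version compatibility flag in the docs would let tooling like this exist; nothing of the sort is implied by the current framework.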


>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Sylvain Bauza



Le 16/02/2016 13:01, Sean Dague a écrit :

On 02/16/2016 03:33 AM, Sylvain Bauza wrote:


Le 16/02/2016 09:30, Sylvain Bauza a écrit :


Le 16/02/2016 04:09, Alex Xu a écrit :


2016-02-16 9:47 GMT+08:00 GHANSHYAM MANN:

 Regards
 Ghanshyam Mann


 On Mon, Feb 15, 2016 at 12:07 PM, Alex Xu wrote:
 > If we support 2.x.y, when we bump 'x' is a problem. We didn't
 order the API
 > changes for now, the version of API change is just based on the
 order of
 > patch merge. For support 2.x.y, we need bump 'y' first for
 back-compatible
 > changes I guess.
 >
 > As I remember, we said before, the new feature is the
 motivation of user
 > upgrade their client to support new version API, whatever the
 new version is
 > backward compatible or incompatible. So I guess the initial
 thinking we hope
 > user always upgrade their code than always stop at old version?
 If we bump
 > 'x' after a lot of 'y', will that lead to user always stop at
 'x' version?
 > And the evolution of api will slow down.
 >
 > Or we limit to each release cycle. In each release, we bump 'y'
 first, and
 > then bump 'x'. Even there isn't any back-incompatible change in
 the release.
 > We still bump 'x' when released. Then we can encourage user
 upgrade their
 > code. But I still think the back-incompatible API change will
 be slow down
 > in development, as it need always merged after back-compatible
 API change
 > patches.

 Yea, that's true, and it will be more complicated from a development
 perspective, which slows down the evolution of API changes.
 But if we support x.y, can we still change x at any time a
 backwards-incompatible change happens (I mean before y also)? Or I may
 not be getting the issue you mentioned about always bumping y before x.


If a backwards-incompatible change is merged before a backwards-compatible
change, then 'y' becomes useless. For example, say the initial version is
2.1.0 and we then have 3 compatible and 3 incompatible changes. If we are
unlucky and the incompatible changes merge first, we get version 2.4.3;
then, if a user wants those compatible changes, they still need to absorb
the 3 incompatible changes.
  



 I like the idea of distinguishing backwards-compatible and -incompatible
 changes with x and y, which always gives a clear perspective on changes.
 But it should not lead users to ignore y. I mean, some backwards-compatible
 changes which are really good might get ignored by users as they start
 looking at x only.
 For example, "adding an attribute to a resource representation" is a
 backwards-compatible change (if so), and if that is added as y then it
 might get ignored by users.

 Another way to clearly distinguish backwards-compatible and -incompatible
 changes is through documentation, which was initially discussed during the
 microversion specs. Currently the doc has a good description of each
 change but no clear indication of whether it is backwards compatible,
 which we could do by adding a clear flag [Backward Compatible/
 Incompatible] for each version in the doc [1]-


+1 for doc the change is backward comp or not.

I'm not usually good at thinking through API references, but something
pinged my brain, so let me know if this is terrible or not.
Why not semantically say that :
  - if the API microversion is a ten, then it's a non-backwards
compatible change
  - if not, it's backwards-compatible

If you are, say, at version #29 and add a new backwards-compatible version,
then it would be #31 (and not #30).

That way, you would still have a monotonic increase, which I think was
an agreement when discussing about microversioning, but it would help
the users which would know the semantics and just look whether a ten
is between the version they use and the version they want (and if so,
if it was implemented).

Call me dumb, it's just a thought.
-Sylvain


One slight improvement could be to consider hundreds and not tens for
major versions. That would leave 99 'minor' versions between majors,
which I think is doable.

No, please no.

https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/ - the
backwards compatibility fallacy. We've been here before during the
design of the system. Declaring additional semantic meaning to version
numbers is just complexity for very little gain.

It also means we're going to have another hard to figure out thing with
every change. Is this backwards compatible or not? Remember, we have
strict json schema checking, so adding a parameter is not backwards
compatible.

Everyone thinks they know what's a compatible change, until they
release, then break people using the interface in ways that were
unexpected (see: weekly breaks from pypi).

The expectation is that client programs are going to do the following:

GLOBAL_COMPUTE_VERSION = 2.15

... write lots of 
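The pinning pattern being described boils down to sending one hardcoded, tested microversion on every request. A minimal sketch (the header name is Nova's real one; the helper functions are invented):

```python
# Pin the whole client program to the one microversion it was written and
# tested against, and check it against what each cloud advertises.

GLOBAL_COMPUTE_VERSION = "2.15"

def compute_headers(version=GLOBAL_COMPUTE_VERSION):
    """Headers that pin every compute request to one microversion."""
    return {"X-OpenStack-Nova-API-Version": version}

def usable(pinned, server_min, server_max):
    """Check the pinned version against the [min, max] range a cloud
    advertises in its version document, since clouds differ."""
    def v(s):
        major, minor = s.split(".")
        return (int(major), int(minor))
    return v(server_min) <= v(pinned) <= v(server_max)
```

Against a cloud whose maximum is below 2.15 the code must refuse to run or fall back, which is exactly why version numbers can effectively "go backwards" when retargeting installations.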

Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Alex Xu
2016-02-16 19:53 GMT+08:00 Sean Dague :

> On 02/12/2016 03:55 PM, Andrew Laski wrote:
> > Starting a new thread to continue a thought that came up in
> >
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html
> .
> > The Nova API microversion framework allows for backwards compatible and
> > backwards incompatible changes but there is no way to programmatically
> > distinguish the two. This means that as a user of the API I need to
> > understand every change between the version I'm using now and a new
> > version I would like to move to in case an intermediate version changes
> > default behaviors or removes something I'm currently using.
> >
> > I would suggest that a more user friendly approach would be to
> > distinguish the two types of changes. Perhaps something like 2.x.y where
> > x is bumped for a backwards incompatible change and y is still
> > monotonically increasing regardless of bumps to x. So if the current
> > version is 2.2.7 a new backwards compatible change would bump to 2.2.8
> > or a new backwards incompatible change would bump to 2.3.8. As a user
> > this would allow me to fairly freely bump the version I'm consuming
> > until x changes at which point I need to take more care in moving to a
> > new version.
> >
> > Just wanted to throw the idea out to get some feedback. Or perhaps this
> > was already discussed and dismissed when microversions were added and I
> > just missed it.
>
> Please no.
>
> We specifically stated many times that microversions aren't semver. Each
> version is just that.
>
> Semver only makes sense when you are always talking to one installation,
> and the version numbers can only increase. When your code retargets to
> multiple installations version numbers can very easily go backwards. So
> unless a change is compatible forward and backwards, it's a breaking
> change for someone.
>

indeed, learned this point.


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] About how to make the vm use volume with libiscsi

2016-02-16 Thread Xiao Ma (xima2)
Hi, All

I want to make the qemu communicate with iscsi target using libiscsi directly, 
and I
followed https://review.openstack.org/#/c/135854/ to add
'volume_drivers = iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver' in 
nova.conf
 and then restarted nova services and cinder services, but still the volume 
configuration of the vm is as below:


  
  
  
  076bb429-67fd-4c0c-9ddf-0dc7621a975a
  



I use centos7 and Liberty version of OpenStack.
Could anybody tell me how I can achieve this?
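For reference, when the attachment goes through qemu's built-in libiscsi initiator (via LibvirtNetVolumeDriver), the disk section of the domain XML should be a network disk rather than a host-attached one. A rough illustration follows; the IQN, host address, and device names here are made up, and only the volume serial is taken from the output above:

```xml
<disk type='network' device='disk'>
  <!-- qemu opens the iSCSI target itself instead of using a host block device -->
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='iscsi' name='iqn.2010-10.org.openstack:volume-076bb429-67fd-4c0c-9ddf-0dc7621a975a/1'>
    <host name='192.168.2.103' port='3260'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <serial>076bb429-67fd-4c0c-9ddf-0dc7621a975a</serial>
</disk>
```

If the generated XML still shows a host-attached block device, the `volume_drivers` override in nova.conf is probably not being picked up by nova-compute.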


Thanks.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][all] Integration python-*client tests on gates

2016-02-16 Thread Sean Dague
On 02/15/2016 02:48 PM, Ivan Kolodyazhny wrote:
> Hi all,
> 
> I'll talk mostly about python-cinderclient but the same question could
> be related for other clients.
> 
> Now, for python-cinderclient we've got two kinds of
> functional/integration jobs:
> 
> 1) gate-cinderclient-dsvm-functional - a very limited (for now) set of
> functional tests, most of them were part of tempest CLI tests in the past.
> 
> 2) gate-tempest-dsvm-neutron-src-python-cinderclient - if I understand
> right, the idea of this job was to have integrated tests that test
> cinderclient with other projects, to verify that a new patch to
> python-cinderclient won't break any other project.
> But it does *not* test cinderclient at all, except for a few attach-related
> tests, because Tempest doesn't use python-*client.

This does test the real world usage of Nova consuming
python-cinderclient. That's why it's still there. This ensures that a
cinderclient upcoming release won't completely tank the integrated gate.
All openstack libraries that get used by all servers in openstack have
something equivalent.

> The same job was added for python-heatclient but was removed because
> devstack didn't install Heat for that job [1].
> 
> We agreed [2] to remove this job from cinderclient gates too, once
> functional or integration tests are implemented.

Um, what now?

> There is a proposal for python-cinderclient tests to implement some
> cross-project testing to make sure that a new python-cinderclient won't
> break any existing project that uses it.
> 
> After discussing in IRC with John Griffith (jgriffith) I realized that
> there could be a cross-project initiative for this kind of integration
> tests. OpenStack Client (OSC) could cover some part of such tests, but
> does it mean that we'll run OSC tests on every patch to python-*client?
> We can run only cinder-related OSC tests on our gates to verify that it
> doesn't break OSC and, maybe, other projects.
> 
> The other option, is to implement tests like [3] per project basis and
> call it "integration".  Such tests could cover more cases than OSC
> functional tests and have more project-related test cases, e.g.: test
> some python-cinderclient specific corner cases, which is not related to OSC.
> 
> IMO, it would be good to have a cross-project decision on how to
> implement clients' integration tests per project.
> 
> 
> [1] https://review.openstack.org/#/c/272411/
> [2]
> http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-12-16-16.00.log.html
> [3] https://review.openstack.org/#/c/279432/8
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Sean Dague
On 02/16/2016 03:33 AM, Sylvain Bauza wrote:
> 
> 
> Le 16/02/2016 09:30, Sylvain Bauza a écrit :
>>
>>
>> Le 16/02/2016 04:09, Alex Xu a écrit :
>>>
>>>
>>> 2016-02-16 9:47 GMT+08:00 GHANSHYAM MANN:
>>>
>>> Regards
>>> Ghanshyam Mann
>>>
>>>
>>> On Mon, Feb 15, 2016 at 12:07 PM, Alex Xu wrote:
>>> > If we support 2.x.y, when we bump 'x' is a problem. We didn't
>>> order the API
>>> > changes for now, the version of API change is just based on the
>>> order of
>>> > patch merge. For support 2.x.y, we need bump 'y' first for
>>> back-compatible
>>> > changes I guess.
>>> >
>>> > As I remember, we said before, the new feature is the
>>> motivation of user
>>> > upgrade their client to support new version API, whatever the
>>> new version is
>>> > backward compatible or incompatible. So I guess the initial
>>> thinking we hope
>>> > user always upgrade their code than always stop at old version?
>>> If we bump
>>> > 'x' after a lot of 'y', will that lead to user always stop at
>>> 'x' version?
>>> > And the evolution of api will slow down.
>>> >
>>> > Or we limit to each release cycle. In each release, we bump 'y'
>>> first, and
>>> > then bump 'x'. Even there isn't any back-incompatible change in
>>> the release.
>>> > We still bump 'x' when released. Then we can encourage user
>>> upgrade their
>>> > code. But I still think the back-incompatible API change will
>>> be slow down
>>> > in development, as it need always merged after back-compatible
>>> API change
>>> > patches.
>>>
>>> Yea that true and will be more complicated from development
>>> perspective which leads to slow down the evolution of API changes.
>>> But if we support x.y then still we can change x at any time back
>>> in-comp changes happens(i mean before y also)? Or I may not be
>>> getting
>>> the issue you mentioned about always bump y before x.
>>>
>>>
>>> If a back-incompatible change is merged before a back-compatible change,
>>> then 'y' becomes useless. For example, the initial version is 2.1.0,
>>> then we have 3 back-comp and 3 in-comp changes, and we are unlucky:
>>> the in-comp changes merge first, so we get version 2.4.3. Then if users
>>> want to use those back-comp changes, they still need to adopt those 3
>>> in-comp changes.
>>>  
>>>
>>>
>>> I like the idea of distinguish the backward comp and in-comp changes
>>> with x and y which always gives clear perspective about changes.
>>> But it should not lead users to ignore y. I mean some backward-comp
>>> changes which are really good get ignored by users as they start
>>> looking at x only.
>>> For example, "adding an attribute to a resource representation" is a
>>> back-comp change (if so), and if that is added as y then it might get
>>> ignored by users.
>>>
>>> Another way to clearly distinguish backward comp and in-comp changes
>>> is through documentation which was initially discussed during
>>> microversion specs. Currently doc has good description about each
>>> changes but not much clear way about backward comp or not.
>>> Which we can do by adding a clear flag [Backward Compatible/
>>> Incompatible] for each version in doc [1]-
>>>
>>>
>>> +1 for doc the change is backward comp or not.
>>
>> I'm not usually good at thinking API references, but something pinged
>> my brain so lemme know if that's terrible or not.
>> Why not semantically say that :
>>  - if the API microversion is a ten, then it's a non-backwards
>> compatible change
>>  - if not, it's backwards-compatible
>>
>> If you are, say, at version #29 and add a new
>> backwards-compatible version, then it would be #31 (and not #30).
>>
>> That way, you would still have a monotonic increase, which I think was
>> an agreement when discussing about microversioning, but it would help
>> the users which would know the semantics and just look whether a ten
>> is between the version they use and the version they want (and if so,
>> if it was implemented).
>>
>> Call me dumb, it's just a thought.
>> -Sylvain
>>
> 
> One slight improvement could be to consider hundreds and not tens for
> major versions. That would leave 99 'minor' versions between majors,
> which I think is doable.
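The quoted "tens" convention can be made concrete with a small client-side check. This is purely illustrative of the proposal being discussed; it is not how Nova microversions actually work, and Sean rejects the idea below:

```python
def crosses_major_boundary(current, target, step=10):
    """Sketch of the proposed convention (hypothetical): microversions
    that are multiples of `step` are reserved for backwards-incompatible
    changes, so an upgrade needs care only if such a boundary lies in
    the interval (current, target]."""
    return any(v % step == 0 for v in range(current + 1, target + 1))
```

A client at #29 moving to #31 would cross the (hypothetical) boundary at #30 and would have to review the change log, while #21 to #29 would be freely consumable.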

No, please no.

https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/ - the
backwards compatibility fallacy. We've been here before during the
design of the system. Declaring additional semantic meaning to version
numbers is just complexity for very little gain.

It also means we're going to have another hard to figure out thing with
every change. Is this backwards compatible or not? Remember, we have
strict json schema checking, so adding a parameter is not backwards
compatible.
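The effect of strict checking can be shown with a minimal stand-in (Nova's real validation uses jsonschema with additional properties disallowed; this toy function only illustrates the consequence):

```python
def validate_strict(payload, allowed_keys):
    """Toy model of strict schema checking: any key outside the declared
    schema is rejected. This is why merely *adding* a request parameter
    is not a backwards-compatible change, as an old server rejects it."""
    extra = set(payload) - set(allowed_keys)
    if extra:
        raise ValueError("additional properties not allowed: %s"
                         % ", ".join(sorted(extra)))
    return True
```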

Everyone thinks they know 

Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Sean Dague
On 02/12/2016 03:55 PM, Andrew Laski wrote:
> Starting a new thread to continue a thought that came up in
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html.
> The Nova API microversion framework allows for backwards compatible and
> backwards incompatible changes but there is no way to programmatically
> distinguish the two. This means that as a user of the API I need to
> understand every change between the version I'm using now and a new
> version I would like to move to in case an intermediate version changes
> default behaviors or removes something I'm currently using.
> 
> I would suggest that a more user friendly approach would be to
> distinguish the two types of changes. Perhaps something like 2.x.y where
> x is bumped for a backwards incompatible change and y is still
> monotonically increasing regardless of bumps to x. So if the current
> version is 2.2.7 a new backwards compatible change would bump to 2.2.8
> or a new backwards incompatible change would bump to 2.3.8. As a user
> this would allow me to fairly freely bump the version I'm consuming
> until x changes at which point I need to take more care in moving to a
> new version.
> 
> Just wanted to throw the idea out to get some feedback. Or perhaps this
> was already discussed and dismissed when microversions were added and I
> just missed it.

Please no.

We specifically stated many times that microversions aren't semver. Each
version is just that.

Semver only makes sense when you are always talking to one installation,
and the version numbers can only increase. When your code retargets to
multiple installations version numbers can very easily go backwards. So
unless a change is compatible forwards and backwards, it's a breaking
change for someone.

-Sean

-- 
Sean Dague
http://dague.net
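For readers skimming the thread, the 2.x.y scheme quoted at the top of this message would imply a client-side check roughly like the following. This is a sketch of the proposal only; Nova did not adopt it:

```python
def parse_version(version):
    """Split a proposed '2.x.y' string into integer components."""
    major, incompat, compat = (int(part) for part in version.split("."))
    return major, incompat, compat

def upgrade_needs_review(current, target):
    # Under the proposed scheme, only a change of the middle component
    # (x) signals a backwards-incompatible change, so that is the only
    # case where the consumer must take extra care when moving versions.
    return parse_version(target)[:2] != parse_version(current)[:2]
```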

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Does nova API allow the server_id parem as DB index?

2016-02-16 Thread Sean Dague
This was needed originally for ec2 support (which requires an integer
id). It's not really the db index per se, just another id value which
is valid (though hidden) for the server.

Before unwinding this issue we *must* make sure that the openstack/ec2
project does not need access to it.

On 02/15/2016 09:36 PM, Alex Xu wrote:
> I don't think supporting getting servers by DB index in our API is a good idea. So
> I prefer we remove it in the future with microversions. But for now,
> yes, it is here.
> 
> 2016-02-16 8:03 GMT+08:00 少合冯:
> 
> I guess others may ask the same questions. 
> 
> I read the nova API doc: 
> such as this API: 
> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
> 
> GET /v2.1/​{tenant_id}​/servers/​{server_id}​
> *Show server details*
> 
> 
> *Request parameters*
> Parameter   Style   Type         Description
> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
> server_id   URI     csapi:UUID   The UUID of the server.
> 
> 
> But I can get the server by DB index: 
> 
> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
> {
> "server": {
> "OS-DCF:diskConfig": "MANUAL",
> "OS-EXT-AZ:availability_zone": "nova",
> "OS-EXT-SRV-ATTR:host": "shaohe1",
> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
> "OS-EXT-STS:power_state": 1,
> "OS-EXT-STS:task_state": "migrating",
> "OS-EXT-STS:vm_state": "error",
> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
> "OS-SRV-USG:terminated_at": null,
> ..
> }
> }
> 
> and the code really allows using the DB index:
> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net
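The dual lookup being discussed, accepting either a UUID or the hidden integer DB index, can be sketched as follows. This is illustrative only; `FakeDB` is a made-up stand-in, not Nova's actual compute API code:

```python
import uuid

def is_uuid_like(value):
    """Return True if the value parses as a UUID."""
    try:
        uuid.UUID(str(value))
        return True
    except (TypeError, ValueError):
        return False

class FakeDB:
    """Stand-in for the real instance store (illustrative only)."""
    def __init__(self, instances):
        self.instances = instances  # list of dicts with 'id' and 'uuid'

    def get_by_uuid(self, val):
        return next(i for i in self.instances if i["uuid"] == val)

    def get_by_id(self, val):
        return next(i for i in self.instances if i["id"] == val)

def get_instance(db, instance_id):
    # Dispatch on identifier type: UUID string first, then the legacy
    # integer DB index that this thread proposes to retire.
    if is_uuid_like(instance_id):
        return db.get_by_uuid(str(instance_id))
    if str(instance_id).isdigit():
        return db.get_by_id(int(instance_id))
    raise ValueError("invalid instance identifier: %r" % (instance_id,))
```

Removing the integer path would mean deleting the second branch, which is why the ec2 question above matters: anything still passing bare integers would start getting errors.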

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Contributor Awards

2016-02-16 Thread Tom Fifield

Hi all,

I'd like to introduce a new round of community awards handed out by the 
Foundation, to be presented at the feedback session of the summit.


Nothing flashy or starchy - the idea is that these are to be a little 
informal, quirky ... but still recognising the extremely valuable work 
that we all do to make OpenStack excel.


There are so many different areas worthy of celebration, but we think that 
there are a few main chunks of the community that need a little love:


* Those who might not be aware that they are valued, particularly new 
contributors

* Those who are the active glue that binds the community together
* Those who share their hard-earned knowledge with others and mentor
* Those who challenge assumptions, and make us think

Since it's the first time (recently, at least), rather than starting with a 
defined set of awards, we'd like to have submissions of names in those 
broad categories. Then we'll have a little bit of fun on the back-end 
and try to come up with something that isn't just your standard set of 
award titles, and iterate to success ;)


The submission form is here, so please submit anyone who you think is 
deserving of an award!




https://docs.google.com/forms/d/1HP1jAobT-s4hlqZpmxoGIGTxZmY6lCWolS3zOq8miDk/viewform



In the meantime, let's use this thread to discuss the fun part: goodies. 
What do you think we should lavish award winners with? Soft toys? 
Perpetual trophies? Baseball caps?



Regards,


Tom, on behalf of the Foundation team



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Does nova API allow the server_id parem as DB index?

2016-02-16 Thread Alex Xu
I think I missed one problem. I didn't find anywhere that a user can get the DB
index from the nova API. So even though the 'show' action has the ability to show
by DB index, actually nobody can use that, because there isn't a way
to query the DB index for a server, except by querying the DB directly.
For this reason, I think we can remove it without a microversion.

2016-02-16 16:18 GMT+08:00 Chen CH Ji :

> +1 for no microversion, it's internal implementation and we should be free
> to remove it since we didn't document it anywhere


>
>
> -GHANSHYAM MANN  wrote: -
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> From: GHANSHYAM MANN 
> Date: 02/16/2016 03:49AM
> Subject: Re: [openstack-dev] [Nova][API] Does nova API allow the server_id
> parem as DB index?
>
>
> Yes, currently Nova supports that for show/update/delete server APIs etc.
> (both v2 and v2.1), and python-novaclient does too. But I think that was old
> behaviour, mainly for the ec2 API?
>
> I searched the ec2 repo [1] and they get the instance from nova using the UUID;
> I did not find any place where they fetch using the id. But I'm not sure if
> an external interface directly fetches that on nova by 'id'.
>
> But apart from that, maybe some users are using 'id' instead of 'uuid', though
> that was not recommended or documented anywhere. So in that case can we
> remove this old behaviour without a version bump?
>
>
> [1].. https://github.com/openstack/ec2-api
>
> Regards
> Ghanshyam Mann
>
> On Tue, Feb 16, 2016 at 11:24 AM, Anne Gentle <
> annegen...@justwriteclick.com> wrote:
>
>>
>>
>> On Mon, Feb 15, 2016 at 6:03 PM, 少合冯 wrote:
>>
>>> I guess others may ask the same questions.
>>>
>>> I read the nova API doc:
>>> such as this API:
>>> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
>>>
>>> GET /v2.1/​{tenant_id}​/servers/​{server_id}​
>>> *Show server details*
>>>
>>>
>>> *Request parameters*
>>> Parameter   Style   Type         Description
>>> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
>>> server_id   URI     csapi:UUID   The UUID of the server.
>>>
>>> But I can get the server by DB index:
>>>
>>> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
>>> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
>>> {
>>> "server": {
>>> "OS-DCF:diskConfig": "MANUAL",
>>> "OS-EXT-AZ:availability_zone": "nova",
>>> "OS-EXT-SRV-ATTR:host": "shaohe1",
>>> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
>>> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
>>> "OS-EXT-STS:power_state": 1,
>>> "OS-EXT-STS:task_state": "migrating",
>>> "OS-EXT-STS:vm_state": "error",
>>> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
>>> "OS-SRV-USG:terminated_at": null,
>>> ..
>>> }
>>> }
>>>
>>> and the code really allows using the DB index:
>>> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
>>>
>>>
>> Nice find. Can you log this as an API bug and we'll triage it -- can even
>> help you fix it on the site if you like.
>>
>> https://bugs.launchpad.net/openstack-api-site/+filebug
>>
>> Basically, click that link, write a short summary, then copy and paste in
>> this email's contents, it has lots of good info.
>>
>> Let me know if you'd also like to fix the bug on the site.
>>
>> And hey nova team, if you think it's actually an API bug, we'll move it
>> over to you.
>>
>> Thanks for reporting it!
>> Anne
>>
>>
>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Anne Gentle
>> Rackspace
>> Principal Engineer
>> www.justwriteclick.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack 

Re: [openstack-dev] [Nova] notification subteam meeting

2016-02-16 Thread Balázs Gibizer
Hi, 

The next meeting of the nova notification subteam will happen 2016-02-16 
Tuesday _17:00_ UTC [1] on #openstack-meeting-alt on freenode 

Agenda:
- Status of the outstanding specs and code reviews
- AOB

See you there.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160216T20  
[2] https://wiki.openstack.org/wiki/Meetings/NovaNotification


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel plugins: lets have some rules

2016-02-16 Thread Evgeniy L
Hi,

I have some comments on CI for plugins. Currently there is a good
instruction on how to install your own CI and write your own tests
using fuel-qa [0]. Since just running BVT is not enough to make sure that a plugin
works, we should provide a way for a plugin developer to easily extend the
checks/asserts which are done during BVT by just putting python files into
the plugin-name/tests directory. This way we will be able to install the plugin
(enable it) and perform plugin-specific checks.

Thanks,

[0] https://wiki.openstack.org/wiki/Fuel/Plugins#CI

On Wed, Feb 3, 2016 at 5:27 AM, Dmitry Borodaenko 
wrote:

> It has been over a year since pluggable architecture was introduced in
> Fuel 6.0, and I think it's safe to declare it an unmitigated success. A
> search for "fuel-plugin" on GitHub brings up 167 repositories [0],
> there's 63 Fuel plugin repositories on review.openstack.org [1], 25 Fuel
> plugins are listed in the DriverLog [2].
>
> [0] https://github.com/search?q=fuel-plugin-
> [1]
> https://review.openstack.org/#/admin/projects/?filter=openstack%252Ffuel-plugin-
> [2] http://stackalytics.com/report/driverlog?project_id=openstack%2Ffuel
>
> Even though the plugin engine is not yet complete (there still are
> things you can do in Fuel core that you cannot do in a plugin), dozens
> of deployers and developers [3] used it to expand Fuel capabilities
> beyond the limitations of our default reference architecture.
>
> [3] http://stackalytics.com/report/contribution/fuel-plugins-group/360
>
> There's a noticeable bump in contributions around October 2015 after
> Fuel 7.0 was released, most likely inspired by the plugin engine
> improvements introduced in that version [4]. As we continue to expand
> plugins capabilities, I expect more and more plugins to appear.
>
> [4]
> https://git.openstack.org/cgit/openstack/fuel-docs/tree/pages/release-notes/v7-0/new_features/plugins.rst?h=stable/7.0
>
> The question of how useful exactly all those plugins are is a bit harder
> to answer. DriverLog isn't much help: less than half of Fuel plugins
> hosted on OpenStack infrastructure are even registered there, and of
> those that are, only 6 have CI jobs with recent successful runs. Does
> this mean that 90% of Fuel plugins are broken and unmaintained? Not
> necessarily, but it does mean that we have no way to tell.
>
> An even harder question is: once we determine that some plugins are more
> equal than others, what should we do about the less useful and the less
> actively maintained?
>
> To objectively answer both questions, we need to define support levels
> for Fuel plugins and set some reasonable expectations about how plugins
> can qualify for each level.
>
> Level 3. Plugin is not actively supported
>
> I believe that having hundreds of Fuel plugins out there on GitHub and
> elsewhere is great, and we should encourage people to create more of
> those and do whatever they like with them. Even a single-commit "deploy
> and forget" plugin is useful as an idea, a source of inspiration, and a
> starting point for other people who might want to take it further.
>
> At this level, there should be zero expectations and zero obligations
> between Fuel plugin writers and OpenStack community. At the moment, Fuel
> plugins developers guide recommends [5] to request a Gerrit repo in the
> openstack/ namespace and set up branches, tags, CI, and a code review
> process around it, aligned with OpenStack development process. Which is
> generally a good idea, except for all the cases where it's too much
> overhead and ends up not being followed closely enough to be useful.
>
> [5] https://wiki.openstack.org/wiki/Fuel/Plugins#Repo
>
> Instead of vague blanket recommendations, we should explicitly state that
> it's fine to do none of that and just stay on GitHub, and that if you
> intend to move to the next level and actively maintain your plugin, and
> expect support with that from Fuel developers and other OpenStack
> projects, these recommendations are not optional and must be fulfilled.
>
> Level 2. Plugin is actively supported by its registered maintainers
>
> To support a Fuel plugin, we need to answer two fundamental questions:
> Can we? Should we?
>
> I think the minimum requirements to say "yes" to both are:
>
> a) All of the plugin's source code is explicitly licensed under an
>OSI-approved license;
>
> b) The plugin source code repository does not contain binary artefacts
>such as RPM packages or ISO images (*);
>
> c) The plugin is registered in DriverLog;
>
> d) Plugin maintainers listed in DriverLog have confirmed the intent to
>support the plugin;
>
> e) Plugin repository on review.openstack.org has a voting CI job that is
>passing with the latest or, at least, previous major release of Fuel.
>
> f) All deviations from the OpenStack development process (alternative
>issue trackers, mailing lists, etc.) are documented in the plugin's
>README file.
>
> *  Aside from purely 

Re: [openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-16 Thread Evgeniy L
That is awesome, happy to finally see it enabled!

On Mon, Feb 15, 2016 at 9:32 PM, Anastasia Urlapova 
wrote:

> Aleksey, great news!
>
> On Mon, Feb 15, 2016 at 7:36 PM, Alexey Shtokolov wrote:
>
>> Fuelers,
>>
>> Task based deployment engine has been enabled in master (Fuel 9.0) by
>> default [0]
>>
>> [0] - https://review.openstack.org/#/c/273693/
>>
>> WBR, Alexey Shtokolov
>>
>> 2016-02-09 21:57 GMT+03:00 Vladimir Kuklin :
>>
>>> Folks
>>>
>>> It seems that docker removal spoilt our celebration a bit. Here is a bug
>>> link https://bugs.launchpad.net/fuel/+bug/1543720 . Fix is trivial, but
>>> will postpone swarm run for another day. Nevertheless, it seems to be the
>>> only issue affecting our ability to use TBD.
>>>
>>> Stay tuned!
>>>
>>> On Tue, Feb 9, 2016 at 2:26 PM, Igor Kalnitsky 
>>> wrote:
>>>
 > I've run BVT more than 100 times, it works,

 You run it some time ago. There were a lot of opportunities to
 introduce regression in both Nailgun and tasks of Fuel Library. ;)

 > We are going to run a swarm test today against the ISO with enabled
 task-based deployment

 So there will be a custom ISO, correct? If so, it works for me and
 I'll wait for its result.

 On Tue, Feb 9, 2016 at 1:17 PM, Alexey Shtokolov
  wrote:
 > Igor,
 >
 > We are going to run a swarm test today against the ISO with enabled
 > task-based deployment, than check results and merge changes tomorrow.
 > I've run BVT more than 100 times, it works, but I would like to check
 more
 > deployment cases.
 > And I guess it should be easy to troubleshoot if docker-related and
 > task-based related changes will be separated by a few days.
 >
 > 2016-02-09 13:39 GMT+03:00 Igor Kalnitsky :
 >>
 >> Well, I'm going to build a new ISO and run BVT. As soon as they are
 >> green, I'm going to approve the change.
 >>
 >> On Tue, Feb 9, 2016 at 12:32 PM, Bogdan Dobrelya <
 bdobre...@mirantis.com>
 >> wrote:
 >> > On 08.02.2016 17:05, Igor Kalnitsky wrote:
 >> >> Hey Fuelers,
 >> >>
 >> >> When we are going to enable it? I think since HCF is passed for
 >> >> stable/8.0, it's time to enable task-based deployment for master
 >> >> branch.
 >> >>
 >> >> Opinion?
 >> >
 >> > This must be done for the 9.0, IMHO.
 >> >
 >> >>
 >> >> - Igor
 >> >>
 >> >> On Wed, Feb 3, 2016 at 12:31 PM, Bogdan Dobrelya
 >> >>  wrote:
 >> >>> On 02.02.2016 17:35, Alexey Shtokolov wrote:
 >>  Hi Fuelers!
 >> 
 >>  As you may be aware, since [0] Fuel has implemented a new
 >>  orchestration
 >>  engine [1]
 >>  We switched the deployment paradigm from role-based (aka
 granular) to
 >>  task-based and now Fuel can deploy all nodes simultaneously
 using
 >>  cross-node dependencies between deployment tasks.
 >> >>>
 >> >>> That is great news! Please do not forget about docs updates as
 well.
 >> >>> Those docs are always forgotten like poor orphans... I submitted
 a
 >> >>> patch
 >> >>> [0] to MOS docs, please review and add more details, if
 possible, for
 >> >>> plugins impact as well.
 >> >>>
 >> >>> [0] https://review.fuel-infra.org/#/c/16509/
 >> >>>
 >> 
 >>  This feature is experimental in Fuel 8.0 and will be enabled by
 >>  default
 >>  for Fuel 9.0
 >> 
 >>  Allow me to show you the results. We made some benchmarks on
 our bare
 >>  metal lab [2]
 >> 
 >>  Case #1. 3 controllers + 7 computes w/ ceph.
 >>  Task-based deployment takes *~38* minutes vs *~1h15m* for
 granular
 >>  (*~2*
 >>  times faster)
 >>  Here and below the deployment time is average time for 10 runs
 >> 
 >>  Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
 >>  Task-based deployment takes *~41* minutes vs *~1h32m* for
 granular
 >>  (*~2.24* times faster)
 >> 
 >> 
 >> 
 >>  Also we took measurements for Fuel CI test cases. Standard BVT
 >>  (Master
 >>  node + 3 controllers + 3 computes w/ ceph. All are in qemu VMs
 on one
 >>  host)
 >> 
 >>  Fuel CI slaves with *4 *cores *~1.1* times faster
 >>  In case of 4 cores for 7 VMs they are fighting for CPU
 resources and
 >>  it
 >>  marginalizes the gain of task-based deployment
 >> 
 >>  Fuel CI slaves with *6* cores *~1.6* times faster
 >> 
 >>  Fuel CI slaves with *12* cores *~1.7* times faster
 >> >>>
 >> >>> These are really outstanding results!
 >> >>> 
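The task-based engine described in this thread schedules deployment tasks by their cross-node dependencies rather than serializing whole roles; the scheduling idea can be sketched as grouping a dependency graph into concurrent waves (illustrative only, not Fuel's actual orchestration code):

```python
def deployment_waves(tasks):
    """Group tasks into waves that can run concurrently.

    tasks maps a task name to the set of task names it depends on;
    dependencies may cross node boundaries, which is what lets all
    nodes deploy simultaneously.
    """
    remaining = {name: set(deps) for name, deps in tasks.items()}
    waves = []
    while remaining:
        # Every task whose dependencies are all satisfied can run now.
        ready = sorted(name for name, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        for name in ready:
            del remaining[name]
        for deps in remaining.values():
            deps.difference_update(ready)
    return waves
```

With independent tasks landing in the same wave, total deployment time approaches the length of the critical path instead of the sum of per-role phases, which matches the roughly 2x speedups reported above.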

[openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Jordan Pittier
Hi list,
I understood we need to limit the number of tests and jobs that are run for
each Tempest patch because our resources are not unlimited.

In Tempest, we have 5 multinode experimental jobs:

experimental-tempest-dsvm-multinode-full-dibtest
gate-tempest-dsvm-multinode-full
gate-tempest-dsvm-multinode-live-migration
gate-tempest-dsvm-neutron-multinode-full
gate-tempest-dsvm-neutron-dvr-multinode-full

These jobs largely overlap with the non-multinode jobs. What about tagging
(with a python decorator) each test that really requires multiple nodes, and
only running those tests as part of the multinode jobs?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Baremetal Deploy Ramdisk functional testing

2016-02-16 Thread Maksym Lobur
Thanks Jim for the feedback, see my comments below
> 
> So, as I understand this, the goal is to test the bareon ramdisk without
> bringing up the rest of the services that talk to it?

Correct.

> Currently in ironic-land, the only runtime testing of our ramdisk(s) is
> the integration tests we do with the rest of openstack (the dsvm tests).
> We've considered functional tests of the ironic-python-agent API, but
> haven't much considered writing scripts to boot it on a VM and do
> things.
> 
> Given the ways IPA interacts with Ironic, we'd basically just end up
> re-writing a bunch of Ironic code if we wanted to do this; however it
> might not be too much code, so maybe it's worth it? I'm not sure; I
> guess it isn't a major priority for us right now.

I know IPA uses a somewhat more complex interaction than we do, but still
this might be simpler than it seems.
I understand you about priorities. If you are focusing on simplicity
of the IPA, there's not much benefit from this kind of testing. But once you want
to support more complex deployments, with implicit deployment optimizations
inside the agent, it might add value.

> All this is to say, I guess I'd have to look at the functional test
> framework you're building. I'm not opposed to making it more general,
> and as changing repo names is expensive (requires gerrit downtime), it
> might be worth naming it ramdisk-func-test or similar now just in case. :)
> 
Got it. We'll change the name. If there's a chance it will be reused, we 
don't want to tie it to Bareon.


>>> 
>>> [1] https://wiki.openstack.org/wiki/Bareon
>>> [2] https://blueprints.launchpad.net/bareon/+spec/bareon-functional-testing
>>> [3] http://pastebin.com/mL39QJS6
>>> [4] https://review.openstack.org/#/c/279120/
>>> 
>>> 
>>> Regards,
>>> Max Lobur,
>>> OpenStack Developer, Mirantis, Inc.
>>> 


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-16 Thread Sylvain Bauza



On 16/02/2016 09:30, Sylvain Bauza wrote:



On 16/02/2016 04:09, Alex Xu wrote:



2016-02-16 9:47 GMT+08:00 GHANSHYAM MANN wrote:


Regards
Ghanshyam Mann


On Mon, Feb 15, 2016 at 12:07 PM, Alex Xu wrote:
> If we support 2.x.y, when to bump 'x' is a problem. We don't order the API
> changes right now; the version of an API change is just based on the order
> of patch merges. To support 2.x.y, we would need to bump 'y' first for
> backward-compatible changes, I guess.
>
> As I remember, we said before that new features are the motivation for
> users to upgrade their client to a new API version, whether the new
> version is backward compatible or incompatible. So I guess the initial
> thinking was that we hope users always upgrade their code rather than
> staying at an old version? If we bump 'x' after a lot of 'y' bumps, will
> that lead users to always stop at the 'x' version? And the evolution of
> the API will slow down.
>
> Or we limit it to each release cycle. In each release we bump 'y' first,
> and then bump 'x' when the release ships, even if there is no
> backward-incompatible change in the release. Then we can encourage users
> to upgrade their code. But I still think backward-incompatible API changes
> would be slowed down in development, as they would always need to merge
> after the backward-compatible API change patches.

Yeah, that's true, and it would be more complicated from a development
perspective, which would slow down the evolution of the API. But if we
support x.y, can we still bump x whenever a backward-incompatible change
happens (I mean, before y as well)? Or maybe I'm not getting the issue
you mentioned about always bumping y before x.


If a backward-incompatible change merges before a backward-compatible one, 
then 'y' becomes useless. For example: the initial version is 2.1.0, and we 
have 3 backward-compatible and 3 incompatible changes. If we are unlucky and 
the incompatible changes merge first, we end up at version 2.4.3, so a user 
who wants those backward-compatible changes still has to absorb the 3 
incompatible changes.



I like the idea of distinguishing backward-compatible and incompatible
changes with x and y, which always gives a clear picture of the changes.
But it should not lead users to ignore y. I mean, some really good
backward-compatible changes may get ignored because users start looking
at x only.
For example, "adding an attribute to a resource representation" is a
backward-compatible change (if so), and if it is added as a y bump it
might get ignored by users.

Another way to clearly distinguish backward-compatible and incompatible
changes is through documentation, which was initially discussed during
the microversion specs. Currently the doc has a good description of each
change, but no clear indication of backward compatibility. We could fix
that by adding an explicit [Backward Compatible/Incompatible] flag for
each version in the doc [1].


+1 for documenting whether each change is backward compatible or not.


I'm not usually good at thinking through API versioning, but something 
pinged my brain, so let me know if this is terrible or not.

Why not say, semantically, that:
 - if the API microversion is a multiple of ten, it's a non-backwards 
compatible change;

 - if not, it's backwards-compatible.

If you are at version #29 and add a new backwards-compatible version, 
it would become #31 (and not #30).


That way, you would still have a monotonic increase, which I think was 
an agreement when we discussed microversioning, but it would help the 
users, who would know the semantics and just look at whether a multiple 
of ten lies between the version they use and the version they want (and, 
if so, whether it was implemented).


Call me dumb, it's just a thought.
-Sylvain



One slight improvement could be to consider hundreds and not tens for 
major versions. That would leave 99 'minor' versions between majors, 
which I think is doable.


-S
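The client-side check this scheme enables is trivial. A sketch (illustrative only; the step is configurable so it covers both the tens variant and the hundreds refinement above):

```python
# Sketch of the proposed convention: microversions that are multiples of a
# chosen step (10, or 100 in the refinement) mark backwards-incompatible
# changes; everything else is backwards-compatible. A client can then scan
# for breaking changes between its current version and a target version.
# This is not an actual Nova API, just an illustration of the semantics.

def breaking_changes_between(current, target, step=10):
    """Return the incompatible-version markers in (current, target]."""
    lo, hi = sorted((current, target))
    return [v for v in range(lo + 1, hi + 1) if v % step == 0]

print(breaking_changes_between(29, 31))           # -> [30]
print(breaking_changes_between(12, 19))           # -> []
print(breaking_changes_between(5, 250, step=100)) # -> [100, 200]
```

An empty result means the upgrade path is purely backwards-compatible under the convention; a non-empty one tells the user exactly which markers to read up on.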


>
>
> 2016-02-13 4:55 GMT+08:00 Andrew Laski wrote:
>>
>> Starting a new thread to continue a thought that came up in
>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html.
>> The Nova API microversion framework allows for backwards compatible and
>> backwards incompatible changes but there is no way to programmatically
>> distinguish the two. This means that as a user of the API I need to
>> understand every change between the version I'm using now and a new
>> version I would like to move to in case an intermediate version changes
>> default behaviors or removes something I'm currently using.
>>
>> I would suggest that a more user friendly approach would be to
>> distinguish the two types of changes. Perhaps something like 2.x.y where
>> x is bumped for a backwards incompatible change and y is still
>> monotonically increasing regardless of bumps to x. So if the current
>> version is 2.2.7 a new backwards compatible change would bump to 2.2.8


Re: [openstack-dev] [Nova][API] Does nova API allow the server_id param as DB index?

2016-02-16 Thread Chen CH Ji
+1 for no microversion; it's an internal implementation detail and we should
be free to remove it, since we didn't document it anywhere.

-----GHANSHYAM MANN wrote:-----
To: "OpenStack Development Mailing List (not for usage questions)"
From: GHANSHYAM MANN
Date: 02/16/2016 03:49AM
Subject: Re: [openstack-dev] [Nova][API] Does nova API allow the server_id param as DB index?

Yes, currently Nova supports that for the show/update/delete server APIs etc.
(both v2 and v2.1), and python-novaclient does too. But I think that was old
behaviour, mainly for the EC2 API. I searched the ec2 repo [1] and it gets
instances from Nova using the UUID; I did not find any place where it fetches
by id. But I'm not sure whether any external interface fetches directly from
Nova by 'id'.

Apart from that, maybe some users use 'id' instead of 'uuid', but that was
never recommended or documented anywhere. So in that case, can we remove this
old behaviour without a version bump?

[1] https://github.com/openstack/ec2-api

Regards
Ghanshyam Mann

On Tue, Feb 16, 2016 at 11:24 AM, Anne Gentle wrote:

On Mon, Feb 15, 2016 at 6:03 PM, 少合冯 wrote:

I guess others may ask the same question. I read the Nova API doc, for
example this API:
http://developer.openstack.org/api-ref-compute-v2.1.html#showServer

GET /v2.1/{tenant_id}/servers/{server_id}

Show server details. Request parameters:

  Parameter | Style | Type       | Description
  ----------|-------|------------|-------------------------------------------------
  tenant_id | URI   | csapi:UUID | The UUID of the tenant in a multi-tenancy cloud.
  server_id | URI   | csapi:UUID | The UUID of the server.

But I can get the server by DB index:

  curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6 \
    http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
  {
      "server": {
          "OS-DCF:diskConfig": "MANUAL",
          "OS-EXT-AZ:availability_zone": "nova",
          "OS-EXT-SRV-ATTR:host": "shaohe1",
          "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
          "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
          "OS-EXT-STS:power_state": 1,
          "OS-EXT-STS:task_state": "migrating",
          "OS-EXT-STS:vm_state": "error",
          "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
          "OS-SRV-USG:terminated_at": null,
          ..
      }
  }

and the code really allows it to use the DB index:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939

Nice find. Can you log this as an API bug and we'll triage it -- I can even
help you fix it on the site if you like:
https://bugs.launchpad.net/openstack-api-site/+filebug

Basically, click that link, write a short summary, then copy and paste in
this email's contents; it has lots of good info. Let me know if you'd also
like to fix the bug on the site. And hey, Nova team, if you think it's
actually an API bug, we'll move it over to you.

Thanks for reporting it!
Anne

-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com

