Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-23 Thread Flavio Percoco
On 09/22/2014 09:25 PM, Doug Hellmann wrote:
 
 On Sep 22, 2014, at 12:19 PM, Gordon Sim g...@redhat.com wrote:
 
 On 09/22/2014 03:40 PM, Geoff O'Callaghan wrote:
 That being said I'm not sure why a well constructed zaqar with an rpc
 interface couldn't meet the requirements of oslo.messaging and much more.

 What Zaqar is today and what it might become may of course be different 
 things, but as it stands today, Zaqar relies on polling, which in my opinion 
 is not a natural fit for RPC[1]. Though using an intermediary for 
 routing/addressing can be of benefit, store and forward is not necessary and 
 in my opinion even gets in the way[2].

 Notifications on the other hand can benefit from store and forward and may 
 be less latency sensitive, alleviating the polling concerns.

 One of the use cases I've heard cited for Zaqar is as an inbox for recording 
 certain sets of relevant events sent out by other OpenStack services. In my 
 opinion using oslo.messaging's notification API on the openstack service 
 side of this would seem - to me at least - quite sensible, even if the 
 events are then stored in (or forwarded to) Zaqar and accessed by users 
 through Zaqar's own protocol.
 
 I agree that the notification features of oslo.messaging are more likely to 
 be useful through Zaqar than the RPC API. Our internal notifications may 
 include information we wouldn’t want to leak outside of a cloud, but a 
 notification driver for oslo.messaging that talked to Zaqar and took into 
 account tenant-based addressing in some way might make a lot of sense.
 

I won't get into much detail on this right now but I agree and it was
discussed already - at the Juno summit, I believe.

Thanks,
Flavio

 Doug
 

 

 [1] The latency of an RPC call as perceived by the client is going to depend 
 heavily on the polling frequency; to get lower latency, you'll need to poll 
 more frequently on both the server and the client. However, polling more 
 frequently results in increased load even when no requests are being made.
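
As a rough illustration of the trade-off in [1] (this is purely illustrative, not Zaqar or oslo.messaging code): a reply that becomes ready at a random point within a polling interval waits, on average, half an interval before being noticed.

```python
import random

def mean_polling_delay(poll_interval, trials=10000, seed=42):
    """Estimate the average extra latency polling adds to an RPC reply:
    a reply landing at a uniformly random point within a polling
    interval waits until the next poll to be picked up."""
    rng = random.Random(seed)
    total_wait = 0.0
    for _ in range(trials):
        ready_at = rng.uniform(0.0, poll_interval)  # reply arrives mid-interval
        total_wait += poll_interval - ready_at      # wait until the next poll
    return total_wait / trials

# Halving the interval roughly halves the added latency, but doubles the
# number of (mostly empty) polls issued, i.e. the idle load.
```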

 [2] I am of the view that reliable RPC is best handled by replaying the 
 request from the client when needed, rather than trying to make the request 
 and reply messages durably recorded, replicated and reliably delivered. 
 Doing so is more scalable and simpler. An end-to-end acknowledgement for the 
 request (rather than a broker taking responsibility and acknowledging the 
 request independent of delivery status) makes it easier to detect failures 
 and trigger a resend.
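
The client-side replay described in [2] can be sketched roughly as follows. This is a hypothetical helper, not an actual oslo.messaging API: `send` and `wait_for_reply` are stand-ins, and the server is assumed to deduplicate by request id.

```python
import itertools
import uuid

class RpcTimeout(Exception):
    pass

def call_with_resend(send, wait_for_reply, timeout=2.0, max_attempts=3):
    """Reliable RPC by client-side replay: nothing is durably stored in
    a broker. The end-to-end reply itself is the acknowledgement, and a
    missing reply triggers a resend of the same request id."""
    request_id = str(uuid.uuid4())
    for attempt in itertools.count(1):
        send(request_id)                            # (re)issue the request
        reply = wait_for_reply(request_id, timeout)
        if reply is not None:                       # end-to-end ack received
            return reply
        if attempt >= max_attempts:
            raise RpcTimeout(request_id)
```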


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] To slide or not to slide? OpenStack Bootstrapping Hour...

2014-09-23 Thread Gary Kotton
Hi,
I think that it was very informative and useful. I have forwarded it to a
number of new people starting to work on OpenStack and who were wondering
how to use mock... So two birds with one stone.
Thanks!
Gary

On 9/22/14, 7:27 PM, Jay Pipes jaypi...@gmail.com wrote:

Hi all,

OK, so we had our inaugural OpenStack Bootstrapping Hour last Friday.
Thanks to Sean and Dan for putting up with my rambling about unittest
and mock stuff. And thanks to one of my pugs, Winnie, for, according to
Shrews, looking like she was drunk. :)

One thing we're doing today is a bit of a post-mortem around what worked
and what didn't.

For this first OBH session, I put together some slides [1] that were
referenced during the hangout session. I'm wondering what folks who
watched the video [2] thought about using the slides.

Were the slides useful compared to just providing links to code on the
etherpad [3]?

Were the slides a hindrance to the flow of the OBH session?

Would producing the slides *after* an OBH session be better,
instead of before?

Very much interested in hearing answers/opinions about the above
questions. We're keen to provide the best experience possible, and are
open to any ideas.

Best,
-jay

[1] http://bit.ly/obh-mock-best-practices
[2] http://youtu.be/jCWtLoSEfmw
[3] https://etherpad.openstack.org/p/obh-mock-best-practices

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Thierry Carrez
Hi everyone,

Rather than waste a design summit session about it, I propose we review
the proposed Kilo release cycle schedule on the mailing-list. See
attached PDF for a picture of the proposal.

The hard date the proposal is built around is the next (L) Design
Summit week: May 18-22. That pushes the summit far into May (the farthest
ever). However, with the M design summit very likely to come back to
October, the release date was set to April 23 to smooth the difference.

That makes 3 full weeks between release and design summit (like in
Hong-Kong), allowing for an official off-week on the week of May 4-8.

The rest of the proposal is mostly a no-brainer. Like always, we allow a
longer time for milestone 2, to take into account the end-of-year
holidays. That gives:

Kilo Design Summit: Nov 4-7
Kilo-1 milestone: Dec 11
Kilo-2 milestone: Jan 29
Kilo-3 milestone, feature freeze: March 12
2015.1 (Kilo) release: Apr 23
L Design Summit: May 18-22

All milestones avoid known US holiday weekends. Let me know if I missed
something or if you see major issues with this proposal.

Regards,

-- 
Thierry Carrez (ttx)



kilo.pdf
Description: Adobe PDF document
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] libvirt boot parameters

2014-09-23 Thread Angelo Matarazzo

Hi,
I think that this variable is useful for me
cfg.IntOpt('reboot_timeout',
           default=0,
           help='Automatically hard reboot an instance if it has been '
                'stuck in a rebooting state longer than N seconds. '
                'Set to 0 to disable.'),
cfg.IntOpt('instance_build_timeout',

but it seems that Eli is right: the libvirt driver doesn't implement this 
function


def poll_rebooting_instances(self, timeout, instances):
pass
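
For illustration only, the selection logic a real poll_rebooting_instances might use could look like the following sketch. The pair-based instance representation is invented for the example; nova's actual objects differ.

```python
import time

def instances_needing_hard_reboot(instances, timeout, now=None):
    """Pick instances stuck in a rebooting state longer than `timeout`
    seconds, matching the reboot_timeout semantics quoted above.
    `instances` is a list of (name, reboot_started_at) pairs here,
    purely for illustration."""
    if now is None:
        now = time.time()
    return [name for name, started_at in instances
            if now - started_at > timeout]
```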



BTW I'm developing a feature: booting from the network.
In my scenario the booting instance sometimes pauses because the 
boot source (PXE) cannot answer in time.

Look at
6.4. Installing guest virtual machines with PXE

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6-Beta/html-single/Virtualization_Host_Configuration_and_Guest_Installation_Guide/index.html

I'll investigate reboot_timeout and let you know. Thanks for 
the tips.


BR
Angelo




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Could we please use 1.4.1 for oslo.messaging *now*? (was: Oslo final releases ready)

2014-09-23 Thread Thomas Goirand
On 09/18/2014 10:04 PM, Doug Hellmann wrote:
 All of the final releases for the Oslo libraries for the Juno cycle are 
 available on PyPI. I’m working on a couple of patches to the global 
 requirements list to update the baseline in the applications. In all cases, 
 the final release is a second tag on a previously released version.
 
 - oslo.config - 1.4.0 (same as 1.4.0.0a5)
 - oslo.db - 1.0.0 (same as 0.5.0)
 - oslo.i18n - 1.0.0 (same as 0.4.0)
 - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
 - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
 - oslo.serialization - 1.0.0 (same as 0.3.0)
 - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
 - oslotest - 1.1.0 (same as 1.1.0.0a2)
 - oslo.utils - 1.0.0 (same as 0.3.0)
 - cliff - 1.7.0 (previously tagged, so not a new release)
 - stevedore - 1.0.0 (same as 1.0.0.0a2)
 
 Congratulations and *Thank You* to the Oslo team for doing an amazing job 
 with graduations this cycle!
 
 Doug

Doug,

Here in Debian, I have a *huge* mess with versioning of oslo.messaging.

tl;dr: Because of that version number mess, please add a tag 1.4.1 to
oslo.messaging now and use it everywhere instead of 1.4.0.

Longer version:

What happened is that Chuck released a wrong version of Keystone (i.e.
trunk rather than the stable branch). Therefore, I uploaded a
1.4.0 beta version of oslo.messaging to Debian Unstable/Jessie,
because I thought the Icehouse version of Keystone needed it. (Sid /
Jessie is supposed to carry Icehouse stuff only.)

That would have been about fine, if only I hadn't then upgraded
oslo.messaging to the latest version in Sid, because I didn't want to keep
a beta release in Jessie. But this latest version depends on
oslo.config 1.4.0.0~a5, and probably on even more.

So I reverted the 1.4.0.0 upload in Debian Sid by uploading version
1.4.0.0+really+1.3.1, which, as its name suggests, really is a 1.3.1
version (I did that to avoid introducing an epoch and needing to re-upload
updates of all reverse dependencies of oslo.messaging). That's fine,
we're covered for Sid/Jessie.

But then, the Debian Experimental version of oslo.messaging is lower
than the one in Sid/Jessie, so I have breakage there.

If we declare a new 1.4.1, and have it reflected in our
global-requirements.txt, then everything goes back in order for me, and I
get back on my feet. Otherwise, I'll have to deal with this and make up
fake version numbers which will not match anything actually released by
OpenStack, which may lead to even more mistakes.

So, could you please at least:
- add a git tag 1.4.1 to oslo.messaging right now, matching 1.4.0

This will make sure that the 1.4.1 number is never reused for anything
else, and that I'm fine using it in Debian Experimental, which will then
be higher than the version in Sid.

And then, optionally, it would help me if you could (though I can live
without it):
- Use 1.4.1 for oslo.messaging in global-requirements.txt
- Have every project that needs 1.4.0 bump to 1.4.1 as well

This would be a lot less work for me than declaring an epoch in the
oslo.messaging package and fixing all reverse dependencies. The affected
packages in Juno for me are:
- ceilometer
- cinder
- designate
- glance
- heat
- ironic
- keystone
- neutron
- nova
- oslo-config
- oslo.rootwrap
- oslo.i18n
- python-pycadf

I'd have to upload updates for all of them even if we use 1.4.1 instead
of an epoch (eg: 1:1.4.0), but using 1.4.1 is still much better for me
than an epoch. Epochs are ugly (because they're not visible in file
names), confusing (it's easy to forget them), and non-reversible, so
I'd like to avoid one if possible.

I'm sorry for the mess and added work.
Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-23 Thread Thierry Carrez
Adam Young wrote:
 OpenStack owes you more than most people realize.

+1

Dolph did a great job of keeping the fundamental piece that is Keystone
safe from a release management perspective, by consistently hitting all
the deadlines, giving time for other projects to safely build on it.

 Don't you dare pull a Joe Heck and disappear on us now.

:)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecating localfs?

2014-09-23 Thread Richard W.M. Jones
On Tue, Sep 23, 2014 at 09:53:36AM +1000, Michael Still wrote:
 Hi.
 
 I know we've been talking about deprecating nova.virt.disk.vfs.localfs
 for a long time, in favour of wanting people to use libguestfs
 instead. However, I can't immediately find any written documentation
 for when we said we'd do that thing.
 
 Additionally, this came to my attention because Ubuntu 14.04 is
 apparently shipping a libguestfs old enough to cause us to emit the
 "falling back to localfs" warning, so I think we need Ubuntu to catch
 up before we can do this thing.
 
 So -- how about we remove localfs early in Kilo to give Canonical a
 release to update libguestfs?

A few randomly related points:

- libguestfs 1.26 in Debian (and eventually in Ubuntu) finally
  gets rid of 'update-guestfs-appliance'. 

- Unfortunately Ubuntu still has the kernel permissions bug:

  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725

  Fedora/RHEL has none of these issues.

- There are a couple of easy to fix bugs that would greatly improve
  libguestfs usability in OpenStack:

  (1) Don't throw away debugging information:
  https://bugs.launchpad.net/nova/+bug/1279857

  (2) [Don't think there's a bug# for this] The
  libvirt_inject_partition parameter doesn't adequately model what
  libguestfs can do for guests.  Plus it's a global setting and ought
  to be a glance setting (or per disk/per template anyway).
  libguestfs has a rich API for inspecting guests, and that cannot be
  modelled in a single integer.
  http://libguestfs.org/guestfs.3.html#inspection

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-23 Thread Gordon Sim

On 09/22/2014 05:58 PM, Zane Bitter wrote:

On 22/09/14 10:11, Gordon Sim wrote:

As I understand it, pools don't help scale a given queue, since all the
messages for that queue must be in the same pool. At present, traffic
through different Zaqar queues consists of essentially orthogonal
streams. Pooling can help scale the number of such orthogonal streams,
but to be honest, that's the easier part of the problem.


But I think it's also the important part of the problem. When I talk
about scaling, I mean 1 million clients sending 10 messages per second
each, not 10 clients sending 1 million messages per second each.


I wasn't really talking about high throughput per producer (which I 
agree is not going to be a good fit), but about e.g. a large number of 
subscribers for the same set of messages, e.g. publishing one message 
per second to 10,000 subscribers.


Even at much smaller scale, expanding from 10 subscribers to, say, 100 
seems relatively modest, but the subscriber-related load would increase 
by a factor of 10. I think handling these sorts of changes is also an 
important part of the problem (though perhaps not a part that Zaqar is 
focused on).



When a user gets to the point that individual queues have massive
throughput, it's unlikely that a one-size-fits-all cloud offering like
Zaqar or SQS is _ever_ going to meet their needs. Those users will want
to spin up and configure their own messaging systems on Nova servers,
and at that kind of size they'll be able to afford to. (In fact, they
may not be able to afford _not_ to, assuming per-message-based pricing.)


[...]

If scaling the number of communicants on a given communication channel
is a goal however, then strict ordering may hamper that. If it does, it
seems to me that this is not just a policy tweak on the underlying
datastore to choose the desired balance between ordering and scale, but
a more fundamental question on the internal structure of the queue
implementation built on top of the datastore.


I agree with your analysis, but I don't think this should be a goal.


I think it's worth clarifying that alongside the goals since scaling can 
mean different things to different people. The implication then is that 
there is some limit in the number of producers and/or consumers on a 
queue beyond which the service won't scale and applications need to 
design around that.



Note that the user can still implement this themselves using
application-level sharding - if you know that in-order delivery is not
important to you, then randomly assign clients to a queue and then poll
all of the queues in the round-robin. This yields _exactly_ the same
semantics as SQS.
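
The application-level sharding described above can be sketched as follows. This is only an illustration of the idea, with a plain in-memory dict of lists standing in for real Zaqar queues: producers stick to a randomly assigned shard, a consumer polls the shards round-robin, per-shard FIFO is kept, and global ordering is given up.

```python
import random

class ShardedQueueClient:
    """Sketch of application-level sharding over N queues, approximating
    SQS-like semantics on top of per-queue FIFO queues."""

    def __init__(self, n_shards, seed=0):
        self.shards = [[] for _ in range(n_shards)]
        self._rng = random.Random(seed)
        self._next = 0  # round-robin cursor for the consumer

    def client_queue(self):
        """Each producer picks, and sticks to, a random shard."""
        return self._rng.randrange(len(self.shards))

    def post(self, shard, message):
        self.shards[shard].append(message)

    def poll(self):
        """Round-robin over shards; return the next message or None."""
        for _ in range(len(self.shards)):
            shard = self.shards[self._next]
            self._next = (self._next + 1) % len(self.shards)
            if shard:
                return shard.pop(0)
        return None
```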


You can certainly leave the problem of scaling in this dimension to the 
application itself by having them split the traffic into orthogonal 
streams or hooking up orthogonal streams to provide an aggregated stream.


A true distributed queue isn't entirely trivial, but it may well be that 
most applications can get by with a much simpler approximation.


Distributed (pub-sub) topic semantics are easier to implement, but if 
the application is responsible for keeping the partitions connected, 
then it also takes on part of the burden for availability and redundancy.



The reverse is true of SQS - if you want FIFO then you have to implement
re-ordering by sequence number in your application. (I'm not certain,
but it also sounds very much like this situation is ripe for losing
messages when your client dies.)
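
The client-side re-ordering mentioned above might look like the following sketch (illustrative only): buffer out-of-order messages and release them strictly by sequence number. Note that anything still buffered when the client dies is lost, which is exactly the message-loss risk noted in the parenthetical.

```python
class Resequencer:
    """Buffer messages arriving out of order and release them strictly
    by ascending sequence number (client-side FIFO reconstruction)."""

    def __init__(self, first_seq=0):
        self.expected = first_seq
        self.pending = {}

    def accept(self, seq, message):
        """Accept message `seq`; return the (possibly empty) list of
        messages that are now deliverable in order."""
        self.pending[seq] = message
        ready = []
        while self.expected in self.pending:
            ready.append(self.pending.pop(self.expected))
            self.expected += 1
        return ready
```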

So the question is: in which use case do we want to push additional
complexity into the application? The case where there are truly massive
volumes of messages flowing to a single point?  Or the case where the
application wants the messages in order?


I think the first case is more generally about increasing the number of 
communicating parties (publishers or subscribers or both).


For competing consumers ordering isn't usually a concern since you are 
processing in parallel anyway (if it is important you need some notion 
of message grouping within which order is preserved and some stickiness 
between group and consumer).
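
The group-to-consumer stickiness just mentioned is commonly done by hashing the group key, as in this sketch (an illustration of the general technique, not any particular system's implementation): every message in a group lands on the same consumer, preserving per-group order while distinct groups are still processed in parallel.

```python
import hashlib

def consumer_for(group_key, consumers):
    """Map a message group key to one consumer deterministically, so
    per-group ordering is preserved while groups spread across the
    consumer pool."""
    digest = hashlib.sha256(group_key.encode("utf-8")).digest()
    return consumers[int.from_bytes(digest[:4], "big") % len(consumers)]
```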


For multiple non-competing consumers the choice needn't be as simple as 
total ordering or no ordering at all. Many systems quite naturally only 
define partial ordering which can be guaranteed more scalably.


That's not to deny that there are indeed cases where total ordering may 
be required however.



I'd suggest both that the former applications are better able to handle
that extra complexity and that the latter applications are probably more
common. So it seems that the Zaqar team made a good decision.


If that was a deliberate decision it would be worth clarifying in the 
goals. It seems to be a different conclusion from that reached by SQS 
and as such is part of the answer to the question that began the thread.



(Aside: it follows that Zaqar probably should have a maximum throughput
quota for each queue; or that it should report usage 

[openstack-dev] [requirements][horizon] Dependency freeze exceptions: django-openstack-auth

2014-09-23 Thread Akihiro Motoki
Hi,

I would like to request dependency freeze exceptions for django-openstack-auth.
https://review.openstack.org/#/c/123101/

django-openstack-auth is developed along with Horizon, and the recent
1.1.7 release of django-openstack-auth contains good fixes which affect
Juno Horizon, especially the following.

- https://bugs.launchpad.net/horizon/+bug/1308918
  django-openstack-auth fix is required to fix the logout bug in
Horizon (targeted to Juno-RC1). The bug is painful for users as they
are forced to log-in twice.
- https://bugs.launchpad.net/django-openstack-auth/+bug/1324948
  It fixes a bug where keystone URLs were rewritten if v3 or v2.0
was found in the hostname.

Thanks,
Akihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] PTL Candidacy

2014-09-23 Thread Eoghan Glynn

Folks,

I'd like to continue serving as Telemetry PTL for a second cycle.

When I took on the role for Juno, I saw some challenges facing the
project that would take multi-cycle efforts to resolve, so I'd like to
have the opportunity to see that move closer to completion.

Over Juno, our focus as a project has necessarily been on addressing
the TC gap analysis. We've been successful in ensuring that the agreed
gap coverage tasks were completed. The team made great strides in
making the SQLAlchemy driver a viable option for PoCs and small
deployments, getting meaningful Tempest and Grenade coverage in place,
and writing quality user- and operator-oriented documentation. This
has addressed a portion of our usability debt, but as always we need
to continue chipping away at that.

In parallel, an arms-length effort was kicked off to look at paying
down accumulated architectural debt in Ceilometer via a new approach
to more lightweight timeseries data storage via the Gnocchi project.
This was approached in such a way as to minimize the disruption to
the core project.

My vision for Kilo would be to shift our focus a bit more onto such
longer-terms strategic efforts. Clearly we need to complete the work
on Gnocchi and figure out the migration and co-existence issues.

In addition, we started a conversation with the Monasca folks at the
Juno summit on the commonality between the two projects. Over Kilo I
would like to broaden and deepen the collaboration that was first
mooted in Atlanta, by figuring out specific incremental steps around
converging some common functional areas such as alarming. We can also
learn from the experience of the Monasca project in getting the best
possible performance out of TSD storage in InfluxDB, or achieving very
high throughput messaging via Apache Kafka.

There are also cross-project debts facing our community that we need
to bring some of our focus to IME. In particular, I'm thinking here
about the move towards taking integration test coverage back out of
Tempest and into new project-specific functional test suites. Also the
oft-proposed, but never yet delivered-upon, notion of contractizing
cross-project interactions mediated by notifications.

Finally, it's worth noting that our entire community has a big
challenge ahead of it in terms of the proposed move towards a new
layering structure. If re-elected, I would see myself as an active
participant in that discussion, ensuring the interests of the project
are positively represented.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Christopher Yeoh
On Mon, 22 Sep 2014 09:29:26 +
Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:
 
 Before discussing how to implement, I'd like to consider what we
 should implement. IIUC, the purpose of v3 API is to make consistent
 API with the backwards incompatible changes. Through huge discussion
 in Juno cycle, we knew that backwards incompatible changes of REST
 API would be huge pain against clients and we should avoid such
 changes as possible. If new APIs which are consistent in Nova API
 only are inconsistent for whole OpenStack projects, maybe we need to
 change them again for whole OpenStack consistency.

So I think there are three different aspects to microversions which we
can consider quite separately:

- The format of the client header and what the version number means.
  Eg is the version number of the format X.Y.Z, what do we increment
  when we make a bug fix, what do we increment when we make a backwards
  compatible change and what do we increment when we make backwards
  incompatible change.

  Also how does a client request experimental APIs (I believe we have
  consensus that we really need this to avoid backwards incompatible
  changes as much as possible as it allows more testing before
  guaranteeing backwards compatibility)

  I believe that we can consider this part separately from the next two
  issues.

- The implementation on the nova api side. Eg how do we cleanly handle
  supporting multiple versions of the api based on the client header
  (or lack of it which will indicate v2 compatibility. I'll respond
  directly on Alex's original post

- What we are going to use the microversions API feature to do. I think
  they fall under a few broad categories: 

  - Backwards compatible changes. We desperately need a mechanism that
allows us to make backwards compatible changes (eg add another
parameter to a response) without having to add another dummy
extension.

  - Significant backwards incompatible changes. The Tasks API and server
diagnostics API are probably the best examples of this. 

  - V3 like backwards incompatible changes (consistency fixes).
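
For the first aspect (the header format and what the version number means), a minimal sketch might look like this. The X.Y scheme and the range check are assumptions for illustration, since the exact header format was still under discussion at this point:

```python
def parse_version(header_value):
    """Parse a microversion string like '2.13' into a tuple that
    compares correctly ('2.9' < '2.10' numerically, unlike strings)."""
    major, minor = header_value.split(".")
    return (int(major), int(minor))

def supports(requested, min_version, max_version):
    """True if the client-requested microversion falls within the
    server's supported range: the check a versioned API handler
    would make before dispatching."""
    return (parse_version(min_version)
            <= parse_version(requested)
            <= parse_version(max_version))
```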

I think getting consensus over backwards compatible changes will be
straightforward. However, given the previous v2/v3 discussions I don't
think we will be able to get consensus over doing all or most of the
consistency-type fixes, even using microversions, in the short term.
That's because with microversions you get all the changes applied before
the version that you choose. So from a client application's point of view
it's just as much work as the V2 to V3 API transition.

I don't think that means we need to put all of these consistency
changes off forever, though. We need to make backwards incompatible
changes in order to implement the Tasks API and new server
diagnostics API the way we want to. The Tasks API will eventually cover
quite a few interfaces, and while breaking backwards compatibility
with, say, the create server API, we can also fix consistency issues in
that API at the same time. Clients will need to make changes to their app
anyway if they want to take advantage of the new features (or they can
just continue to use the old non-tasks-enabled API).

So as we slowly make backwards incompatible changes to the API for
other reasons we can also fix up other issues. Other consistency fixes
we can propose on a case by case basis and the user community can have
input as to whether the cost (app rework) is worth it without getting a
new feature at the same time.

But I think it's clear that we *need* the microversions mechanism, so we
don't need to decide beforehand exactly what we're going to use it for
first. I think it's more important that we get a nova-spec
approved for the first two parts - what it looks like from the
client point of view, and how we're going to implement it.

Regards,

Chris

 
 For avoiding such situation, I think we need to define what is
 consistent REST API across projects. According to Alex's blog, The
 topics might be
 
  - Input/Output attribute names
  - Resource names
  - Status code
 
 The following are hints for making consistent APIs from Nova v3 API
 experience, I'd like to know whether they are the best for API
 consistency.
 
 (1) Input/Output attribute names
 (1.1) These names should be snake_case.
   eg: imageRef -> image_ref, flavorRef -> flavor_ref, hostId -> host_id
 (1.2) These names should contain extension names if they are
   provided in case of some extension loading.
   eg: security_groups -> os-security-groups:security_groups
       config_drive -> os-config-drive:config_drive
 (1.3) Extension names should consist of hyphens and lower-case chars.
   eg: OS-EXT-AZ:availability_zone -> os-extended-availability-zone:availability_zone
       OS-EXT-STS:task_state -> os-extended-status:task_state
 (1.4) Extension names should contain the prefix os- if the extension
   is not core.
   eg: rxtx_factor -> os-flavor-rxtx:rxtx_factor
       os-flavor-access:is_public -> flavor-access:is_public
       (flavor-access extension became core)
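
Rule (1.1) above can be sketched as a simple converter. This handles the mechanical case change only; note that some of the quoted examples (e.g. imageRef -> image_ref) were really curated renames in nova, not automatic conversions, so a real migration needs an explicit mapping.

```python
import re

def to_snake_case(name):
    """Convert a camelCase attribute name to snake_case, e.g.
    hostId -> host_id. Two passes handle runs of capitals followed
    by lower-case letters as well as simple word boundaries."""
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return s.lower()
```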

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Thierry Carrez
Devananda van der Veen wrote:
 On Mon, Sep 22, 2014 at 2:27 PM, Doug Hellmann d...@doughellmann.com wrote:
 On Sep 22, 2014, at 5:10 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 One of the primary effects of integration, as far as the release
 process is concerned, is being allowed to co-gate with other
 integrated projects, and having those projects accept your changes
 (integrate back with the other project). That shouldn't be a TC

 The point of integration is to add the projects to the integrated *release*, 
 not just the gate, because the release is the thing we have said is 
 OpenStack. Integration was about our overall project identity and 
 governance. The testing was a requirement to be accepted, not a goal.
 
 We have plenty of things which are clearly part of OpenStack, and yet
 which are not part of the Integrated Release. Oslo. Devstack. Zuul...
 As far as I can tell, the only time when integrated release equals
 the thing we say is OpenStack is when we're talking about the
 trademark.

The main goal of incubation, as we did it in the past cycles, is a
learning period where the new project aligns enough with the existing
ones so that it integrates with them (Horizon shows Sahara dashboard)
and won't break them around release time (stability, co-gate, respect of
release deadlines).

If we have a strict set of projects in layer #1, I don't see the point
of keeping incubation. We wouldn't add new projects to layer #1 (only
project splits which do not really require incubations), and additions
to the big tent are considered on social alignment only (are you
vaguely about cloud and do you follow the OpenStack way). If there is
nothing to graduate to, there is no need for incubation.

 Integration was about our overall project identity and governance. The 
 testing was a requirement to be accepted, not a goal.
 
 Project identity and governance are presently addressed by the
 creation of Programs and a fully-elected TC.  Integration is not
 addressing these things at all, as far as I can tell, though I agree
 that it was initially intended to.
 
 If there is no incubation process, and only a fixed list of projects will be 
 in that new layer 1 group, then do contributors to the other projects have 
 ATC status and vote for the TC? What is the basis for the TC accepting any 
 responsibility for the project, and for the project agreeing to the TC’s 
 leadership?
 
 I think a good basis for this is simply whether the developers of the
 project are part of our community, doing things in the way that we do
 things, and want this to happen. Voting and ATC status is already
 decoupled [0] from the integrated gate and the integrated release --
 it's based on the accepted list of Programs [1], which actually has
 nothing to do with incubation or integration [2].

In Monty's proposal, ATC status would be linked to contributions to the
big tent. Projects apply to become part of it, subject themselves to the
oversight of the Technical Committee, and get the right to elect TC
members in return.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Thierry Carrez
Robert Collins wrote:
 On 19 September 2014 22:29, Thierry Carrez thie...@openstack.org wrote:
 ...
 current Heat team is not really interested in maintaining them. What's
 the point of being under the same program then ? And TripleO is not the
 only way to deploy OpenStack, but its mere existence (and name)
 prevented other flowers from blooming in our community.
 
 So I've a small issue here - *after* TripleO became an official
 program Tuskar was started - and we embraced and shuffled things to
 collaborate. There's a tripleo puppet repository, an ansible one
 coming soon I believe, and I'm hearing rumours of a spike using
 another thing altogether (whose thunder I don't want to steal so I'm
 going to be vague :)). We've collaborated to a degree with Fuel and
 Crowbar: I am not at all sure we've prevented other flowers blooming -
 and I hate the idea that we have done that.

I agree that you went out of your way to be inclusive, but having a
single PTL for a group of competing alternatives is a bit weird. As far
as preventing blooming, that's something we've heard from the Board of
Directors (as part of an objection to calling the TripleO program
OpenStack Deployment).

 ...
 ## The release and the development cycle

 You touch briefly on the consequences of your model for the common
 release and our development cycle. Obviously we would release the ring
 0 projects in an integrated manner on a time-based schedule.
 
 I'm not at all sure we need to do that - I've long been suspicious of
 the coordinated release. I see benefits to users in being able to grab
 a new set of projects all at once, but they can do that irrespective
 of our release process, as long as:
 
 *) we do releases
 *) we do at least one per project per 6 month period
 
 Tying all our things together makes for hugely destabilising waves of
 changes and rushes: if we aimed at really smooth velocity and frequent
 independent releases I think we could do a lot better: contracts and
 interfaces are *useful* things for large scale architectures, and we
 need to stop kidding ourselves - OpenStack is not a lean little
 beastie anymore: it's a large-scale distributed system. Swift is doing
 exactly the right thing today - I'd like to see more of that.

Note that I'm only advocating that Monty's layer #1 things would be
released in an integrated manner on a time-based schedule. That's not
all our things. All the other things would be released as-needed. That
lets us focus on cleaning up interfaces between layer #1 and other
things first. Swift is doing the right thing today because it's
loosely integrated with layer #1 (and has a clean interface for doing
so). So Swift is doing the right thing for a non-layer#1 project.

Once our layer#1-internal interfaces are simple/stable enough that we
can safely compose something that works at any time by picking the
latest version of everything, we can discuss the benefits and the
drawbacks of the common release model. I still think there are huge
community benefits to sharing the same cycle, and gathering around a
common release that can be advertised and consumed downstream... as
far as layer #1 is concerned.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-23 Thread Steven Hardy
On Fri, Sep 19, 2014 at 06:54:27PM -0400, Zane Bitter wrote:
 On 09/09/14 05:52, Steven Hardy wrote:
 Hi Sahdev,
 
 On Tue, Sep 02, 2014 at 11:52:30AM -0400, Sahdev P Zala wrote:
 Hello guys,
 
 As you know, the heat-translator project was started early this year with
 an aim to create a tool to translate non-Heat templates to HOT. It is a
 StackForge project licensed under Apache 2. We have made good progress
 with its development and a demo was given at the OpenStack 2014 Atlanta
 summit during a half-a-day session that was dedicated to the heat-translator
 project and related TOSCA discussion. Currently the development and
 testing is done with the TOSCA template format but the tool is designed to
 be generic enough to work with templates other than TOSCA. There are five
 developers actively contributing to the development. In addition, all
 current Heat core members are already core members of the heat-translator
 project.
 
 Recently, I attended the Heat Mid Cycle Meet Up for Juno in Raleigh and
 updated the attendees on the heat-translator project and ongoing progress.
 I also asked everyone about formal adoption of the project into
 python-heatclient and the consensus was that it is the right thing to do.
 Also when the project was started, the initial plan was to make it
 available in python-heatclient. Hereby, the heat-translator team would
 like to request that the heat-translator project be adopted by the
 python-heatclient/Heat program.
 
 Obviously I wasn't at the meetup, so I may be missing some context here,
 but can you answer some questions please?
 
 - Is the scope for heat-translator only TOSCA simple-profile, or also the
   original, more heavyweight TOSCA?
 
 - If it's only tosca simple-profile, has any thought been given to moving
towards implementing support via a template parser plugin, rather than
baking the translation into the client?
 
 One idea we discussed at the meetup was to use the template-building code
 that we now have in Heat for building the HOT output from the translator -
 e.g. the translator would produce ResourceDefinition objects and add them to
 a HOTemplate object.
 
 That would actually get us a long way toward an implementation of a template
 format plugin (which basically just has to spit out ResourceDefinition
 objects). So maybe that approach would allow us to start in
 python-heatclient and easily move it later into being a full-fledged
 template format plugin in Heat itself.
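The approach Zane sketches -- a translator emitting ResourceDefinition objects that a HOT template object collects -- might look roughly like the following. All class names here are hypothetical stand-ins for illustration, not Heat's actual API:

```python
class ResourceDefinition:
    """Minimal stand-in for a parsed resource (hypothetical, not Heat's class)."""
    def __init__(self, name, resource_type, properties=None):
        self.name = name
        self.resource_type = resource_type
        self.properties = properties or {}


class HOTemplate:
    """Collects resource definitions and renders a HOT-style dict."""
    def __init__(self):
        self.resources = {}

    def add(self, rd):
        self.resources[rd.name] = rd

    def render(self):
        return {
            "heat_template_version": "2013-05-23",
            "resources": {
                name: {"type": rd.resource_type, "properties": rd.properties}
                for name, rd in self.resources.items()
            },
        }


# A toy "translation" step: a TOSCA-ish node becomes a HOT resource.
tosca_node = {"name": "my_server", "type": "tosca.nodes.Compute",
              "properties": {"flavor": "m1.small"}}
tmpl = HOTemplate()
tmpl.add(ResourceDefinition(tosca_node["name"], "OS::Nova::Server",
                            tosca_node["properties"]))
hot = tmpl.render()
```

The point of the design is that anything able to spit out ResourceDefinition-like objects could later be repackaged as a template format plugin without rewriting the translation logic.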
 
 While I see this effort as valuable, integrating the translator into the
 client seems the worst of all worlds to me:
 
 - Any users/services not interfacing with heat via python-heatclient can't use 
 it
 
 Yep, this is a big downside (although presumably we'd want to build in a way
 to just spit out the generated template that can be used by other clients).
 
 On the other hand, there is a big downside to having it (only) in Heat also
 - you're dependent on the operator deciding to provide it.
 
 - You preempt the decision about integration with any higher level services,
   e.g. Mistral, Murano, Solum, if you bake in the translator at the
heat level.
 
 Not sure I understand this one.

I meant if non-simple TOSCA was in scope, would it make sense to bake the
translation in at the heat level, when there are aspects of the DSL which
we will never support (but some higher layer might).

Given Sahdev's response saying simple-profile is all that is currently in
scope, it's probably a non-issue, I just wanted to clarify if heat was the
right place for this translation.

Steve



Re: [openstack-dev] [Openstack] No one replying on tempest issue?Please share your experience

2014-09-23 Thread Nikesh Kumar Mahalka
Hi,
I am able to do all volume operations through the dashboard and CLI
commands, but when I run the tempest tests, some of them fail.
For contributing a Cinder volume driver for my client, do all
tempest tests need to pass?

Ex:
1)
./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2 tests
fail.

But when I run the individual tests in test_volumes_snapshots, all
tests pass.

2)
./run_tempest.sh
tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
This also fails.



Regards
Nikesh

On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 Hi Nikesh,

  -Original Message-
  From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
  Sent: Saturday, September 20, 2014 9:49 PM
  To: openst...@lists.openstack.org; OpenStack Development Mailing List
 (not for usage questions)
  Subject: Re: [Openstack] No one replying on tempest issue?Please share
 your experience
 
  Still I did not get any reply.

 Jay has already replied to this mail, please check the nova-compute
 and cinder-volume log as he said[1].

 [1]:
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html

  Now I ran the command below:
  ./run_tempest.sh
 tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot
 
  and I am getting a test failure.
 
 
  Actually,after analyzing tempest.log,i found that:
  during creation of a volume from a snapshot, tearDownClass is called and it
 is deleting the snapshot before creation of the volume
  and my test is getting failed.

 I guess the failure you mentioned above is:

 2014-09-20 00:42:12.519 10684 INFO tempest.common.rest_client
 [req-d4dccdcd-bbfa-4ddf-acd8-5a7dcd5b15db None] Request
 (VolumesSnapshotTest:tearDownClass): 404 GET

 http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/snapshots/71d3cad4-440d-4fbb-8758-76da17b6ace6
 0.029s

 and

 2014-09-20 00:42:22.511 10684 INFO tempest.common.rest_client
 [req-520a54ad-7e0a-44ba-95c0-17f4657bc3b0 None] Request
 (VolumesSnapshotTest:tearDownClass): 404 GET

 http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/volumes/7469271a-d2a7-4ee6-b54a-cd0bf767be6b
 0.034s

 right?
 If so, that is not a problem.
 VolumesSnapshotTest creates two volumes, and the tearDownClass checks these
 volumes' deletion by polling the volume status until 404 (NotFound) [2].

 [2]:
 https://github.com/openstack/tempest/blob/master/tempest/api/volume/base.py#L128
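The tearDownClass behaviour described here -- polling until a 404 proves the resource is gone -- can be sketched like this. This is a simplified illustration with a fake client, not the actual tempest implementation:

```python
import time


class NotFound(Exception):
    """Raised when a GET on the resource returns 404."""


def wait_for_deletion(get_status, resource_id, timeout=60, interval=1):
    """Poll a resource until the backend answers 404 (i.e. it is gone)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            get_status(resource_id)   # a 404 raises NotFound
        except NotFound:
            return True               # deletion confirmed
        time.sleep(interval)
    return False                      # still present at the deadline


# Simulated backend: the volume disappears after three polls.
calls = {"n": 0}

def fake_get_status(resource_id):
    calls["n"] += 1
    if calls["n"] >= 3:
        raise NotFound(resource_id)
    return "deleting"

print(wait_for_deletion(fake_get_status, "7469271a", interval=0))  # → True
```

So the 404 GETs in the log above are expected: they are the success signal of the cleanup loop, not an error.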

  I deployed a juno devstack setup for a cinder volume driver.
  I changed cinder.conf file and tempest.conf file for single backend and
 restarted cinder services.
  Now i ran tempest test as below:
  /opt/stack/tempest/run_tempest.sh
 tempest.api.volume.test_volumes_snapshots
 
  I am getting below output:
   Traceback (most recent call last):
File
 /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py, line
 176, in test_volume_from_snapshot
  snapshot = self.create_snapshot(self.volume_origin['id'])
File /opt/stack/tempest/tempest/api/volume/base.py, line 112, in
 create_snapshot
  'available')
File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 126, in wait_for_snapshot_status
  value = self._get_snapshot_status(snapshot_id)
File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 99, in _get_snapshot_status
  snapshot_id=snapshot_id)
  SnapshotBuildErrorException: Snapshot
 6b1eb319-33ef-4357-987a-58eb15549520 failed to build and is in
  ERROR status

 What happens if you run the same operations as Tempest by hand in your
 environment, like the following?

 [1] $ cinder create 1
 [2] $ cinder snapshot-create id of the created volume at [1]
 [3] $ cinder create --snapshot-id id of the created snapshot at [2] 1
 [4] $ cinder show id of the created volume at [3]

 Please check whether the status of the created volume at [3] is available
 or not.

 Thanks
 Ken'ichi Ohmichi

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-23 Thread Flavio Percoco
On 09/23/2014 05:13 AM, Clint Byrum wrote:
 Excerpts from Joe Gordon's message of 2014-09-22 19:04:03 -0700:

[snip]


 To me this is less about valid or invalid choices. The Zaqar team is
 comparing Zaqar to SQS, but after digging into the two of them, zaqar
 barely looks like SQS. Zaqar doesn't guarantee what IMHO are the most
 important parts of SQS: the message will be delivered and will never be
 lost by SQS. Zaqar doesn't have the same scaling properties as SQS. Zaqar
 is aiming for low latency per message, SQS doesn't appear to be. So if
 Zaqar isn't SQS what is Zaqar and why should I use it?

 
 I have to agree. I'd like to see a simple, non-ordered, high latency,
 high scale messaging service that can be used cheaply by cloud operators
 and users. What I see instead is a very powerful, ordered, low latency,
 medium scale messaging service that will likely cost a lot to scale out
 to the thousands of users level.

I don't fully agree :D

Let me break the above down into several points:

* Zaqar team is comparing Zaqar to SQS: True, we're comparing to the
*type* of service SQS is but not *all* the guarantees it gives. We're
not working on an exact copy of the service but on a service capable of
addressing the same use cases.

* Zaqar is not guaranteeing reliability: This is not true. Yes, the
current default write concern for the mongodb driver is `acknowledge`
but that's a bug, not a feature [0] ;)
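For illustration only: the reliability difference hinges on the MongoDB write concern. Roughly, it is the difference between the two URI settings below -- the section and option names are sketched from a typical zaqar.conf of that era and are not authoritative:

```ini
# Illustrative zaqar.conf storage section -- names are an assumption.
[drivers:storage:mongodb]
# w=1: acknowledged by the primary only; a primary failure can lose messages.
uri = mongodb://mongo1,mongo2,mongo3/?replicaSet=zaqar&w=1

# w=majority: acknowledged by a majority of the replica set, so a single
# node failure does not lose acknowledged messages.
# uri = mongodb://mongo1,mongo2,mongo3/?replicaSet=zaqar&w=majority
```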

* Zaqar doesn't have the same scaling properties as SQS: What are SQS
scaling properties? We know they have a big user base, we know they have
lots of connections, queues and what not but we don't have numbers to
compare ourselves with.

* Zaqar is aiming for low latency per message: This is not true and I'd
be curious to know where this came from. A couple of things to consider:

- First and foremost, low latency is a very relative measure and it
depends on each use-case.
- The benchmarks Kurt did were purely informative. I believe it's good
to do them every once in a while but this doesn't mean the team is
mainly focused on that.
- Not being focused on 'low-latency' does not mean the team will
overlook performance.

* Zaqar has FIFO and SQS doesn't: FIFO won't hurt *your use-case* if
ordering is not a requirement but the lack of it does when ordering is a
must.

* Scaling out Zaqar will cost a lot: In terms of what? I'm pretty sure
it's not for free but I'd like to understand this point better and
figure out a way to improve it, if possible.

* If Zaqar isn't SQS then what is it? Why should I use it?: I don't
believe Zaqar is SQS as I don't believe nova is EC2. Do they share
similar features and provide similar services? Yes. Does that mean they
can address similar use cases, and hence similar users? Yes.

In addition to the above, I believe Zaqar is a simple service, easy to
install and to interact with. From a user perspective the semantics are
few and the concepts are neither new nor difficult to grasp. From an
operator's perspective, I don't believe it adds tons of complexity. It
does require the operator to deploy a replicated storage environment but
I believe all services require that.

Cheers,
Flavio

P.S: Sorry for my late answer or lack of it. I lost *all* my emails
yesterday and I'm working on recovering them.

[0] https://bugs.launchpad.net/zaqar/+bug/1372335

-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [requirements][horizon] Dependency freeze exceptions: django-openstack-auth

2014-09-23 Thread Thierry Carrez
Akihiro Motoki wrote:
 I would like to request dependency freeze exceptions for 
 django-openstack-auth.
 https://review.openstack.org/#/c/123101/

This arguably counts as an internal library. We are freezing those now
(well, last Friday was the deadline), so +1 on this one.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Openstack] No one replying on tempest issue?Please share your experience

2014-09-23 Thread Denis Makogon
On Tue, Sep 23, 2014 at 12:40 PM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

 Hi,
 I am able to do all volume operations through dashboard and cli commands.
 But when i am running tempest tests,some tests are getting failed.
 For contributing cinder volume driver for my client in cinder,do all
 tempest tests should passed?

 Ex:
 1)
 ./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1or 2 tests
 are getting failed


Just as Jay said, to find out what's going wrong you have to tail the
cinder/nova logs while running the tests.
There are two things you should keep an eye on:
1. Behaviour (to be clear, whether the test scenarios correspond to what
is actually happening).
2. Resource utilization (errors may be caused by something going wrong
there, which brings you back to log analysis).


 But when i am running individual tests in test_volumes_snapshots,all
 tests are getting passed.

 2)
 ./run_tempest.sh
 tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
 This is also getting failed.



 Regards
 Nikesh

 On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 wrote:

 Hi Nikesh,

  -Original Message-
  From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
  Sent: Saturday, September 20, 2014 9:49 PM
  To: openst...@lists.openstack.org; OpenStack Development Mailing List
 (not for usage questions)
  Subject: Re: [Openstack] No one replying on tempest issue?Please share
 your experience
 
  Still I did not get any reply.

 Jay has already replied to this mail, please check the nova-compute
 and cinder-volume log as he said[1].

 [1]:
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html

  Now i ran below command:
  ./run_tempest.sh
 tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot
 
  and i am getting test failed.
 
 
  Actually,after analyzing tempest.log,i found that:
  during creation of a volume from a snapshot, tearDownClass is called and
 it is deleting the snapshot before creation of the volume
  and my test is getting failed.

 I guess the failure you mentioned above is:

 2014-09-20 00:42:12.519 10684 INFO tempest.common.rest_client
 [req-d4dccdcd-bbfa-4ddf-acd8-5a7dcd5b15db None] Request
 (VolumesSnapshotTest:tearDownClass): 404 GET

 http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/snapshots/71d3cad4-440d-4fbb-8758-76da17b6ace6
 0.029s

 and

 2014-09-20 00:42:22.511 10684 INFO tempest.common.rest_client
 [req-520a54ad-7e0a-44ba-95c0-17f4657bc3b0 None] Request
 (VolumesSnapshotTest:tearDownClass): 404 GET

 http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/volumes/7469271a-d2a7-4ee6-b54a-cd0bf767be6b
 0.034s

 right?
 If so, that is not a problem.
 VolumesSnapshotTest creates two volumes, and the tearDownClass checks
 these
 volumes deletions by getting volume status until 404(NotFound) [2].

 [2]:
 https://github.com/openstack/tempest/blob/master/tempest/api/volume/base.py#L128

  I deployed a juno devstack setup for a cinder volume driver.
  I changed cinder.conf file and tempest.conf file for single backend and
 restarted cinder services.
  Now i ran tempest test as below:
  /opt/stack/tempest/run_tempest.sh
 tempest.api.volume.test_volumes_snapshots
 
  I am getting below output:
   Traceback (most recent call last):
File
 /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py, line
 176, in test_volume_from_snapshot
  snapshot = self.create_snapshot(self.volume_origin['id'])
File /opt/stack/tempest/tempest/api/volume/base.py, line 112, in
 create_snapshot
  'available')
File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 126, in wait_for_snapshot_status
  value = self._get_snapshot_status(snapshot_id)
File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 99, in _get_snapshot_status
  snapshot_id=snapshot_id)
  SnapshotBuildErrorException: Snapshot
 6b1eb319-33ef-4357-987a-58eb15549520 failed to build and is in
  ERROR status

 What happens if running the same operation as Tempest by hands on your
 environment like the following ?

 [1] $ cinder create 1
 [2] $ cinder snapshot-create id of the created volume at [1]

 [3] cinder snapshot-show id for [2] (see if snapshot was baked properly)

 [3] $ cinder create --snapshot-id id of the created snapshot at [2] 1
 [4] $ cinder show id of the created volume at [3]

 Please check whether the status of created volume at [3] is available
 or not.

 Thanks
 Ken'ichi Ohmichi






Best regards,

[openstack-dev] [Zaqar] Zaqar's path cleared out

2014-09-23 Thread Flavio Percoco
Greetings,

I recently published a post[0] about Zaqar's path going forward with the
hope to clarify a bit the project's vision and guarantees as a service -
I didn't mention API features.

Since we've been talking about this lately, I thought to share the post
here to ease the discussion and to kinda force you all to read it :P -
for better or for worse. :D

I'm going to shamelessly copy/paste the post here but I'll strip out the
intro since it talks about the last review process and we've had enough of
that.

[0] http://blog.flaper87.com/post/zaqar-path-going-forward/

Here it goes:

[SNIIP]

All the above discussions have been interesting but I'd like to take a
step back and walk you through a perhaps less technical topic but not
less important. It's clear to me that not everyone knows what the
project's vision is. So far, we've made clear what Zaqar's API goals
are, what kind of service Zaqar is and the use-cases it tries to address
but we haven't neither explicitly explained nor documented well-enough
what Zaqar's scalability goals are, what guarantees from a storage
perspective it gives nor how much value the project is putting on things
like interoperability.

Zaqar has quite a set of features that give operators enough flexibility
to achieve different scales and/or adapt it to their know-how and very
specific needs. Something we've - or at least I have - always said about
Zaqar - for better or for worse - is that you can play with its layers
as if they were Lego bricks. I still think this is true and it doesn't
mean Zaqar is trying to address *all* the use cases or making *everyone*
happy. We want to give them flexibility to add functionality for
additional use cases that aren't supported out of the box. I know this
has lots of implications, I'll dig into it a bit more later.

*Zaqar's vision is to provide a cross-cloud interoperable,
fully-reliable messaging service at scale that is both easy and
non-invasive for deployers and users.*

It goes without saying that the service (and team) has strived to
achieve the above since the very beginning and I believe it does that,
modulo bugs/improvements, right now.

Reliability
===

Zaqar aims to be a fully-reliable service, therefore messages should
never be lost under any circumstances except for when the message's
expiration time (ttl) is reached - messages will not be around forever
(unless you explicitly request that). As of now, Zaqar's reliability
guarantee relies on the storage ability to do so and on the service to
be properly configured.

For example, if Zaqar is deployed on top of MongoDB - the current
recommended storage for production - you'd likely do it by configuring a
replica set or even a sharded cluster so that every message is
replicated but if you use a single mongod instance, there's nothing the
service can do to guarantee reliability. Well, there actually is: Zaqar
could force deployers to configure either a replica set or a sharded
cluster and refuse to start if they don't - we will likely do that.


Scalability
===

Zaqar was designed with scale in mind. Not all storage technologies will
be able to perform the same way under massive loads, hence it's really
important to choose a storage backend capable of supporting the expected
user base.

That said, Zaqar also has some built-in scaling capabilities that aim to
make scaling storage technologies easier and push their limits farther
away. Zaqar's pools allow users to scale their storage layer by adding
new storage clusters to it and balancing queues across them.

For example, if you have a zaqar+mongodb deployment and your mongodb
cluster (regardless of whether it is a replica set or a sharded cluster)
reaches its scaling limits, it'd be possible to set up a new mongodb
cluster and add it as a pool in Zaqar. Zaqar will then balance queues
based on the pools' weight across your storage clusters.
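The per-queue, weight-based balancing described above could look roughly like the following sketch. This is illustrative only, not Zaqar's actual pool-catalogue code; the names are made up:

```python
import bisect
import hashlib


def pick_pool(queue_name, pools):
    """Map a queue to one storage pool, biased by pool weight.

    `pools` is a list of (name, weight) pairs.  The choice is derived
    from a hash of the queue name, so the same queue always lands on
    the same pool (until the pool list itself changes).
    """
    cumulative, total = [], 0
    for _, weight in pools:
        total += weight
        cumulative.append(total)
    # Stable pseudo-random point in [0, total) derived from the queue name.
    digest = int(hashlib.sha256(queue_name.encode()).hexdigest(), 16)
    point = digest % total
    index = bisect.bisect_right(cumulative, point)
    return pools[index][0]


# A pool with weight 300 receives roughly 3x the queues of a weight-100 pool.
pools = [("mongo-cluster-1", 100), ("mongo-cluster-2", 300)]
print(pick_pool("my-queue", pools))  # deterministic for a given queue name
```

Because the mapping is per queue, all messages of one queue stay on one pool -- which is exactly the trade-off discussed next.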

Although the above may sound like quite a naive feature, it is not. The
team is aware of the limitations related to pools and of the work left
to address them. Let me walk you through some of these things.

One thing that you may have spotted from the above is that pools work
on a per-queue basis, which means there's no way to split queues across
several storage clusters. This could be an issue for *huge* queues and
it could make it more difficult to keep pools load balanced.
Nonetheless, I still think it makes sense to keep it this way and here's
why.

By balancing on queues and not messages, we're leaving the work of
replicating and balancing messages to the technologies that have been
doing it for years. This fits perfectly with Zaqar's goal of relying as
much as possible on the storage technology without re-inventing the
wheel (nothing bad about the latter, though). Still, I'd like to go a
bit further than just saying the service wants to rely on existing
technologies.

Message (data) distribution is not an easy task. I had the pleasure (?)
to work on the core of 

Re: [openstack-dev] [Nova] Deprecating localfs?

2014-09-23 Thread Daniel P. Berrange
On Tue, Sep 23, 2014 at 09:53:36AM +1000, Michael Still wrote:
 Hi.
 
 I know we've been talking about deprecating nova.virt.disk.vfs.localfs
 for a long time, in favour of wanting people to use libguestfs
 instead. However, I can't immediately find any written documentation
 for when we said we'd do that thing.
 
 Additionally, this came to my attention because Ubuntu 14.04 is
 apparently shipping a libguestfs old enough to cause us to emit the
 falling back to localfs warning, so I think we need Ubuntu to catch
 up before we can do this thing.
 
 So -- how about we remove localfs early in Kilo to give Canonical a
 release to update libguestfs?

Rather than removing localfs right away how about we add an explicit
config parameter to control behaviour

  vfs_impl=auto|guestfs|localfs

where

  auto == try libguestfs, fallback to localfs
  guestfs == try libguestfs, error otherwise
  localfs == try localfs, error otherwise

Currently we do 'auto' behaviour but in Kilo we could make 'guestfs'
be the default, so there is no fallback. That way distros will quickly
see if they have a problem, but still have option to reconfigure to
use localfs if they need to.
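Daniel's proposed three-way option could be sketched like this -- hypothetical code, not actual Nova logic, with the libguestfs availability probe as a stand-in parameter:

```python
def choose_vfs(vfs_impl, guestfs_available):
    """Resolve the configured mode to a concrete backend.

    vfs_impl: 'auto' | 'guestfs' | 'localfs'
    guestfs_available: result of probing for a usable libguestfs
    (a stand-in for the real capability check).
    """
    if vfs_impl == "guestfs":
        # Strict mode: error out rather than silently falling back.
        if not guestfs_available:
            raise RuntimeError("libguestfs required but not usable")
        return "guestfs"
    if vfs_impl == "localfs":
        return "localfs"
    if vfs_impl == "auto":
        # Current behaviour: prefer libguestfs, fall back to localfs.
        return "guestfs" if guestfs_available else "localfs"
    raise ValueError("unknown vfs_impl: %s" % vfs_impl)


print(choose_vfs("auto", guestfs_available=False))  # → localfs
```

Flipping the default from 'auto' to 'guestfs' in Kilo would then turn today's silent fallback into a visible error, while leaving operators the explicit 'localfs' escape hatch.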

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-23 Thread Thomas Spatzier
Excerpts from Steven Hardy's message on 23/09/2014 11:37:42:
snip
 
  On the other hand, there is a big downside to having it (only) in Heat also
  - you're dependent on the operator deciding to provide it.
 
  - You preempt the decision about integration with any higher level services,
 e.g Mistral, Murano, Solum, if you bake in the translator at the
 heat level.
 
  Not sure I understand this one.

 I meant if non-simple TOSCA was in scope, would it make sense to bake the
 translation in at the heat level, when there are aspects of the DSL which
 we will never support (but some higher layer might).

 Given Sahdev's response saying simple-profile is all that is currently in
  scope, it's probably a non-issue, I just wanted to clarify if heat was the
  right place for this translation.

Yes, so definitely so far the scope is TOSCA simple profile, which aims to
be easily mappable to Heat/HOT. Therefore, close integration makes sense
IMO.

Even if the TOSCA community focuses more on features not covered by Heat
(e.g. the plans or workflows which would rather map to Mistral), the
current decision would not preempt such movements. An overall TOSCA
description is basically modular where parts like the declarative topology
could be handed to one layer - this is what we are talking about now - and
other parts (like the flows) could go to a different layer.
So if we get there in the future, this decomposer functionality could be
addressed on-top of a layer we are working on today.

Hope this helps and does not add confusion :-)

Regards,
Thomas


 Steve






Re: [openstack-dev] [Nova] Deprecating localfs?

2014-09-23 Thread Michael Still
I am ok with a staged approach, although I do think it's something we
should start in Kilo. That said, we already emit a warning in Juno if
the installation doesn't have a working libguestfs.

Michael

On Tue, Sep 23, 2014 at 8:09 PM, Daniel P. Berrange berra...@redhat.com wrote:
 On Tue, Sep 23, 2014 at 09:53:36AM +1000, Michael Still wrote:
 Hi.

 I know we've been talking about deprecating nova.virt.disk.vfs.localfs
 for a long time, in favour of wanting people to use libguestfs
 instead. However, I can't immediately find any written documentation
 for when we said we'd do that thing.

 Additionally, this came to my attention because Ubuntu 14.04 is
 apparently shipping a libguestfs old enough to cause us to emit the
 falling back to localfs warning, so I think we need Ubuntu to catch
 up before we can do this thing.

 So -- how about we remove localfs early in Kilo to give Canonical a
 release to update libguestfs?

 Rather than removing localfs right away how about we add an explicit
 config parameter to control behaviour

   vfs_impl=auto|guestfs|localfs

 where

   auto == try libguestfs, fallback to localfs
   guestfs == try libguestfs, error otherwise
   localfs == try localfs, error otherwise

 Currently we do 'auto' behaviour but in Kilo we could make 'guestfs'
 be the default, so there is no fallback. That way distros will quickly
 see if they have a problem, but still have option to reconfigure to
 use localfs if they need to.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|




-- 
Rackspace Australia



Re: [openstack-dev] [Nova] Deprecating localfs?

2014-09-23 Thread Roman Bogorodskiy
  Michael Still wrote:

 Hi.
 
 I know we've been talking about deprecating nova.virt.disk.vfs.localfs
 for a long time, in favour of wanting people to use libguestfs
 instead. However, I can't immediately find any written documentation
 for when we said we'd do that thing.
 
 Additionally, this came to my attention because Ubuntu 14.04 is
 apparently shipping a libguestfs old enough to cause us to emit the
 falling back to localfs warning, so I think we need Ubuntu to catch
 up before we can do this thing.
 
 So -- how about we remove localfs early in Kilo to give Canonical a
 release to update libguestfs?
 
 Thoughts appreciated,
 Michael

If at some point we'd start moving into getting FreeBSD supported as a
host OS for OpenStack, then it would make sense to keep localfs for that
configuration.

libguestfs doesn't work on FreeBSD yet. On the other hand, localfs
code in Nova doesn't look like it'd be hard to port.

Roman Bogorodskiy



Re: [openstack-dev] [requirements][horizon] Dependency freeze exceptions: django-openstack-auth

2014-09-23 Thread Akihiro Motoki
Sorry for the delay. django-openstack-auth 1.1.7 was released last Friday and
it seems David forgot to update global requirements.

Now let me clarify how to handle this kind of bug, which requires a
fix in a library after the release.
What is the appropriate way to fix a bug that depends on libraries
(including internal and external libraries) after the release?
Can we mark the bug as Fixed even if it is only fixed when operators
run a newer released version of the libraries?
In this case the bug itself is not fixed without upgrading the libraries,
but we cannot update the requirements after the release in general.
Another way is to copy the logic from the library as a workaround, but
that is a bit ugly.
Of course we need to keep compatibility with the declared version in
requirements.txt.
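For illustration, the fix on the requirements side would just be a version bump along these lines (the package name and specifier syntax are assumptions; the real change lives in the global-requirements review linked below):

```
# requirements.txt fragment (illustrative)
django_openstack_auth>=1.1.7
```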

Thanks,
Akihiro


On Tue, Sep 23, 2014 at 6:47 PM, Thierry Carrez thie...@openstack.org wrote:
 Akihiro Motoki wrote:
 I would like to request dependency freeze exceptions for 
 django-openstack-auth.
 https://review.openstack.org/#/c/123101/

 This arguably counts as an internal library. We are freezing those now
 (well, last Friday was the deadline), so +1 on this one.

 --
 Thierry Carrez (ttx)




-- 
Akihiro Motoki amot...@gmail.com



Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Christopher Yeoh
On Mon, 22 Sep 2014 09:47:50 -0500
Anne Gentle a...@openstack.org wrote:
 
  (1) Input/Output attribute names
  (1.1) These names should be snake_case.
   eg: imageRef -> image_ref, flavorRef -> flavor_ref, hostId -> host_id
  (1.2) These names should contain extension names if they
  are provided in case of some extension loading.
   eg: security_groups -> os-security-groups:security_groups
       config_drive -> os-config-drive:config_drive
 
 
 Do you mean that the os- prefix should be dropped? Or that it should
 be maintained and added as needed?


Originally (I think) the os- prefix was added to avoid potential
name clashes between extensions. Presumably out of tree extensions
too as in-tree ones would be picked up pretty quickly. For a while now
I've been thinking that perhaps we should just drop the os- prefix.
We're trying to discourage optional extensions anyway and I suspect that
the pain of maintaining consistency with os- prefixes is not worth the
benefit.

If someone wants to maintain an out-of-tree plugin, they can bear the
burden of avoiding name clashes themselves. This would make it much
easier for both us (nearly no code changes required) and users (no app
changes) when we move some functionality into (or out of) core.

  (1.3) Extension names should consist of hyphens and lowercase chars.
   eg: OS-EXT-AZ:availability_zone ->
       os-extended-availability-zone:availability_zone
       OS-EXT-STS:task_state -> os-extended-status:task_state
 
 
 Yes, I don't like the shoutyness of the ALL CAPS.

+1! No to ALL CAPS and contractions or disemvoweling of words in order
to save a few bytes.
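For concreteness, the renames under discussion amount to a simple mapping applied to request/response attributes; the pairs come from the proposal above, but the helper itself is hypothetical, not Nova code:

```python
# Hypothetical helper: the proposed v2 -> microversioned attribute renames
# from the thread, expressed as a lookup applied to a response dict.
RENAMES = {
    "imageRef": "image_ref",
    "flavorRef": "flavor_ref",
    "hostId": "host_id",
    "OS-EXT-AZ:availability_zone":
        "os-extended-availability-zone:availability_zone",
    "OS-EXT-STS:task_state": "os-extended-status:task_state",
}

def normalize(payload):
    """Rewrite a response dict using the proposed consistent names."""
    return {RENAMES.get(key, key): value for key, value in payload.items()}
```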

  (1.4) Extension names should contain the prefix os- if the
  extension is not core.
   eg: rxtx_factor -> os-flavor-rxtx:rxtx_factor
       os-flavor-access:is_public -> flavor-access:is_public
       (flavor-access extension became core)
 
 
 Do we have a list of core yet?

We do sort of have a candidate list hard coded (it's a pretty small
list):

API_V3_CORE_EXTENSIONS = set(['consoles',
  'extensions',
  'flavor-access',
  'flavor-extra-specs',
  'flavor-manage',
  'flavors',
  'ips',
  'os-keypairs',
  'server-metadata',
  'servers',
  'versions'])

 
  (3) Status code
  (3.1) Return 201(Created) if a resource creation/updating finishes
  before returning a response.
   eg: create a keypair API: 200 -> 201
       create an agent API: 200 -> 201
       create an aggregate API: 200 -> 201
 
 
 Do you mean a 200 becomes a 201? That's a huge doc impact and SDK
 impact, is it worthwhile? If we do this change, the sooner the
 better, right?

Ideally I think we'd want to do it when breaking a specific part of the
API anyway (for, say, some new feature). Otherwise it's something I think
we should bring up case by case with users and operators: a trade-off
between returning misleading status codes and requiring
application-side changes (potentially; some may just look for 2xx
anyway).
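The "just look for 2xx" behaviour mentioned above is the defensive-client pattern that makes a 200-to-201 change non-breaking; roughly:

```python
def is_success(status_code):
    """Treat any 2xx as success, so a 200 -> 201 change is non-breaking."""
    return 200 <= status_code < 300

# A client that instead asserts status_code == 200 would break the day
# the keypair/agent/aggregate create calls start returning 201.
```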

 
 
 The TC had an action item a while back (a few months) to start an API
 style guide. This seems like a good start. Once the questions are
 discussed I'll get a draft going on the wiki.

So back during the Juno design summit at the cross project API
consistency session we talked about putting together a wiki page with
guidelines for all OpenStack projects. But I think everyone got busy so
not much at all happened. I've had a bit of time recently and tried to
pull together info from the session as well as some other sources and
the start of it is here:

https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

So perhaps we could merge what is above into this wiki?
It is however still rather Nova specific and we need input from
other projects (I'll send out another email specifically about this). 

Chris



Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Christopher Yeoh
On Mon, 22 Sep 2014 15:27:47 -0500
Brant Knudson b...@acm.org wrote:

 
 
 Did you consider JSON Home[1] for this? For Juno we've got JSON Home
 support in Keystone for Identity v3 (Zaqar was using it already). We
 weren't planning to use it for microversioning since we weren't
 planning on doing microversioning, but I think JSON Home could be
 used for this purpose.
 
 Using JSON Home, you'd have relationships that include the version,
 then the client can check the JSON Home document to see if the server
 has support for the relationship the client wants to use.
 

JSON Home is on our list of things to do for the API (but
the priority is lower than microversions). We still need
microversions anyway because we want to be able to support multiple
versions of the API through the same resources (ie no URL changes), to
make life as easy as possible for users so they are not forced to
upgrade their app whenever we make backwards incompatible changes. I
don't see us supporting every version forever, but it gives them some
time to upgrade.
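Serving multiple versions through the same resources might look roughly like the sketch below; the header name, version numbers, and field names are all assumptions for illustration (the thread predates any agreed format):

```python
# Sketch only: one URL, behaviour keyed off a negotiated microversion.
# The X-API-Version header name and the version list are hypothetical.

SUPPORTED = ["2.1", "2.2", "2.3"]

def handle_request(headers):
    requested = headers.get("X-API-Version", SUPPORTED[0])
    if requested not in SUPPORTED:
        return 406, "version %s not supported" % requested
    # Naive string comparison is acceptable here only because all the
    # minors are single-digit; a real implementation compares parsed tuples.
    if requested >= "2.3":
        return 200, {"server": {"host_id": "abc"}}   # renamed field
    return 200, {"server": {"hostId": "abc"}}        # legacy field
```

Omitting the header yields the oldest (v2-compatible) behaviour, so existing apps keep working until they opt in to a newer version.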

Regards,

Chris
 



Re: [openstack-dev] [all] REST API style guide (Re: [Nova] Some ideas for micro-version implementation)

2014-09-23 Thread Christopher Yeoh
On Tue, 23 Sep 2014 09:00:26 +0900
Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote:
 So how about just using HTTP 200(OK) only for status codes?
 That would give up providing accurate internal status to clients, but
 backwards incompatibilities would never happen.


No I think that we should where possible return the most accurate
status code. A 202 versus 200 is an important distinction for a user of
the API (eg do they need to poll for request completion?). How fast
we can get to accurate status codes through the API is a different
matter though. 
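The polling implied by a 202 looks roughly like this on the client side (a sketch; `resource_get` stands in for whatever GET call reports the operation's state):

```python
import time

def wait_for(resource_get, timeout=5.0, interval=0.01):
    """Poll after a 202 Accepted until the resource reports completion."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status, body = resource_get()
        if body.get("state") == "done":
            return body
        time.sleep(interval)
    raise TimeoutError("resource never finished")

# With a plain 200 OK the operation already completed and no polling
# loop is needed; that is exactly the distinction a 202 conveys.
```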

 
 and I have one more idea for making API consistency of whole
 OpenStack projects. That is each rule of the style guide is
 implemented in Tempest. Tempest has its own REST clients for many
 projects and we can customize them for improving qualities. After
 defining the REST API style guide, we can add each
 rule to Tempest's base client class and apply it for all REST APIs
 which are tested
 by Tempest. We can keep consistent API for the existing projects and
 apply the style guide to new projects also by this framework.

That's an interesting idea! However, we would have a long exception list
for quite a while, I think, as it was made pretty clear to us that we can't
make a large number of backwards incompatible API changes over a short
period of time (like the v2-to-v3 transition was).

Regards,

Chris



Re: [openstack-dev] [Nova] Deprecating localfs?

2014-09-23 Thread Daniel P. Berrange
On Tue, Sep 23, 2014 at 02:27:52PM +0400, Roman Bogorodskiy wrote:
   Michael Still wrote:
 
  Hi.
  
  I know we've been talking about deprecating nova.virt.disk.vfs.localfs
  for a long time, in favour of wanting people to use libguestfs
  instead. However, I can't immediately find any written documentation
  for when we said we'd do that thing.
  
  Additionally, this came to my attention because Ubuntu 14.04 is
  apparently shipping a libguestfs old enough to cause us to emit the
  falling back to localfs warning, so I think we need Ubuntu to catch
  up before we can do this thing.
  
  So -- how about we remove localfs early in Kilo to give Canonical a
  release to update libguestfs?
  
  Thoughts appreciated,
  Michael
 
 If at some point we'd start moving into getting FreeBSD supported as a
 host OS for OpenStack, then it would make sense to keep localfs for that
 configuration.
 
 libguestfs doesn't work on FreeBSD yet. On the other hand, localfs
 code in Nova doesn't look like it'd be hard to port.

Yep, that's a good point and in fact applies to Linux too when considering
the non-KVM/QEMU drivers libvirt supports. eg if your host does not have
virtualization and you're using LXC for container virt, then we need to 
have localfs still be present. Likewise if running Xen.

So we definitely cannot delete or even deprecate it unconditionally. We
simply want to make sure localfs isn't used when Nova is configured to
run QEMU/KVM via libvirt.

So if we take the config option approach I suggested, then we'd set a
default value for the vfs_impl parameter according to which libvirt
driver you have enabled.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Glance] PTL Non-Candidacy

2014-09-23 Thread stuart . mclaren

Hi Mark,

Many thanks for your leadership, and keeping glance so enjoyable to work on 
over the last few cycles.

-Stuart


From: Mark Washenberger mark.washenber...@markwash.net
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Glance] PTL Non-Candidacy
Message-ID:
CAJyP2C_zR5oK-=dno1e5wv-qsimfqplckzfbl+rhg6-7fhd...@mail.gmail.com
Content-Type: text/plain; charset=utf-8

Greetings,

I will not be running for PTL for Glance for the Kilo release.

I want to thank all of the nice folks I've worked with--especially the
attendees and sponsors of the mid-cycle meetups, which I think were a major
success and one of the highlights of the project for me.

Cheers,
markwash




[openstack-dev] [Designate] DNS Services PTL Candidacy

2014-09-23 Thread Mac Innes, Kiall
Hi all,


I'd like to announce my candidacy for the DNS Services Program PTL position.


I've been involved in Designate since day one, as both the original author and 
as pseudo-PTL pre-incubation. Designate and the DNS Services program have come 
a long way since the project was first introduced to StackForge under my lead, 
and while we're far from done, I feel I'm more than capable and best positioned 
to bring the project through to fruition.

Additionally, I manage the team at HP running the largest known public and 
production deployment of Designate for HP Cloud, giving me the operational 
experience necessary to guide the project towards meeting real world 
operational requirements.

Thanks - Kiall



Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Sean Dague
On 09/23/2014 02:56 AM, Thierry Carrez wrote:
 Hi everyone,
 
 Rather than waste a design summit session about it, I propose we review
 the proposed Kilo release cycle schedule on the mailing-list. See
 attached PDF for a picture of the proposal.
 
 The hard date the proposal is built around is the next (L) Design
 Summit week: May 18-22. That pushes back far into May (the farthest
 ever). However the M design summit being very likely to come back in
 October, the release date was set to April 23 to smooth the difference.
 
 That makes 3 full weeks between release and design summit (like in
 Hong-Kong), allowing for an official off-week on the week of May 4-8.

Honestly, the off-week really isn't. If we're going to talk about
throwing a stall week into the dev cycle for spacing, I'd honestly
rather just push the release back, and be clear to folks that the summer
cycle is going to be shorter. The off-week I think just causes us to
lose momentum.

 The rest of the proposal is mostly a no-brainer. Like always, we allow a
 longer time for milestone 2, to take into account the end-of-year
 holidays. That gives:
 
 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 11

The one thing that felt weird about the cadence of the 1st milestone in
Havana last year was that it was super start / stop. Dec 11 means that
we end up with 2 weeks until Christmas, so many people are starting to
wind down. My suggestion would be to push K1 to Dec 18, because I think
you won't get much K2 content landed that week anyway.

For US people at least this would change the Dec cadence from:

* Holiday Week - Nov 28 is thanksgiving
* Dev Week
* Milestone Week
* Dev Week
* sort of Holiday Week
* Holiday Week

To:

* Holiday Week
* Dev Week
* Dev Week
* Milestone Week
* sort of Holiday Week
* Holiday Week

Which I feel is going to get more done. If we take back the off week, we
could just shift everything back a week, which makes K2 less split by
Christmas.

 Kilo-2 milestone: Jan 29
 Kilo-3 milestone, feature freeze: March 12
 2015.1 (Kilo) release: Apr 23
 L Design Summit: May 18-22
 
 All milestones avoid known US holiday weekends. Let me know if I missed
 something or if you see major issues with this proposal.
 
 Regards,
 
 
 
 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Daniel P. Berrange
On Tue, Sep 23, 2014 at 07:42:30AM -0400, Sean Dague wrote:
 On 09/23/2014 02:56 AM, Thierry Carrez wrote:
  Hi everyone,
  
  Rather than waste a design summit session about it, I propose we review
  the proposed Kilo release cycle schedule on the mailing-list. See
  attached PDF for a picture of the proposal.
  
  The hard date the proposal is built around is the next (L) Design
  Summit week: May 18-22. That pushes back far into May (the farthest
  ever). However the M design summit being very likely to come back in
  October, the release date was set to April 23 to smooth the difference.
  
  That makes 3 full weeks between release and design summit (like in
  Hong-Kong), allowing for an official off-week on the week of May 4-8.
 
 Honestly, the off-week really isn't. If we're going to talk about
 throwing a stall week into the dev cycle for spacing, I'd honestly
 rather just push the release back, and be clear to folks that the summer
 cycle is going to be shorter. The off-week I think just causes us to
 lose momentum.

I didn't really notice anyone stop working in the last off-week we
had. Changes kept being submitted at normal rate, people kept talking
on IRC, etc, etc. An off-week is a nice idea in theory but it didn't
seem to have much effect in practice AFAICT.

  The rest of the proposal is mostly a no-brainer. Like always, we allow a
  longer time for milestone 2, to take into account the end-of-year
  holidays. That gives:
  
  Kilo Design Summit: Nov 4-7
  Kilo-1 milestone: Dec 11
 
 The one thing that felt weird about the cadence of the 1st milestone in
 Havana last year was that it was super start / stop. Dec 11 means that
 we end up with 2 weeks until christmas, so many people are starting to
 wind down. My suggestion would be to push K1 to Dec 18, because I think
 you won't get much K2 content landed that week anyway.
 
 For US people at least this would change the Dec cadence from:
 
 * Holiday Week - Nov 28 is thanksgiving
 * Dev Week
 * Milestone Week
 * Dev Week
 * sort of Holiday Week
 * Holiday Week
 
 To:
 
 * Holiday Week
 * Dev Week
 * Dev Week
 * Milestone Week
 * sort of Holiday Week
 * Holiday Week
 
 Which I feel is going to get more done. If we take back the off week, we
 could just shift everything back a week, which makes K2 less split by
 christmas.
 
  Kilo-2 milestone: Jan 29
  Kilo-3 milestone, feature freeze: March 12
  2015.1 (Kilo) release: Apr 23
  L Design Summit: May 18-22

I find it kind of weird that we have a 1 month gap between Kilo being
released and the L design summit taking place. The design summits are
supposed to be where we talk about and agree the big themes of the
release, but we've already had 4+ weeks of working on the release by
the time the summit takes place ?!?! Not to mention that we branched
even before that, so work actually takes place even further in advance
of the design summit. I feel this is a contributing factor to the first
milestone being comparatively less productive than the later milestones,
and causes big work to get pushed out later in the cycle. We really want
to try and encourage work to happen earlier in the cycle to avoid the
big crunch in m3. Is there a reason for this big gap between release
and summit, or can we let Kilo go 2-3 weeks longer, so we have better
alignment between the design summit happening and the milestone 1 dev cycle start?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Could we please use 1.4.1 for oslo.messaging *now*? (was: Oslo final releases ready)

2014-09-23 Thread Davanum Srinivas
Zigo,

Ouch! Can you please open a bug in oslo.messaging? I'll mark it critical.

thanks,
dims

On Tue, Sep 23, 2014 at 4:35 AM, Thomas Goirand z...@debian.org wrote:
 On 09/18/2014 10:04 PM, Doug Hellmann wrote:
 All of the final releases for the Oslo libraries for the Juno cycle are 
 available on PyPI. I’m working on a couple of patches to the global 
 requirements list to update the baseline in the applications. In all cases, 
 the final release is a second tag on a previously released version.

 - oslo.config - 1.4.0 (same as 1.4.0.0a5)
 - oslo.db - 1.0.0 (same as 0.5.0)
 - oslo.i18n - 1.0.0 (same as 0.4.0)
 - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
 - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
 - oslo.serialization - 1.0.0 (same as 0.3.0)
 - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
 - oslotest - 1.1.0 (same as 1.1.0.0a2)
 - oslo.utils - 1.0.0 (same as 0.3.0)
 - cliff - 1.7.0 (previously tagged, so not a new release)
 - stevedore - 1.0.0 (same as 1.0.0.0a2)

 Congratulations and *Thank You* to the Oslo team for doing an amazing job 
 with graduations this cycle!

 Doug

 Doug,

 Here in Debian, I have a *huge* mess with versioning of oslo.messaging.

 tl;dr: Because of that version number mess, please add a tag 1.4.1 to
 oslo.messaging now and use it everywhere instead of 1.4.0.

 Longer version:

 What happened is that Chuck released a wrong version of Keystone (eg:
 the trunk rather than the stable branch). Therefore, I uploaded a
 1.4.0 beta version of oslo.messaging in Debian Unstable/Jessie,
 because I thought the Icehouse version of Keystone needed it. (Sid /
 Jessie is supposed to keep Icehouse stuff only.)

 That would have been about fine, if only I hadn't upgraded
 oslo.messaging to the latest version in Sid, because I didn't want to keep
 a beta release in Jessie. However, this latest version depends on
 oslo.config 1.4.0.0~a5, and then probably even more.

 So I reverted the 1.4.0.0 upload in Debian Sid, by uploading version
 1.4.0.0+really+1.3.1, which, as its name may suggest, really is a 1.3.1
 version (I did that to avoid having an epoch and needing to re-upload
 updates of all reverse dependencies of oslo.messaging). That's fine,
 we're covered for Sid/Jessie.

 But then, the Debian Experimental version of oslo.messaging is lower
 than the one in Sid/Jessie, so I have breakage there.

 If we declare a new 1.4.1, and have this fixed in our
 global-requirements.txt, then everything goes back in order for me and I
 get back on my feet. Otherwise, I'll have to deal with this, and make
 fake version numbers which will not match anything real released by
 OpenStack, which may lead to even more mistakes.

 So, could you please at least:
 - add a git tag 1.4.1 to oslo.messaging right now, matching 1.4.0

 This will make sure that nobody will use 1.4.1 again, and that I'm fine
 using this version number in Debian Experimental, which will be higher
 than the one in Sid.

 And then, optionally, it would help me if you could (but I can live
 without it):
 - Use 1.4.1 for oslo.messaging in global-requirements.txt
 - Have every project that needs 1.4.0 bump to 1.4.1 as well

 This would be a lot less work than for me to declare an epoch in the
 oslo.messaging package and fix all reverse dependencies. The affected
 packages for Juno for me are:
 - ceilometer
 - cinder
 - designate
 - glance
 - heat
 - ironic
 - keystone
 - neutron
 - nova
 - oslo-config
 - oslo.rootwrap
 - oslo.i18n
 - python-pycadf

 I'd have to upload updates for all of them even if we use 1.4.1 instead
 of using an epoch (eg: 1:1.4.0), but that's still much better for me to
 use 1.4.1 than an epoch. Epochs are ugly (because not visible in file
 names), confusing (it's easy to forget them), and non-reversible, so
 I'd like to avoid one if possible.

 I'm sorry for the mess and added work.
 Cheers,

 Thomas Goirand (zigo)





-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-23 Thread Geoff O'Callaghan
On 23/09/2014 12:58 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Geoff O'Callaghan's message of 2014-09-22 17:30:47 -0700:
  On 23/09/2014 1:59 AM, Clint Byrum cl...@fewbar.com wrote:
  
   Geoff, do you expect all of our users to write all of their messaging
   code in Python?
  
   oslo.messaging is a _python_ library.
  
   Zaqar is a service with a REST API -- accessible to any application.
 
  No I do not. I am suggesting that a well designed, scalable and robust
  messaging layer can meet the requirements of both, as well as a number of
  other OpenStack services. How the messaging layer is consumed isn't the
  issue.
 
  Below is what I originally posted.
 

It seems to my casual view that we could have one and scale that and use it
for SQS style messages, internal messaging (which could include logging),
all managed by message schemas and QoS. This would give a very robust and
flexible system for endpoints to consume.

Is there a plan to consolidate?
 

 Sorry for the snark George. I was very confused by the text above, and
 I still am. I am confused because consolidation requires commonalities,
 of which to my mind, there are almost none other than the relationship
 to the very abstract term messaging.

Hahaha.. now you're calling me george ;)   Don't worry dude, I'm only
joking, and I even liked Sharknado. Anyway, I was hoping Zaqar had a
greater scope than it appears to have. I'll watch the progress.




[openstack-dev] [Neutron]Dynamically load service provider

2014-09-23 Thread Germy Lure
Hi stackers,

I have an idea about service provider framework. Anyone interested in this
topic can give me some suggestions.

My idea is that providers report their service capabilities dynamically,
rather than having them statically configured in neutron.conf. See details at
the link below.
https://docs.google.com/presentation/d/1_uNF0JEDyoFor8xj-MaaacPL334hiWJWB7NzfRrcVJg/edit?usp=sharing

Everyone can comment on this doc.

BR,
Germy


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Alex Xu

On 2014-09-23 17:12, Christopher Yeoh wrote:

On Mon, 22 Sep 2014 09:29:26 +
Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:

Before discussing how to implement, I'd like to consider what we
should implement. IIUC, the purpose of v3 API is to make consistent
API with the backwards incompatible changes. Through huge discussion
in Juno cycle, we knew that backwards incompatible changes of REST
API would be huge pain against clients and we should avoid such
changes as possible. If new APIs which are consistent in Nova API
only are inconsistent for whole OpenStack projects, maybe we need to
change them again for whole OpenStack consistency.

So I think there are three different aspects to microversions which we
can consider quite separately:

- The format of the client header and what the version number means.
   Eg is the version number of the format X.Y.Z, what do we increment
   when we make a bug fix, what do we increment when we make a backwards
   compatible change and what do we increment when we make backwards
   incompatible change.

   Also how does a client request experimental APIs (I believe we have
   consensus that we really need this to avoid backwards incompatible
   changes as much as possible as it allows more testing before
   guaranteeing backwards compatibility)

   I believe that we can consider this part separately from the next two
   issues.

- The implementation on the nova api side. Eg how do we cleanly handle
   supporting multiple versions of the api based on the client header
   (or lack of it which will indicate v2 compatibility. I'll respond
   directly on Alex's original post

- What we are going to use the microversions API feature to do. I think
   they fall under a few broad categories:

   - Backwards compatible changes. We desperately need a mechanism that
 allows us to make backwards compatible changes (eg add another
 parameter to a response) without having to add another dummy
 extension.

   - Significant backwards incompatible changes. The Tasks API and server
 diagnostics API are probably the best examples of this.

   - V3 like backwards incompatible changes (consistency fixes).

I think getting consensus over backwards compatible changes will be
straightforward. However, given the previous v2/v3 discussions, I don't
think we will be able to get consensus over doing all or most of the
consistency-type fixes, even using microversions, in the short term.
Because with microversions you get all the changes applied before the
version that you choose, from a client application's point of view it's
just as much work as the V2 to V3 API transition.

I don't think that means we need to put all of these consistency
changes off forever though. We need to make backwards incompatible
changes in order to implement the Tasks API  and new server
diagnostics api the way we want to. The Tasks API will eventually cover
quite a few interfaces and while say breaking backwards compatibility
with the create server api, we can also fix consistency issues in that
api at the same time. Clients will need to make changes to their app
anyway if they want to take advantage of the new features (or they can
just continue to use the old non-tasks enabled API).

So as we slowly make backwards incompatible changes to the API for
other reasons we can also fix up other issues. Other consistency fixes
we can propose on a case by case basis and the user community can have
input as to whether the cost (app rework) is worth it without getting a
new feature at the same time.


Agree, consistency fixes should depend on whether the cost is worth 
it or not; maybe we can't fix some inconsistency issues.
And we definitely need micro-versions to add the Tasks API and the new 
server diagnostics API, and also to fix some bugs,
like https://bugs.launchpad.net/nova/+bug/1320754 and 
https://bugs.launchpad.net/nova/+bug/1333494.




But I think it's clear that we *need* the microversions mechanism, so we
don't need to decide beforehand exactly what we're going to use it for
first. I think it's more important that we get a nova-spec
approved for the first two parts: what it looks like from the
client point of view, and how we're going to implement it.

Regards,

Chris
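The "format of the client header" bullet above ultimately needs a well-defined ordering for version numbers; a minimal sketch under the assumption of an X.Y scheme (none had been agreed at this point):

```python
def parse_version(text):
    """Parse 'X.Y' into a comparable tuple so '2.10' sorts after '2.9'."""
    major, minor = text.split(".")
    return (int(major), int(minor))

def in_range(requested, min_ver, max_ver):
    """True if a requested microversion falls inside a handler's range."""
    return (parse_version(min_ver)
            <= parse_version(requested)
            <= parse_version(max_ver))
```

Tuple comparison avoids the classic trap of comparing version strings lexically, where "2.10" would wrongly sort before "2.9".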


To avoid such a situation, I think we need to define what a consistent
REST API across projects looks like. According to Alex's blog, the
topics might be

  - Input/Output attribute names
  - Resource names
  - Status code

The following are hints for making consistent APIs from the Nova v3 API
experience; I'd like to know whether they are the best for API
consistency.

(1) Input/Output attribute names
(1.1) These names should be snake_case.
   eg: imageRef -> image_ref, flavorRef -> flavor_ref, hostId -> host_id
(1.2) These names should contain extension names if they are
provided in case of some extension loading.
   eg: security_groups -> os-security-groups:security_groups
       config_drive -> os-config-drive:config_drive
(1.3) Extension names 

Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-23 Thread Flavio Percoco
On 09/23/2014 10:58 AM, Gordon Sim wrote:
 On 09/22/2014 05:58 PM, Zane Bitter wrote:
 On 22/09/14 10:11, Gordon Sim wrote:
 As I understand it, pools don't help scaling a given queue since all the
 messages for that queue must be in the same pool. At present, traffic
 through different Zaqar queues forms essentially entirely orthogonal
 streams. Pooling can help scale the number of such orthogonal streams,
 but to be honest, that's the easier part of the problem.

 But I think it's also the important part of the problem. When I talk
 about scaling, I mean 1 million clients sending 10 messages per second
 each, not 10 clients sending 1 million messages per second each.
 
 I wasn't really talking about high throughput per producer (which I
 agree is not going to be a good fit), but about e.g. a large number of
 subscribers for the same set of messages, e.g. publishing one message
 per second to 10,000 subscribers.
 
 Even at much smaller scale, expanding from 10 subscribers to say 100
 seems relatively modest but the subscriber related load would increase
 by a factor of 10. I think handling these sorts of changes is also an
 important part of the problem (though perhaps not a part that Zaqar is
 focused on).
 
 When a user gets to the point that individual queues have massive
 throughput, it's unlikely that a one-size-fits-all cloud offering like
 Zaqar or SQS is _ever_ going to meet their needs. Those users will want
 to spin up and configure their own messaging systems on Nova servers,
 and at that kind of size they'll be able to afford to. (In fact, they
 may not be able to afford _not_ to, assuming per-message-based pricing.)
 
 [...]
 If scaling the number of communicants on a given communication channel
 is a goal however, then strict ordering may hamper that. If it does, it
 seems to me that this is not just a policy tweak on the underlying
 datastore to choose the desired balance between ordering and scale, but
 a more fundamental question on the internal structure of the queue
 implementation built on top of the datastore.

 I agree with your analysis, but I don't think this should be a goal.
 
 I think it's worth clarifying that alongside the goals since scaling can
 mean different things to different people. The implication then is that
 there is some limit in the number of producers and/or consumers on a
 queue beyond which the service won't scale and applications need to
 design around that.

Agreed. The above is not part of Zaqar's goals. That is to say that each
store knows best how to distribute reads and writes itself. Nonetheless,
drivers can be very smart about this and be implemented in ways that get
the most out of the backend.


 Note that the user can still implement this themselves using
 application-level sharding - if you know that in-order delivery is not
 important to you, then randomly assign clients to a queue and then poll
 all of the queues in the round-robin. This yields _exactly_ the same
 semantics as SQS.
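The application-level sharding described above can be sketched in a few
lines. The queue objects here are plain lists standing in for real
queues; this illustrates the semantics only and is not the Zaqar client
API.

```python
import itertools
import random

class ShardedQueue:
    """Application-level sharding over ordered queues.

    Producers post to a randomly chosen shard; consumers poll all shards
    round-robin, so cross-shard ordering is not preserved -- approximating
    SQS-style semantics on top of FIFO queues.
    """

    def __init__(self, shards):
        self.shards = shards                # list of queue stand-ins (lists)
        self._rr = itertools.cycle(shards)  # round-robin polling order

    def post(self, message):
        # Each message goes to a random shard.
        random.choice(self.shards).append(message)

    def poll(self):
        # Visit each shard at most once per call; return the first message found.
        for _ in range(len(self.shards)):
            shard = next(self._rr)
            if shard:
                return shard.pop(0)
        return None

q = ShardedQueue([[], [], []])
for i in range(5):
    q.post(i)
# All messages come back, but not necessarily in posting order.
assert sorted(q.poll() for _ in range(5)) == [0, 1, 2, 3, 4]
assert q.poll() is None
```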
 
 You can certainly leave the problem of scaling in this dimension to the
 application itself by having them split the traffic into orthogonal
 streams or hooking up orthogonal streams to provide an aggregated stream.
 
 A true distributed queue isn't entirely trivial, but it may well be that
 most applications can get by with a much simpler approximation.
 
 Distributed (pub-sub) topic semantics are easier to implement, but if
 the application is responsible for keeping the partitions connected,
 then it also takes on part of the burden for availability and redundancy.
 
 The reverse is true of SQS - if you want FIFO then you have to implement
 re-ordering by sequence number in your application. (I'm not certain,
 but it also sounds very much like this situation is ripe for losing
 messages when your client dies.)
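For the second case, client-side re-ordering by sequence number might
look roughly like the sketch below (names invented for illustration).
Note that anything still sitting in the buffer is lost if the client
dies before delivery, which is the hazard mentioned above.

```python
import heapq

class Reorderer:
    """Client-side re-sequencing for an unordered queue.

    Messages carry a producer-assigned sequence number; the consumer
    buffers out-of-order arrivals and releases them in order.
    """

    def __init__(self):
        self._next = 0
        self._buffer = []   # min-heap of (seq, payload)

    def receive(self, seq, payload):
        # Buffer the message, then drain everything that is now contiguous.
        heapq.heappush(self._buffer, (seq, payload))
        ready = []
        while self._buffer and self._buffer[0][0] == self._next:
            ready.append(heapq.heappop(self._buffer)[1])
            self._next += 1
        return ready

r = Reorderer()
assert r.receive(1, 'b') == []            # out of order: buffered
assert r.receive(0, 'a') == ['a', 'b']    # gap filled: both released
assert r.receive(2, 'c') == ['c']
```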

 So the question is: in which use case do we want to push additional
 complexity into the application? The case where there are truly massive
 volumes of messages flowing to a single point?  Or the case where the
 application wants the messages in order?
 
 I think the first case is more generally about increasing the number of
 communicating parties (publishers or subscribers or both).
 
 For competing consumers ordering isn't usually a concern since you are
 processing in parallel anyway (if it is important you need some notion
 of message grouping within which order is preserved and some stickiness
 between group and consumer).
 
 For multiple non-competing consumers the choice needn't be as simple as
 total ordering or no ordering at all. Many systems quite naturally only
 define partial ordering which can be guaranteed more scalably.
 
 That's not to deny that there are indeed cases where total ordering may
 be required however.
 
 I'd suggest both that the former applications are better able to handle
 that extra complexity and that the latter applications are probably more
 common. So it seems that the Zaqar team made a good decision.
 
 If that was 

Re: [openstack-dev] [Neutron] - what integration with Keystone is allowed?

2014-09-23 Thread Duncan Thomas
On 22 September 2014 21:49, Mark McClain m...@mcclain.xyz wrote:

 Ideally, I think something that provides proper sync support should exist in
 Keystone or a Keystone related project vs multiple implementations in
 Neutron, Cinder or any other multi-tenant service that wants to provide more
 human friendly names for a vendor backend.

In general in cinder I've opposed making unnecessary calls out to
*any* REST API, keystone, glance, etc, in any common code path,
particularly cinder-api code - it makes it very easy for a blip in one
service, like the keystone API, to cause a cascading failure as the worker
threads end up all getting consumed blocking waiting for the external
service, then nova blocks all its api threads talking to cinder,
neutron blocks talking to nova, cinder volume actions block calling
nova, etc. Either you then run into a storm of timed out operations,
or a storm of work once the end of the blockage is removed, neither of
which is desirable.
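One common mitigation, short of avoiding the call altogether, is a hard
deadline on every cross-service request, so a stalled dependency fails
fast instead of pinning worker threads. The sketch below uses urllib for
illustration only; a real service would go through its client library,
and the URL and timeout are invented.

```python
import urllib.request

def fetch_with_deadline(url, timeout=2.0):
    # A bounded call: a blip in the remote service becomes a quick,
    # explicit failure rather than an indefinitely blocked worker thread.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status, resp.read()
    except Exception:
        # Fail fast; let the caller decide whether to retry or degrade.
        return None, b''
```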

I'd rather not have convenience labels on the backend array than
increase the risks of this sort of failure mode.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] PTL Candidacy

2014-09-23 Thread Tristan Cacqueray
confirmed

On 23/09/14 12:58 AM, Mike Perez wrote:
 Hello all,
 
 My name is Mike Perez, and I would like to be your next PTL for the OpenStack
 block storage project Cinder.
 
 I've been involved with the OpenStack community since October 2010. I'm
 a senior developer for Datera, which contributes to Linux Bcache and the
 Linux-IO SCSI Target (LIO) in the Linux kernel (which some Cinder setups use
 underneath for target management). Before that, I was a senior developer
 at DreamHost for seven years, working on their core products and storage in their
 OpenStack public cloud.
 
 Since November 2012 I've been a core developer for Cinder. Besides code
 reviews, my main contributions include creating the v2 API, writing the v2 API
 reference and spec docs and rewriting the v1 api docs. These are contributions
 that I feel were well thought out and complete. This is exactly how I
 would like to see the future of Cinder's additional contributions and would
 like to lead the team in that direction.
 
 Over the years I've advocated for OpenStack [1][2][3][4] and its community to
 bring in more contributors by teaching the basics of Cinder's design, which
 then can be applied to a project a potential contributor is interested in.
 I also contribute to programs such as the Women Who Code event [5][6] to help
 get future OpenStack interns in the Gnome Outreach Program excited to help the
 project. I feel, as a leader, helping to build a healthy diverse community is
 key.
 
 I would like to continue to help the Cinder team with focusing on what the
 bigger picture is: not letting us get lost in long discussions, but coming to
 a consensus sooner and making better progress now. Some of this includes
 better organizing of the blueprints and better triaging of bugs so 
 contributors
 can use tools more effectively [7]. I would also like to guide folks with 
 their
 ideas early on as something that fits with the project or not.
 
 For the Kilo release, a lot of features have a dependency on a state machine.
 I agree with the rest of Cinder contributors, this needs to happen now so
 that we can progress forward. I have a summit session with an approach as
 discussed in the previous Cinder Mid-cycle meet up [8] to drive this important
 change forward. Lastly, rolling upgrades is being picked back up, and I will
 be active in reviews and discussion, helping contributors stay focused on
 bringing this wonderful feature forward. These are the only changes I'm
 mentioning as I'm sure we'll need bandwidth for other necessary features that
 contributors will be bringing in.
 
 I'm looking forward to continuing to grow, and the opportunity to contribute 
 to
 this project in new ways.
 
 Thanks,
 Mike Perez
 
 [1] - http://www.meetup.com/OpenStack-Northwest/events/151114422
 [2] - http://www.meetup.com/meetup-group-NjZdcegA/events/150630962
 [3] - http://www.meetup.com/OpenStack-Seattle/events/159943872
 [4] - http://www.meetup.com/openstack/events/150932582/
 [5] - http://www.meetup.com/Women-Who-Code-SF/events/195850392/
 [6] - http://www.meetup.com/Women-Who-Code-SF/events/195850922/
 [7] - http://status.openstack.org/reviews/
 [8] - https://etherpad.openstack.org/p/cinder-meetup-summer-2014
 
 
 






Re: [openstack-dev] [Ceilometer] PTL Candidacy

2014-09-23 Thread Tristan Cacqueray
confirmed

On 23/09/14 05:10 AM, Eoghan Glynn wrote:
 
 Folks,
 
 I'd like to continue serving as Telemetry PTL for a second cycle.
 
 When I took on the role for Juno, I saw some challenges facing the
 project that would take multi-cycle efforts to resolve, so I'd like to
 have the opportunity to see that move closer to completion.
 
 Over Juno, our focus as a project has necessarily been on addressing
 the TC gap analysis. We've been successful in ensuring that the agreed
 gap coverage tasks were completed. The team made great strides in
 making the sql-alchemy driver a viable option for PoCs and small
 deployments, getting meaningful Tempest & Grenade coverage in place,
 and writing quality user- and operator-oriented documentation. This
 has addressed a portion of our usability debt, but as always we need
 to continue chipping away at that.
 
 In parallel, an arms-length effort was kicked off to look at paying
 down accumulated architectural debt in Ceilometer via a new approach
 to more lightweight timeseries data storage via the Gnocchi project.
 This was approached in such a way as to minimize the disruption to
 the core project.
 
 My vision for Kilo would be to shift our focus a bit more onto such
 longer-terms strategic efforts. Clearly we need to complete the work
 on Gnocchi and figure out the migration and co-existence issues.
 
 In addition, we started a conversation with the Monasca folks at the
 Juno summit on the commonality between the two projects. Over Kilo I
 would like to broaden and deepen the collaboration that was first
 mooted in Atlanta, by figuring out specific incremental steps around
 converging some common functional areas such as alarming. We can also
 learn from the experience of the Monasca project in getting the best
 possible performance out of TSD storage in InfluxDB, or achieving very
 high throughput messaging via Apache Kafka.
 
 There are also cross-project debts facing our community that we need
 to bring some of our focus to IME. In particular, I'm thinking here
 about the move towards taking integration test coverage back out of
 Tempest and into new project-specific functional test suites. Also the
 oft-proposed, but never yet delivered-upon, notion of contractizing
 cross-project interactions mediated by notifications.
 
 Finally, it's worth noting that our entire community has a big
 challenge ahead of it in terms of the proposed move towards a new
 layering structure. If re-elected, I would see myself as an active
 participant in that discussion, ensuring the interests of the project
 are positively represented.
 
 Cheers,
 Eoghan
 
 
 






Re: [openstack-dev] [Heat] Convergence: Backing up template instead of stack

2014-09-23 Thread Anant Patil
On 23-Sep-14 09:42, Clint Byrum wrote:
 Excerpts from Angus Salkeld's message of 2014-09-22 20:15:43 -0700:
 On Tue, Sep 23, 2014 at 1:09 AM, Anant Patil anant.pa...@hp.com wrote:

 Hi,

 One of the steps in the direction of convergence is to enable Heat
 engine to handle concurrent stack operations. The main convergence spec
 talks about it. Resource versioning would be needed to handle concurrent
 stack operations.

 As of now, while updating a stack, a backup stack is created with a new
 ID and only one update runs at a time. If we keep the raw_template
 linked to its previous completed template, i.e. have a backup of the
 template instead of the stack, we avoid having a backup of the stack.

 Since there won't be a backup stack and only one stack_id to be dealt
 with, resources and their versions can be queried for a stack with that
 single ID. The idea is to identify resources for a stack by using stack
 id and version. Please let me know your thoughts.


 Hi Anant,

 This seems more complex than it needs to be.

 I could be wrong, but I thought the aim was to simply update the goal state.
 The backup stack is just the last working stack. So if you update and there
 is already an update you don't need to touch the backup stack.

 Anyone else that was at the meetup want to fill us in?

 
 The backup stack is a device used to collect items to operate on after
 the current action is complete. It is entirely an implementation detail.
 
 Resources that can be updated in place will have their resource record
 superseded, but retain their physical resource ID.
 
 This is one area where the resource plugin API is particularly sticky,
 as resources are allowed to raise the "replace me" exception if in-place
 updates fail. That is OK, though; at that point we will just comply by
 creating a replacement resource as if we never tried the in-place update.
 
 In order to facilitate this, we must expand the resource data model to
 include a version. Replacement resources will be marked as current and
 to-be-removed resources marked for deletion. We can also keep all
 current-1 resources around to facilitate rollback until the stack reaches a
 complete state again. Once that is done, we can remove the backup stack.
 
 

Backup stack is a good way to take care of rollbacks or cleanups after
the stack action is complete. By cleanup I mean the deletion of
resources that are no longer needed after the new update. It works very
well when one engine is processing the stack request and the stacks are
in memory.

As a step towards distributing the stack request processing and making
it fault-tolerant, we need to persist the dependency task graph. The
backup stack can also be persisted along with the new graph, but then
the engine has to traverse both the graphs to proceed with the operation
and later identify the resources to be cleaned-up or rolled back using
the stack id. There would be many resources for the same stack but
different stack ids.

In contrast, when we store the current dependency task graph (from the
latest request) in the DB and version the resources, we can identify the
resources that need to be rolled back or cleaned up after the stack
operation is done by comparing their versions. With versioning of
resources and the template, we can avoid creating a deep stack of backup
stacks. The processing of stack operation can happen from multiple
engines, and IMHO, it is simpler when all the engines just see one stack
and versions of resources, instead of seeing many stacks with many
resources for each stack.
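A rough sketch of the version-comparison idea: after an operation
completes at the current version, anything older is a cleanup (or
rollback) candidate. The data model and field names here are invented
for illustration and differ from Heat's actual schema.

```python
def resources_to_clean(resources, current_version):
    # `resources` maps resource name -> list of (version, physical_id).
    # Return every record older than the current stack version.
    stale = []
    for name, versions in resources.items():
        for version, resource_id in versions:
            if version < current_version:
                stale.append((name, version, resource_id))
    return stale

resources = {
    'server': [(1, 'phys-a'), (2, 'phys-b')],   # replaced in update 2
    'volume': [(2, 'phys-c')],                  # created in update 2
}
assert resources_to_clean(resources, 2) == [('server', 1, 'phys-a')]
```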

- Anant




Re: [openstack-dev] [Designate] DNS Services PTL Candidacy

2014-09-23 Thread Tristan Cacqueray
confirmed

On 23/09/14 07:11 AM, Mac Innes, Kiall wrote:
 Hi all,
 
 
 I'd like to announce my candidacy for the DNS Services Program PTL position.
 
 
 I've been involved in Designate since day one, as both the original author 
 and as pseudo-PTL pre-incubation. Designate and the DNS Services program have 
 come a long way since the project was first introduced to StackForge under my 
 lead, and while we're far from done, I feel I'm more than capable and best 
 positioned to bring the project through to fruition.
 
 Additionally, I manage the team at HP running the largest known public and 
 production deployment of Designate for HP Cloud - Giving me the operational 
 experience necessary to guide the project towards meeting real world 
 operational requirements.
 
 Thanks - Kiall
 
 
 
 
 






Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Jeremy Stanley
On 2014-09-23 12:53:42 +0100 (+0100), Daniel P. Berrange wrote:
 I didn't really notice anyone stop working in the last off-week we
 had. Changes kept being submitted at normal rate, people kept talking
 on IRC, etc, etc. An off-week is a nice idea in theory but it didn't
 seem to have much effect in practice AFAICT.
[...]

The project infrastructure team used the off week to upgrade
Gerrit, because in theory nobody was using it. In practice, there
was a bit more going on that week than we expected (so we just kept
reminding everyone they should be off--mildly hypocritical of us, I
know).
-- 
Jeremy Stanley



Re: [openstack-dev] [Designate] DNS Services PTL Candidacy

2014-09-23 Thread Rich Megginson

On 09/23/2014 05:11 AM, Mac Innes, Kiall wrote:


Hi all,

I'd like to announce my candidacy for the DNS Services Program PTL 
position.


I’ve been involved in Designate since day one, as both the original 
author and as pseudo-PTL pre-incubation. Designate and the DNS 
Services program have come a long way since the project was first 
introduced to StackForge under my lead, and while we’re far from done, 
I feel I’m more than capable and best positioned to bring the project 
through to fruition.


Additionally, I manage the team at HP running the largest known public 
and production deployment of Designate for HP Cloud – Giving me the 
operational experience necessary to guide the project towards meeting 
real world operational requirements.


Thanks – Kiall



+1 - Kiall is very knowledgeable about DNS and has brought Designate a 
long way in a short time.







Re: [openstack-dev] [Glance] PTL Non-Candidacy

2014-09-23 Thread Brian Rosmaita
I am really sorry to see you step down, but congratulations!

cheers,
brian

From: Mark Washenberger [mark.washenber...@markwash.net]
Sent: Monday, September 22, 2014 12:22 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance] PTL Non-Candidacy

Greetings,

I will not be running for PTL for Glance for the Kilo release.

I want to thank all of the nice folks I've worked with--especially the 
attendees and sponsors of the mid-cycle meetups, which I think were a major 
success and one of the highlights of the project for me.

Cheers,
markwash


[openstack-dev] [cinder] PTL Candidacy

2014-09-23 Thread Duncan Thomas
This is an announcement of my standing for PTL of the Cinder project.

I've been an active member of the Cinder core team from day one, and
I've had the pleasure of working closely with John Griffith, the
incumbent PTL, and a large and varied collection of contributors for
that period.

My job is running block storage in HP's public cloud, as well as
working on our private cloud offering. This has given me a strong
emphasis on the day to day operational aspects of running Cinder at
scale, backward compatibility and the challenges of continuous
deployment of head-of-tree code in production.

I think Cinder is a project reaching technical maturity; it has a
strong team behind it, both core and non-core. We work well together,
and I see the main role of the PTL not as making decisions but in
enabling this community to progress as smoothly as possible. I've had
a great deal of success in driving forward the 3rd party CI
requirements, working with the infra team to work out process and
where necessary giving engineers the tools and leverage they need to
overcome roadblocks within their own companies. I feel that some
gentle shepherding in terms of review focus can help us increase our
velocity without disrupting the very successful way of working we
currently have.

The hugely successful mid-cycle meetup set the main goal of the Kilo
cycle as stability and paying down technical debt, and there are a
number of pieces of work started by myself and others that should
produce significant dividends in that area - state machine, cinder
agent, decoupling of drivers and connector types.

I have always encouraged anybody to reach out to me with questions and
concerns, and continue to do so. I look forward to continuing the
great work we've been doing.


Regards

-- 
Duncan Thomas
Cinder Core, and HP Cloud



Re: [openstack-dev] [cinder] PTL Candidacy

2014-09-23 Thread Tristan Cacqueray
confirmed

On 23/09/14 10:19 AM, Duncan Thomas wrote:
 This is an announcement of my standing for PTL of the Cinder project.
 
 I've been an active member of the Cinder core team from day one, and
 I've had the pleasure of working closely with John Griffith, the
 incumbent PTL, and a large and varied collection of contributors for
 that period.
 
 My job is running block storage in HP's public cloud, as well as
 working on our private cloud offering. This has given me a strong
 emphasis on the day to day operational aspects of running Cinder at
 scale, backward compatibility and the challenges of continuous
 deployment of head-of-tree code in production.
 
 I think Cinder is a project reaching technical maturity; it has a
 strong team behind it, both core and non-core. We work well together,
 and I see the main role of the PTL not as making decisions but in
 enabling this community to progress as smoothly as possible. I've had
 a great deal of success in driving forward the 3rd party CI
 requirements, working with the infra team to work out process and
 where necessary giving engineers the tools and leverage they need to
 overcome roadblocks within their own companies. I feel that some
 gentle shepherding in terms of review focus can help us increase our
 velocity without disrupting the very successful way of working we
 currently have.
 
 The hugely successful mid-cycle meetup set the main goal of the Kilo
 cycle as stability and paying down technical debt, and there are a
 number of pieces of work started by myself and others that should
 produce significant dividends in that area - state machine, cinder
 agent, decoupling of drivers and connector types.
 
 I have always encouraged anybody to reach out to me with questions and
 concerns, and continue to do so. I look forward to continuing the
 great work we've been doing.
 
 
 Regards
 






Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-23 Thread Lance Bragstad
On Tue, Sep 23, 2014 at 3:51 AM, Thierry Carrez thie...@openstack.org
wrote:

 Adam Young wrote:
  OpenStack owes you more than most people realize.

 +1

 Dolph did a great job of keeping the fundamental piece that is Keystone
 safe from a release management perspective, by consistently hitting all
 the deadlines, giving time for other projects to safely build on it.


+1

Thank you for all your dedication and hard work! It's easy to contribute
and become a better developer when there is strong project leadership to
learn from.


  Don't you dare pull a Joe Heck and disappear on us now.

 :)

 --
 Thierry Carrez (ttx)



Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Thierry Carrez
Sean Dague wrote:
 On 09/23/2014 02:56 AM, Thierry Carrez wrote:
 Hi everyone,

 Rather than waste a design summit session about it, I propose we review
 the proposed Kilo release cycle schedule on the mailing-list. See
 attached PDF for a picture of the proposal.

 The hard date the proposal is built around is the next (L) Design
 Summit week: May 18-22. That pushes back far into May (the farthest
 ever). However, with the M design summit very likely to come back to
 October, the release date was set to April 23 to smooth the difference.

 That makes 3 full weeks between release and design summit (like in
 Hong-Kong), allowing for an official off-week on the week of May 4-8.
 
 Honestly, the off-week really isn't. If we're going to talk about
 throwing a stall week into the dev cycle for spacing, I'd honestly
 rather just push the release back, and be clear to folks that the summer
 cycle is going to be shorter. The off-week I think just causes us to
 lose momentum.

Sure. I'm not adding it for the beauty of it. See below.

 The rest of the proposal is mostly a no-brainer. Like always, we allow a
 longer time for milestone 2, to take into account the end-of-year
 holidays. That gives:

 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 11
 
 The one thing that felt weird about the cadence of the 1st milestone in
 Havana last year was that it was super start / stop. Dec 11 means that
 we end up with 2 weeks until christmas, so many people are starting to
 wind down. My suggestion would be to push K1 to Dec 18, because I think
 you won't get much K2 content landed that week anyway.
 
 For US people at least this would change the Dec cadence from:
 
 * Holiday Week - Nov 28 is thanksgiving
 * Dev Week
 * Milestone Week
 * Dev Week
 * sort of Holiday Week
 * Holiday Week
 
 To:
 
 * Holiday Week
 * Dev Week
 * Dev Week
 * Milestone Week
 * sort of Holiday Week
 * Holiday Week
 
 Which I feel is going to get more done. If we take back the off week, we
 could just shift everything back a week, which makes K2 less split by
 christmas.

So we /could/ do:

Kilo Design Summit: Nov 4-7
Kilo-1 milestone: Dec 18
Kilo-2 milestone: Feb 5
Kilo-3 milestone, feature freeze: March 19
2015.1 (Kilo) release: Apr 30
L Design Summit: May 18-22

The main issue is that it would make a *very* short L cycle. Our best
bet for the M design summit is the week of October 26, so L release
would be Oct 8 or Oct 15. I placed Kilo release on April 23 on the
original plan to try to limit the impact.

If you think shorter is not an issue, I guess that's a valid option.

-- 
Thierry Carrez (ttx)



[openstack-dev] [cinder] Not seeking another term as PTL

2014-09-23 Thread John Griffith
Hey Everyone,

I've been kinda mixed on this one, but I think it's a good time for me to
not run for Cinder PTL.  I've been filling the role since we started the
idea back at the Folsom Summit, and it's been an absolute pleasure and
honor for me.

I don't plan on going anywhere and will still be involved as I am today,
but hopefully I'll also now have a good opportunity to contribute elsewhere
in OpenStack.  We have a couple of good candidates running for Cinder PTL
as well as a strong team backing the project so I think it's a good time to
let somebody else take the official PTL role for a bit.

Thanks,
John


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-23 Thread Zane Bitter

On 23/09/14 08:58, Flavio Percoco wrote:

I believe the guarantee is still useful and it currently does not
represent an issue for the service nor the user. 2 things could happen
to FIFO in the future:

1. It's made optional and we allow users to opt in on a per-flavor
basis. (I personally don't like this one because it makes
interoperability even harder).


Hmm, I'm not so sure this is such a bad option. I criticised flavours 
earlier in this thread on the assumption that it meant every storage 
back-end would have its own semantics, and those would be surfaced to 
the user in the form of flavours - that does indeed make 
interoperability very hard.


The same issue does not arise for options implemented in Zaqar itself. 
If every back-end supports FIFO semantics but Zaqar has a layer that 
distributes the queue among multiple backends or not, depending on the 
flavour selected by the user, then there would be no impact on 
interoperability as the same semantics would be available regardless of 
the back-end chosen by the operator.



2. It's removed completely (Again, I personally don't like this one
because I don't think we have strong enough cases to require this to
happen).

That said, there's just one thing I think will happen for now: it'll be
kept as-is unless there are strong cases that'd require (1) or (2). All
this should be considered in the discussion of the API v2, whenever that
happens.


I think this is it ;)

cheers,
Zane.



Re: [openstack-dev] Could we please use 1.4.1 for oslo.messaging *now*? (was: Oslo final releases ready)

2014-09-23 Thread Doug Hellmann

On Sep 23, 2014, at 4:35 AM, Thomas Goirand z...@debian.org wrote:

 On 09/18/2014 10:04 PM, Doug Hellmann wrote:
 All of the final releases for the Oslo libraries for the Juno cycle are 
 available on PyPI. I’m working on a couple of patches to the global 
 requirements list to update the baseline in the applications. In all cases, 
 the final release is a second tag on a previously released version.
 
 - oslo.config - 1.4.0 (same as 1.4.0.0a5)
 - oslo.db - 1.0.0 (same as 0.5.0)
 - oslo.i18n - 1.0.0 (same as 0.4.0)
 - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
 - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
 - oslo.serialization - 1.0.0 (same as 0.3.0)
 - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
 - oslotest - 1.1.0 (same as 1.1.0.0a2)
 - oslo.utils - 1.0.0 (same as 0.3.0)
 - cliff - 1.7.0 (previously tagged, so not a new release)
 - stevedore - 1.0.0 (same as 1.0.0.0a2)
 
 Congratulations and *Thank You* to the Oslo team for doing an amazing job 
 with graduations this cycle!
 
 Doug
 
 Doug,
 
 Here in Debian, I have a *huge* mess with versionning with oslo.messaging.
 
 tl;dr: Because of that version number mess, please add a tag 1.4.1 to
 oslo.messaging now and use it everywhere instead of 1.4.0.

I’ve re-tagged 1.4.0 as 1.4.1 to give you a clean thing to package.

We shouldn’t need to update our requirements list in OpenStack yet, because 
1.4.1 == 1.4.0, but you can use 1.4.1 for the Debian requirements.

Doug

 
 Longer version:
 
 What happened is that Chuck released a wrong version of Keystone (eg:
 the trunk rather than the stable branch). Therefore, I uploaded a
 1.4.0 beta version of oslo.messaging in Debian Unstable/Jessie,
 because I thought the Icehouse version of Keystone needed it. (Sid /
 Jessie is supposed to keep Icehouse stuff only.)
 
 That would have been about fine, if only I hadn't upgraded
 oslo.messaging to the latest version in Sid, because I didn't want to keep
 a beta release in Jessie. But that latest version depends on
 oslo.config 1.4.0.0~a5, and probably more.
 
 So I reverted the 1.4.0.0 upload in Debian Sid, by uploading version
 1.4.0.0+really+1.3.1, which, as its name may suggest, really is a 1.3.1
 version (I did that to avoid introducing an epoch and needing to re-upload
 updates of all reverse dependencies of oslo.messaging). That's fine,
 we're covered for Sid/Jessie.
 
 But then, the Debian Experimental version of oslo.messaging is lower
 than the one in Sid/Jessie, so I have breakage there.
 
 If we declare a new 1.4.1, and have this fixed in our
 global-requirements.txt, then everything goes back in order for me and I
 get back on my feet. Otherwise, I'll have to deal with this, and make
 up fake version numbers which will not match anything actually released by
 OpenStack, which may lead to even more mistakes.
 
 So, could you please at least:
 - add a git tag 1.4.1 to oslo.messaging right now, matching 1.4.0
 
 This will make sure that nobody will use 1.4.1 again, and that I'm fine
 using this version number in Debian Experimental, which will be higher
 than the one in Sid.
 
 And then, optionally, it would help me if you could (but I can live
 without it):
 - Use 1.4.1 for oslo.messaging in global-requirements.txt
 - Have every project that needs 1.4.0 bump to 1.4.1 as well
 
 This would be a lot less work than for me to declare an epoch in the
 oslo.messaging package, and fix all reverse dependencies. The affected
 packages for Juno for me are:
 - ceilometer
 - cinder
 - designate
 - glance
 - heat
 - ironic
 - keystone
 - neutron
 - nova
 - oslo-config
 - oslo.rootwrap
 - oslo.i18n
 - python-pycadf
 
 I'd have to upload updates for all of them even if we use 1.4.1 instead
 of using an epoch (eg: 1:1.4.0), but it's still much better for me to
 use 1.4.1 than an epoch. Epochs are ugly (because they're not visible in
 file names), confusing (it's easy to forget them), and irreversible, so
 I'd like to avoid one if possible.
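For readers unfamiliar with Debian version ordering, a rough sketch of why the +really+ trick and epochs behave as described above (this is a simplification for illustration, not dpkg's actual algorithm, which also handles '~' sorting before the empty string and interleaved alpha/numeric runs):

```python
def split_epoch(version):
    # "1:1.4.0" -> (1, "1.4.0"); a missing epoch means epoch 0.
    if ":" in version:
        epoch, rest = version.split(":", 1)
        return int(epoch), rest
    return 0, version

def upstream_parts(rest):
    # Crude tokenization: compare dot/plus separated chunks,
    # numbers numerically and everything else lexically.
    parts = []
    for chunk in rest.replace("+", ".").split("."):
        parts.append((0, int(chunk)) if chunk.isdigit() else (1, chunk))
    return parts

def deb_less(a, b):
    ea, ra = split_epoch(a)
    eb, rb = split_epoch(b)
    if ea != eb:
        return ea < eb  # the epoch dominates everything after the colon
    return upstream_parts(ra) < upstream_parts(rb)

# The "+really+" trick sorts above the version it reverts...
assert deb_less("1.4.0.0", "1.4.0.0+really+1.3.1")
# ...while an epoch dominates every epoch-less version:
assert deb_less("1.4.0", "1:1.3.1")
```

This is why 1.4.0.0+really+1.3.1 could supersede 1.4.0.0 in Sid, and why an epoch, once introduced, can never be walked back.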
 
 I'm sorry for the mess and added work.
 Cheers,
 
 Thomas Goirand (zigo)
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Doug Hellmann

On Sep 23, 2014, at 5:18 AM, Thierry Carrez thie...@openstack.org wrote:

 Devananda van der Veen wrote:
 On Mon, Sep 22, 2014 at 2:27 PM, Doug Hellmann d...@doughellmann.com wrote:
 On Sep 22, 2014, at 5:10 PM, Devananda van der Veen 
 devananda@gmail.com wrote:
 
 One of the primary effects of integration, as far as the release
 process is concerned, is being allowed to co-gate with other
 integrated projects, and having those projects accept your changes
 (integrate back with the other project). That shouldn't be a TC
 
 The point of integration is to add the projects to the integrated 
 *release*, not just the gate, because the release is the thing we have said 
 is OpenStack. Integration was about our overall project identity and 
 governance. The testing was a requirement to be accepted, not a goal.
 
 We have plenty of things which are clearly part of OpenStack, and yet
 which are not part of the Integrated Release. Oslo. Devstack. Zuul...
 As far as I can tell, the only time when integrated release equals
 the thing we say is OpenStack is when we're talking about the
 trademark.
 
 The main goal of incubation, as we did it in the past cycles, is a
 learning period where the new project aligns enough with the existing
 ones so that it integrates with them (Horizon shows Sahara dashboard)
 and won't break them around release time (stability, co-gate, respect of
 release deadlines).
 
 If we have a strict set of projects in layer #1, I don't see the point
 of keeping incubation. We wouldn't add new projects to layer #1 (only
 project splits which do not really require incubations), and additions
 to the big tent are considered on social alignment only (are you
 "vaguely about cloud" and do you follow "the OpenStack way"). If there is
 nothing to graduate to, there is no need for incubation.
 
 Integration was about our overall project identity and governance. The 
 testing was a requirement to be accepted, not a goal.
 
 Project identity and governance are presently addressed by the
 creation of Programs and a fully-elected TC.  Integration is not
 addressing these things at all, as far as I can tell, though I agree
 that it was initially intended to.
 
 If there is no incubation process, and only a fixed list of projects will 
 be in that new layer 1 group, then do contributors to the other projects 
 have ATC status and vote for the TC? What is the basis for the TC accepting 
 any responsibility for the project, and for the project agreeing to the 
 TC’s leadership?
 
 I think a good basis for this is simply whether the developers of the
 project are part of our community, doing things in the way that we do
 things, and want this to happen. Voting and ATC status is already
 decoupled [0] from the integrated gate and the integrated release --
 it's based on the accepted list of Programs [1], which actually has
 nothing to do with incubation or integration [2].
 
 In Monty's proposal, ATC status would be linked to contributions to the
 big tent. Projects apply to become part of it, subject themselves to the
 oversight of the Technical Committee, and get the right to elect TC
 members in return.

That’s the part that wasn’t clear. If we’re not “incubating” those projects, 
then what criteria do we use to make decisions about the applications? Is a 
declaration of intent enough?

Doug

 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Amir Sadoughi
Instead of moving the Kilo cycle a week later (April 23 → April 30), would it 
be possible to move the L design summit a week ahead (May 18-22 → May 
11-15)? This would extend the L-cycle a week. Additionally, this would also 
help with return travel, as the following Monday is Memorial Day; I'd expect 
return flights starting on Friday to be worse for the weekend of the 22nd, but 
I don't have data on that.

Amir Sadoughi

From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, September 23, 2014 9:54 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

Sean Dague wrote:
 On 09/23/2014 02:56 AM, Thierry Carrez wrote:
 Hi everyone,

 Rather than waste a design summit session about it, I propose we review
 the proposed Kilo release cycle schedule on the mailing-list. See
 attached PDF for a picture of the proposal.

 The hard date the proposal is built around is the next (L) Design
 Summit week: May 18-22. That pushes back far into May (the farthest
 ever). However, with the M design summit very likely to come back in
 October, the release date was set to April 23 to smooth the difference.

 That makes 3 full weeks between release and design summit (like in
 Hong-Kong), allowing for an official off-week on the week of May 4-8.

 Honestly, the off-week really isn't. If we're going to talk about
 throwing a stall week into the dev cycle for spacing, I'd honestly
 rather just push the release back, and be clear to folks that the summer
 cycle is going to be shorter. The off-week I think just causes us to
 lose momentum.

Sure. I'm not adding it for the beauty of it. See below.

 The rest of the proposal is mostly a no-brainer. Like always, we allow a
 longer time for milestone 2, to take into account the end-of-year
 holidays. That gives:

 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 11

 The one thing that felt weird about the cadence of the 1st milestone in
 Havana last year was that it was super start / stop. Dec 11 means that
 we end up with 2 weeks until Christmas, so many people are starting to
 wind down. My suggestion would be to push K1 to Dec 18, because I think
 you won't get much K2 content landed that week anyway.

 For US people at least this would change the Dec cadence from:

 * Holiday Week - Nov 28 is thanksgiving
 * Dev Week
 * Milestone Week
 * Dev Week
 * sort of Holiday Week
 * Holiday Week

 To:

 * Holiday Week
 * Dev Week
 * Dev Week
 * Milestone Week
 * sort of Holiday Week
 * Holiday Week

 Which I feel is going to get more done. If we take back the off week, we
 could just shift everything back a week, which makes K2 less split by
 Christmas.

So we /could/ do:

Kilo Design Summit: Nov 4-7
Kilo-1 milestone: Dec 18
Kilo-2 milestone: Feb 5
Kilo-3 milestone, feature freeze: March 19
2015.1 (Kilo) release: Apr 30
L Design Summit: May 18-22

The main issue is that it would make a *very* short L cycle. Our best
bet for the M design summit is the week of October 26, so L release
would be Oct 8 or Oct 15. I placed Kilo release on April 23 on the
original plan to try to limit the impact.

If you think shorter is not an issue, I guess that's a valid option.

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Doug Hellmann

On Sep 22, 2014, at 8:05 PM, Devananda van der Veen devananda@gmail.com 
wrote:

 On Mon, Sep 22, 2014 at 2:27 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Sep 22, 2014, at 5:10 PM, Devananda van der Veen 
 devananda@gmail.com wrote:
 
 One of the primary effects of integration, as far as the release
 process is concerned, is being allowed to co-gate with other
 integrated projects, and having those projects accept your changes
 (integrate back with the other project). That shouldn't be a TC
 
 The point of integration is to add the projects to the integrated *release*, 
 not just the gate, because the release is the thing we have said is 
 OpenStack. Integration was about our overall project identity and 
 governance. The testing was a requirement to be accepted, not a goal.
 
 We have plenty of things which are clearly part of OpenStack, and yet
 which are not part of the Integrated Release. Oslo. Devstack. Zuul...
 As far as I can tell, the only time when integrated release equals
 the thing we say is OpenStack is when we're talking about the
 trademark.
 
 Integration was about our overall project identity and governance. The 
 testing was a requirement to be accepted, not a goal.
 
 Project identity and governance are presently addressed by the
 creation of Programs and a fully-elected TC.  Integration is not
 addressing these things at all, as far as I can tell, though I agree
 that it was initially intended to.

Good point: I’m mixing terms here. Programs and projects have tended to be 
incubated at the same time. We’ve insisted that it doesn’t make sense to have a 
program if there is nothing being produced, and that a project can’t be 
incubated if the program isn’t also incubated. The fact that we’ve also had 1:1 
coupling between programs and projects is unfortunate, but orthogonal to the 
fact that we have been evaluating the teams as well as the code.

 
 If there is no incubation process, and only a fixed list of projects will be 
 in that new layer 1 group, then do contributors to the other projects have 
 ATC status and vote for the TC? What is the basis for the TC accepting any 
 responsibility for the project, and for the project agreeing to the TC’s 
 leadership?
 
 I think a good basis for this is simply whether the developers of the
 project are part of our community, doing things in the way that we do
 things, and want this to happen. Voting and ATC status is already
 decoupled [0] from the integrated gate and the integrated release --
 it's based on the accepted list of Programs [1], which actually has
 nothing to do with incubation or integration [2].

I’m concerned that we’re combining changes to the way we decide what we include 
in the gate with changes to the way we decide which groups of people have a say 
in how the overall OpenStack project is run. One is a technical discussion that 
has nothing at all to do with governance. The other is entirely about 
governance.

If we are no longer incubating *programs*, which are the teams of people who we 
would like to ensure are involved in OpenStack governance, then how do we make 
that decision? From a practical standpoint, how do we make a list of eligible 
voters for a TC election? Today we pull a list of committers from the git 
history from the projects associated with “official programs”, but if we are 
dropping “official programs” we need some other way to build the list.

Doug

 
 
 -Devananda
 
 [0] 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/charter.rst#n132
 
 [1] 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
 
 [2] 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/new-programs-requirements.rst
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Jeremy Stanley
On 2014-09-23 15:26:20 + (+), Amir Sadoughi wrote:
 Instead of moving the Kilo cycle a week later (April 23 → April
 30), would it be possible to move the L design summit a week
 ahead (May 18-22 → May 11-15)? This would extend the L-cycle a
 week. Additionally, this would also help with return travel, as the
 following Monday is Memorial Day; I'd expect return flights
 starting on Friday to be worse for the weekend of the 22nd, but I
 don't have data on that.

Summit dates tend to already be set in stone by the time we're
discussing release schedules. The foundation has to negotiate
contracts on facilities/hotels a year or more in advance.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] weekly meeting cancelled

2014-09-23 Thread Ruslan Kamaldinov
Since there are no major updates and some folks will not be able to
join Murano weekly meeting, we've decided to skip it for today.

Short update:
1. The only remaining major item left for Juno is documentation, and
everyone is pretty aware of the items we need to have in our docs:
https://etherpad.openstack.org/p/murano-docs
2. We were not able to merge trusts support in Juno and had to
postpone it to Kilo
3. A few bugs need to be fixed before the release, most of them have
patches on review
4. Murano 3rd party CI wasn't quite stable for a few days due to
relocation to another DC. Today it is back to its normal state.


Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Jay Pipes

On 09/22/2014 04:27 PM, Brant Knudson wrote:

On Fri, Sep 19, 2014 at 1:39 AM, Alex Xu x...@linux.vnet.ibm.com wrote:

Close to Kilo, it is time to think about what's next for the Nova API.
In Kilo, we
will continue to develop the important micro-version feature.

The previous v2-on-v3 proposal included some implementations that could
be used for micro-versions
(https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst),
but in the end those implementations were considered too complex.

So I'm trying to find a simpler implementation and solution for
micro-versions.

I wrote down some ideas in a blog post at:
http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/

For those ideas I have also already done some POC work, which you can
find in the blog post.

As discussed in the Nova API meeting, we want to bring this up on the
mailing list for
discussion. I hope we can get more ideas and options from all developers.

We will appreciate any comments and suggestions!

Thanks
Alex



Did you consider JSON Home[1] for this? For Juno we've got JSON Home
support in Keystone for Identity v3 (Zaqar was using it already). We
weren't planning to use it for microversioning since we weren't planning
on doing microversioning, but I think JSON Home could be used for this
purpose.

Using JSON Home, you'd have relationships that include the version, then
the client can check the JSON Home document to see if the server has
support for the relationship the client wants to use.

[1] http://tools.ietf.org/html/draft-nottingham-json-home-03
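As a hedged illustration of the idea (the rel URIs and paths below are invented for this sketch, not real Keystone or Nova relations), a client-side capability check against a JSON Home document might look like:

```python
import json

# A made-up JSON Home document; real services define their own rels.
HOME_DOC = json.loads("""
{
  "resources": {
    "http://example.org/rel/servers-v2.1": {"href": "/v2/servers"},
    "http://example.org/rel/keypair": {
      "href-template": "/v2/keypairs/{name}",
      "href-vars": {"name": "http://example.org/param/name"}
    }
  }
}
""")

def supports(home, rel):
    # Per draft-nottingham-json-home, the top-level "resources" object
    # maps link relation types to resource objects.
    return rel in home.get("resources", {})
```

A client wanting a micro-versioned relation simply checks `supports(HOME_DOC, rel)` before calling, rather than probing the endpoint and handling errors.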


++ I used JSON-Home extensively in the Compute API blueprint I put 
together a few months ago:


http://docs.oscomputevnext.apiary.io/

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Jay Pipes

On 09/22/2014 05:29 AM, Kenichi Oomichi wrote:

-Original Message-
From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
Sent: Friday, September 19, 2014 3:40 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Some ideas for micro-version implementation

Close to Kilo, it is time to think about what's next for the Nova API.
In Kilo, we
will continue to develop the important micro-version feature.

The previous v2-on-v3 proposal included some implementations that could
be used for micro-versions
(https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst),
but in the end those implementations were considered too complex.

So I'm trying to find a simpler implementation and solution for
micro-versions.

I wrote down some ideas in a blog post at:
http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/

For those ideas I have also already done some POC work, which you can
find in the blog post.

As discussed in the Nova API meeting, we want to bring this up on the
mailing list for
discussion. I hope we can get more ideas and options from all developers.

We will appreciate any comments and suggestions!


Before discussing how to implement, I'd like to consider what we should
implement. IIUC, the purpose of the v3 API was to make the API consistent
through backwards-incompatible changes. Through huge discussion in the Juno
cycle, we learned that backwards-incompatible changes to the REST API would
be a huge pain for clients and we should avoid such changes as much as
possible.


Frankly, I believe the lack of perceived value in the v3 API was the 
reason backwards incompatibility was perceived as such a huge pain. v3 
wasn't adding any functionality in the API that wasn't in v2, and v3 was 
bringing along with it much of the crap from the v2 API ... for example, 
API extensions and the ludicrous and needless complexity they add to the 
API.


If the v3 API had given users real additions in functionality, ease of 
use, and clarity, I think users would have embraced a new direction more 
-- especially if support for the v2 API was kept alongside the newer API 
for some period of time. But the v3 API didn't offer anything that 
application and tooling developers cared about. It offered some cleanups 
in return codes, some cleanups in resource naming and parameters, and 
not much else.


 If new APIs

which are consistent in Nova API only are inconsistent for whole OpenStack
projects, maybe we need to change them again for whole OpenStack consistency.

To avoid such a situation, I think we need to define what a consistent
REST API across projects looks like. According to Alex's blog, the topics might be

  - Input/Output attribute names
  - Resource names
  - Status code

The following are hints for making consistent APIs, drawn from the Nova v3 API
experience; I'd like to know whether they are the best choices for API consistency.

(1) Input/Output attribute names
(1.1) These names should be snake_case.
   eg: imageRef → image_ref, flavorRef → flavor_ref, hostId → host_id


Sure, agreed.
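The renaming in (1.1) is mechanical; a small sketch of the rule (not code from any OpenStack project):

```python
import re

def to_snake(name):
    # Insert '_' before an uppercase letter that follows a lowercase
    # letter or digit, then lowercase it: imageRef -> image_ref.
    return re.sub(r"(?<=[a-z0-9])([A-Z])",
                  lambda m: "_" + m.group(1).lower(), name)

assert to_snake("imageRef") == "image_ref"
assert to_snake("flavorRef") == "flavor_ref"
assert to_snake("hostId") == "host_id"
```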


(1.2) These names should contain extension names if they are provided in case 
of some extension loading.
   eg: security_groups → os-security-groups:security_groups
   config_drive → os-config-drive:config_drive
(1.3) Extension names should consist of hyphens and lowercase characters.
   eg: OS-EXT-AZ:availability_zone → 
os-extended-availability-zone:availability_zone
   OS-EXT-STS:task_state → os-extended-status:task_state
(1.4) Extension names should contain the prefix os- if the extension is not 
core.
   eg: rxtx_factor → os-flavor-rxtx:rxtx_factor
   os-flavor-access:is_public → flavor-access:is_public (the flavor-access 
extension became core)


Frankly, API extensions are a sore on the lip of OpenStack's smile.

They add needless complexity to the API and I believe a discoverable API 
with its resources micro-versioned with schema objects means that API 
extensions should just go the way of the Dodo.



(1.5) The depth of the first attribute should be one.
   eg: create a server API with scheduler hints
 -- v2 API input attribute sample ---
   {
       "server": {
           "imageRef": "e5468cc9-3e91-4449-8c4f-e4203c71e365",
           [..]
       },
       "OS-SCH-HNT:scheduler_hints": {
           "same_host": "5a3dec46-a6e1-4c4d-93c0-8543f5ffe196"
       }
   }
 -- v3 API input attribute sample ---
   {
       "server": {
           "image_ref": "e5468cc9-3e91-4449-8c4f-e4203c71e365",
           [..]
           "os-scheduler-hints:scheduler_hints": {
               "same_host": "5a3dec46-a6e1-4c4d-93c0-8543f5ffe196"
           }
       }
   }


-1. What is the point of the containing "server" attribute in the outer 
dict? Why have it at all?


Each resource in the REST API should have the same structure: all 
attributes should be top-level, and a special "_links" attribute should 
be used for the collection of named hyperlinks using 

Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-23 Thread Yee, Guang
++ Amen, brother!



From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Tuesday, September 23, 2014 7:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] Stepping down as PTL







On Tue, Sep 23, 2014 at 3:51 AM, Thierry Carrez thie...@openstack.org wrote:

Adam Young wrote:
 OpenStack owes you more than most people realize.

+1

Dolph did a great job of keeping the fundamental piece that is Keystone
safe from a release management perspective, by consistently hitting all
the deadlines, giving time for other projects to safely build on it.



+1



Thank you for all your dedication and hard work! It's easy to contribute and
become a better developer when strong project leadership is in place to learn
from.




 Don't you dare pull a Joe Heck and disappear on us now.

:)

--
Thierry Carrez (ttx)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-23 Thread Zane Bitter

On 22/09/14 22:04, Joe Gordon wrote:

To me this is less about valid or invalid choices. The Zaqar team is
comparing Zaqar to SQS, but after digging into the two of them, Zaqar
barely looks like SQS. Zaqar doesn't guarantee what are IMHO the most
important parts of SQS: that the message will be delivered and will never
be lost.


I agree that this is the most important feature. Happily, Flavio has 
clarified this in his other thread[1]:


 *Zaqar's vision is to provide a cross-cloud interoperable,
  fully-reliable messaging service at scale that is both easy and
  non-invasive for deployers and users.*

  ...

  Zaqar aims to be a fully-reliable service, therefore messages should
  never be lost under any circumstances except for when the message's
  expiration time (ttl) is reached

So Zaqar _will_ guarantee reliable delivery.


Zaqar doesn't have the same scaling properties as SQS.


This is true. (That's not to say it won't scale, but it doesn't scale in 
exactly the same way that SQS does because it has a different architecture.)


It appears that the main reason for this is the ordering guarantee, 
which was introduced in response to feedback from users. So this is 
clearly a different design choice: SQS chose reliability plus 
effectively infinite scalability, while Zaqar chose reliability plus 
FIFO. It's not feasible to satisfy all three simultaneously, so the 
options are:


1) Implement two separate modes and allow the user to decide
2) Continue to choose FIFO over infinite scalability
3) Drop FIFO and choose infinite scalability instead

This is one of the key points on which we need to get buy-in from the 
community on selecting one of these as the long-term strategy.



Zaqar is aiming for low latency per message, SQS doesn't appear to be.


I've seen no evidence that Zaqar is actually aiming for that. There are 
waaay lower-latency ways to implement messaging if you don't care about 
durability (you wouldn't do store-and-forward, for a start). If you see 
a lot of talk about low latency, it's probably because for a long time 
people insisted on comparing Zaqar to RabbitMQ instead of SQS.


(Let's also be careful not to talk about low latency as if it were a 
virtue in itself; it's simply something we would happily trade off for 
other properties. Zaqar _is_ making that trade-off.)



So if Zaqar isn't SQS what is Zaqar and why should I use it?


If you are a small-to-medium user of an SQS-like service, Zaqar is like 
SQS but better because not only does it never lose your messages but 
they always arrive in order, and you have the option to fan them out to 
multiple subscribers. If you are a very large user along one particular 
dimension (I believe it's number of messages delivered from a single 
queue, but probably Gordon will correct me :D) then Zaqar may not _yet_ 
have a good story for you.


cheers,
Zane.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046809.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Duncan Thomas
On 22 September 2014 23:14, Robert Collins robe...@robertcollins.net wrote:
 I am not at all sure we've prevented other flowers blooming -
 and I hate the idea that we have done that.

I've certainly sat around at discussions which shut down hard with
somebody making the statement that 'that is TripleO's field and they
don't like X', it is a genuine if entirely unintentional problem.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Not seeking another term as PTL

2014-09-23 Thread Duncan Thomas
On 23 September 2014 15:57, John Griffith john.griffi...@gmail.com wrote:
 Hey Everyone,

 I've been kinda mixed on this one, but I think it's a good time for me to
 not run for Cinder PTL.  I've been filling the role since we started the
 idea back at the Folsom Summit, and it's been an absolute pleasure and honor
 for me.


It's been a pleasure to work with you this far, and I hope to continue
doing so. You've seen Cinder through its birthing pains and into a
mature project, and done a great job doing so.

Cheers for all the hard work.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-23 Thread Doug Hellmann

On Sep 23, 2014, at 7:53 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Tue, Sep 23, 2014 at 07:42:30AM -0400, Sean Dague wrote:
 On 09/23/2014 02:56 AM, Thierry Carrez wrote:
 Hi everyone,
 
 Rather than waste a design summit session about it, I propose we review
 the proposed Kilo release cycle schedule on the mailing-list. See
 attached PDF for a picture of the proposal.
 
 The hard date the proposal is built around is the next (L) Design
 Summit week: May 18-22. That pushes back far into May (the farthest
  ever). However, with the M design summit very likely to come back in
  October, the release date was set to April 23 to smooth the difference.
 
 That makes 3 full weeks between release and design summit (like in
 Hong-Kong), allowing for an official off-week on the week of May 4-8.
 
 Honestly, the off-week really isn't. If we're going to talk about
 throwing a stall week into the dev cycle for spacing, I'd honestly
 rather just push the release back, and be clear to folks that the summer
 cycle is going to be shorter. The off-week I think just causes us to
 lose momentum.
 
 I didn't really notice anyone stop working in the last off-week we
 had. Changes kept being submitted at normal rate, people kept talking
 on IRC, etc, etc. An off-week is a nice idea in theory but it didn't
 seem to have much effect in practice AFAICT.
 
 The rest of the proposal is mostly a no-brainer. Like always, we allow a
 longer time for milestone 2, to take into account the end-of-year
 holidays. That gives:
 
 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 11
 
 The one thing that felt weird about the cadence of the 1st milestone in
 Havana last year was that it was super start / stop. Dec 11 means that
 we end up with 2 weeks until christmas, so many people are starting to
 wind down. My suggestion would be to push K1 to Dec 18, because I think
 you won't get much K2 content landed that week anyway.
 
 For US people at least this would change the Dec cadence from:
 
 * Holiday Week - Nov 28 is thanksgiving
 * Dev Week
 * Milestone Week
 * Dev Week
 * sort of Holiday Week
 * Holiday Week
 
 To:
 
 * Holiday Week
 * Dev Week
 * Dev Week
 * Milestone Week
 * sort of Holiday Week
 * Holiday Week
 
 Which I feel is going to get more done. If we take back the off week, we
 could just shift everything back a week, which makes K2 less split by
 Christmas.
 
 Kilo-2 milestone: Jan 29
 Kilo-3 milestone, feature freeze: March 12
 2015.1 (Kilo) release: Apr 23
 L Design Summit: May 18-22
 
 I find it kind of weird that we have a one-month gap between Kilo being
 released and the L design summit taking place. The design summits are
 supposed to be where we talk about and agree on the big themes of the
 release, but we've already had 4+ weeks of working on the release by
 the time the summit takes place ?!?! Not to mention that we branched

I like the extra time, and there are lots of ways to make it productive: do the 
little cleanup tasks that are always put off; plan better for the summit 
sessions by preparing proof-of-concept code; fix more bugs; work on stable 
branches. I don’t think it’s necessary to wait until after the summit to start 
any work at all on the next release. We’re certainly not waiting in Oslo.

 even before that, so work actually takes place even further in advance
 of the design summit. I feel this is a contributing factor to the first
 milestone being comparatively less productive than the later milestones
 and causes big work to get pushed out later in the cycle. We really want
 to try and encourage work to happen earlier in the cycles to avoid the
 big crunch in m3. Is there a reason for this big gap between release
 and summit, or can we let Kilo go longer by 2-3 weeks, so we have better
 alignment between the design summit happening and the milestone 1 dev cycle start?
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org -o- http://virt-manager.org :|
 |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-23 Thread Henry Nash
Agree with all the comments made - Dolph, you really did a great job as PTL - 
keeping the balanced view is a crucial part of the role.  Keystone is the 
better for it.

Henry
On 23 Sep 2014, at 17:08, Yee, Guang guang@hp.com wrote:

 ++ Amen, brother!
  
 From: Lance Bragstad [mailto:lbrags...@gmail.com] 
 Sent: Tuesday, September 23, 2014 7:52 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [keystone] Stepping down as PTL
  
  
  
 On Tue, Sep 23, 2014 at 3:51 AM, Thierry Carrez thie...@openstack.org wrote:
 Adam Young wrote:
  OpenStack owes you more than most people realize.
 
 +1
 
 Dolph did a great job of keeping the fundamental piece that is Keystone
 safe from a release management perspective, by consistently hitting all
 the deadlines, giving time for other projects to safely build on it.
  
 +1  
  
 Thank you for all your dedication and hard work! It's easy to contribute 
 and become a
 better developer when strong project leadership is in place to learn from.
  
 
  Don't you dare pull a Joe Heck and disappear on us now.
 
 :)
 
 --
 Thierry Carrez (ttx)
 





Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Devananda van der Veen
On Tue, Sep 23, 2014 at 8:40 AM, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 22, 2014, at 8:05 PM, Devananda van der Veen devananda@gmail.com 
 wrote:

 On Mon, Sep 22, 2014 at 2:27 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 22, 2014, at 5:10 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 One of the primary effects of integration, as far as the release
 process is concerned, is being allowed to co-gate with other
 integrated projects, and having those projects accept your changes
 (integrate back with the other project). That shouldn't be a TC

 The point of integration is to add the projects to the integrated 
 *release*, not just the gate, because the release is the thing we have said 
 is OpenStack. Integration was about our overall project identity and 
 governance. The testing was a requirement to be accepted, not a goal.

 We have plenty of things which are clearly part of OpenStack, and yet
 which are not part of the Integrated Release. Oslo. Devstack. Zuul...
 As far as I can tell, the only time when integrated release equals
 the thing we say is OpenStack is when we're talking about the
 trademark.

 Integration was about our overall project identity and governance. The 
 testing was a requirement to be accepted, not a goal.

 Project identity and governance are presently addressed by the
 creation of Programs and a fully-elected TC.  Integration is not
 addressing these things at all, as far as I can tell, though I agree
 that it was initially intended to.

 Good point: I’m mixing terms here. Programs and projects have tended to be 
 incubated at the same time. We’ve insisted that it doesn’t make sense to have 
 a program if there is nothing being produced, and that a project can’t be 
 incubated if the program isn’t also incubated. The fact that we’ve also had 
 1:1 coupling between programs and projects is unfortunate, but orthogonal to 
 the fact that we have been evaluating the teams as well as the code.


 If there is no incubation process, and only a fixed list of projects will 
 be in that new layer 1 group, then do contributors to the other projects 
 have ATC status and vote for the TC? What is the basis for the TC accepting 
 any responsibility for the project, and for the project agreeing to the 
 TC’s leadership?

 I think a good basis for this is simply whether the developers of the
 project are part of our community, doing things in the way that we do
 things, and want this to happen. Voting and ATC status is already
 decoupled [0] from the integrated gate and the integrated release --
 it's based on the accepted list of Programs [1], which actually has
 nothing to do with incubation or integration [2].

 I’m concerned that we’re combining changes to the way we decide what we 
 include in the gate with changes to the way we decide which groups of people 
 have a say in how the overall OpenStack project is run.

These things already weren't related.

 One is a technical discussion that has nothing at all to do with governance. 
 The other is entirely about governance.

 If we are no longer incubating *programs*, which are the teams of people who 
 we would like to ensure are involved in OpenStack governance, then how do we 
 make that decision? From a practical standpoint, how do we make a list of 
 eligible voters for a TC election? Today we pull a list of committers from 
 the git history from the projects associated with “official programs”, but if 
 we are dropping “official programs” we need some other way to build the list.

I don't think incubation ever applied to programs. Any program listed
in 
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
is official and gets voting rights starting in the election after it
was added to that file.

I also don't think that Monty's proposal suggests that we drop
programs. I think it's suggesting the opposite -- we allow *more*
programs (and the projects associated with them) into the openstack/*
fold without requiring them to join the integrated gating of the
layer #1 projects.

-Devananda



Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Thierry Carrez
Doug Hellmann wrote:
 On Sep 23, 2014, at 5:18 AM, Thierry Carrez thie...@openstack.org wrote:
 
 Devananda van der Veen wrote:
 On Mon, Sep 22, 2014 at 2:27 PM, Doug Hellmann d...@doughellmann.com 
 wrote:
 On Sep 22, 2014, at 5:10 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 One of the primary effects of integration, as far as the release
 process is concerned, is being allowed to co-gate with other
 integrated projects, and having those projects accept your changes
 (integrate back with the other project). That shouldn't be a TC

 The point of integration is to add the projects to the integrated 
 *release*, not just the gate, because the release is the thing we have 
 said is OpenStack. Integration was about our overall project identity and 
 governance. The testing was a requirement to be accepted, not a goal.

 We have plenty of things which are clearly part of OpenStack, and yet
 which are not part of the Integrated Release. Oslo. Devstack. Zuul...
 As far as I can tell, the only time when integrated release equals
 the thing we say is OpenStack is when we're talking about the
 trademark.

 The main goal of incubation, as we did it in the past cycles, is a
 learning period where the new project aligns enough with the existing
 ones so that it integrates with them (Horizon shows Sahara dashboard)
 and won't break them around release time (stability, co-gate, respect of
 release deadlines).

 If we have a strict set of projects in layer #1, I don't see the point
 of keeping incubation. We wouldn't add new projects to layer #1 (only
 project splits which do not really require incubations), and additions
 to the big tent are considered on social alignment only (“are you
 vaguely about cloud” and “do you follow the OpenStack way”). If there is
 nothing to graduate to, there is no need for incubation.

 Integration was about our overall project identity and governance. The 
 testing was a requirement to be accepted, not a goal.

 Project identity and governance are presently addressed by the
 creation of Programs and a fully-elected TC.  Integration is not
 addressing these things at all, as far as I can tell, though I agree
 that it was initially intended to.

 If there is no incubation process, and only a fixed list of projects will 
 be in that new layer 1 group, then do contributors to the other projects 
 have ATC status and vote for the TC? What is the basis for the TC 
 accepting any responsibility for the project, and for the project agreeing 
 to the TC’s leadership?

 I think a good basis for this is simply whether the developers of the
 project are part of our community, doing things in the way that we do
 things, and want this to happen. Voting and ATC status is already
 decoupled [0] from the integrated gate and the integrated release --
 it's based on the accepted list of Programs [1], which actually has
 nothing to do with incubation or integration [2].

 In Monty's proposal, ATC status would be linked to contributions to the
 big tent. Projects apply to become part of it, subject themselves to the
 oversight of the Technical Committee, and get the right to elect TC
 members in return.
 
 That’s the part that wasn’t clear. If we’re not “incubating” those projects, 
 then what criteria do we use to make decisions about the applications? Is a 
 declaration of intent enough?

In Monty's proposal, the big tent is pretty welcoming. The bar is “are
you one of us”:

 Some examples of community that we care about: being on stackforge rather 
 than github; having a PTL who you elect rather than a BDFL; having meetings 
 on IRC. Do any of the people who hack on the project also hack on any other 
 existing OpenStack projects, or are the people completely unconnected? is a 
 potential social touchstone as well.
 
 All in all, meeting the requirements for being one of us is not 
 particularly hard, nor should it be.

It's a community values assessment... I don't see what incubating
would give us there, apart from preserving red tape.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Keystone][Oslo] Federation and Policy

2014-09-23 Thread David Chadwick
Hi Doug

thanks. We will test this next week

regards

David

On 19/09/2014 18:43, Doug Hellmann wrote:
 
 On Sep 19, 2014, at 6:56 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 


 On 18/09/2014 22:14, Doug Hellmann wrote:

 On Sep 18, 2014, at 4:34 PM, David Chadwick d.w.chadw...@kent.ac.uk
 wrote:



 On 18/09/2014 21:04, Doug Hellmann wrote:

 On Sep 18, 2014, at 12:36 PM, David Chadwick 
 d.w.chadw...@kent.ac.uk wrote:

 Our recent work on federation suggests we need an improvement
 to the way the policy engine works. My understanding is that
 most functions are protected by the policy engine, but some are
 not. The latter functions are publicly accessible. But there is
 no way in the policy engine to specify public access to a
 function and there ought to be. This will allow an
 administrator to configure the policy for a function to range
 from very lax (publicly accessible) to very strict (admin
 only). A policy of "" means that any authenticated user can
 access the function. But there is no way in the policy to
 specify that an unauthenticated user (i.e. public) has access
 to a function.

 We have already identified one function (get trusted IdPs 
 identity:list_identity_providers) that needs to be publicly 
 accessible in order for users to choose which IdP to use for 
 federated login. However some organisations may not wish to
 make this API call publicly accessible, whilst others may wish
 to restrict it to Horizon only etc. This indicates that that
 the policy needs to be set by the administrator, and not by
 changes to the code (i.e. to either call the policy engine or
 not, or to have two different API calls).

 I don’t know what list_identity_providers does.

 it lists the IDPs that Keystone trusts to authenticate users

 Can you give a little more detail about why some providers would
 want to make it not public

 I am not convinced that many cloud services will want to keep this
 list secret. Today if you do federated login, the public web page
 of the service provider typically lists the logos or names of its
 trusted IDPs (usually Facebook and Google). Also all academic
 federations publish their full lists of IdPs. But it has been
 suggested that some commercial cloud providers may not wish to
 publicise this list and instead require the end users to know which
 IDP they are going to use for federated login. In which case the
 list should not be public.


 if we plan to make it public by default? If we think there’s a 
 security issue, shouldn’t we just protect it?


 Its more a commercial in confidence issue (I dont want the world to
 know who I have agreements with) rather than a security issue,
 since the IDPs are typically already well known and already protect
 themselves against attacks from hackers on the Internet.

 OK. The weak “someone might want to” requirement aside, and again
 showing my ignorance of implementation details, do we truly have to
 add a new feature to disable the policy check? Is there no way to
 have an “always allow” policy using the current syntax?

 You tell me. If there is, then problem solved. If not, then my request
 still stands
 
From looking at the code, it appears that something like True:"True" (or 
possibly True:True) would always pass, but I haven’t tested that.
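The behaviour being discussed can be made concrete with a small, self-contained sketch. To be clear, this is a toy stand-in rather than Keystone's actual policy engine: the "public" keyword is the extension David proposes, and the empty rule is the existing "any authenticated user" case:

```python
# Toy policy engine, NOT Keystone's real implementation: "public" is the
# hypothetical keyword proposed in this thread for unauthenticated access,
# "" is the existing "any authenticated user" rule, and "role:x" stands in
# for the usual role checks.
RULES = {
    "identity:list_identity_providers": "public",   # anyone, no token needed
    "identity:list_users": "",                      # any authenticated user
    "identity:delete_user": "role:admin",           # admin only
}

def enforce(action, credentials):
    """Return True if the caller may invoke `action`.

    `credentials` is None for an unauthenticated (public) caller,
    otherwise a dict such as {"roles": ["admin"]}.
    """
    rule = RULES.get(action, "role:admin")  # default closed
    if rule == "public":
        return True                  # the proposed keyword: skip auth entirely
    if credentials is None:
        return False                 # everything else requires a token
    if rule == "":
        return True                  # empty rule: any authenticated user
    kind, _, value = rule.partition(":")
    return kind == "role" and value in credentials.get("roles", [])

# An anonymous caller can discover the trusted IdPs but nothing else:
assert enforce("identity:list_identity_providers", None)
assert not enforce("identity:list_users", None)
assert enforce("identity:delete_user", {"roles": ["admin"]})
```

With a scheme like this the administrator chooses per-function how lax or strict each API is, which is exactly the range David asks for.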
 
 Doug
 

 regards

 David


 Doug


 regards

 David


 If we can invent some policy syntax that indicates public
 access, e.g. reserved keyword of public, then Keystone can
 always call the policy file for every function and there would
 be no need to differentiate between protected APIs and
 non-protected APIs as all would be protected to a greater or
 lesser extent according to the administrator's policy.

 Comments please

 It seems reasonable to have a way to mark a function as fully
 public, if we expect to really have those kinds of functions.

 Doug


 regards

 David






 


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Vishvananda Ishaya

On Sep 23, 2014, at 8:40 AM, Doug Hellmann d...@doughellmann.com wrote:

 If we are no longer incubating *programs*, which are the teams of people who 
 we would like to ensure are involved in OpenStack governance, then how do we 
 make that decision? From a practical standpoint, how do we make a list of 
 eligible voters for a TC election? Today we pull a list of committers from 
 the git history from the projects associated with “official programs”, but if 
 we are dropping “official programs” we need some other way to build the list.

Joe Gordon mentioned an interesting idea to address this (which I am probably 
totally butchering), which is that we make incubation more similar to the ASF 
Incubator. In other words make it more lightweight with no promise of 
governance or infrastructure support.

It is also interesting to consider that we may not need much governance for 
things outside of layer1. Of course, this may be dancing around the actual 
problem to some extent, because there are a bunch of projects that are not 
layer1 that are already a part of the community, and we need a solution that 
includes them somehow.

Vish




Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Thierry Carrez
Devananda van der Veen wrote:
 On Tue, Sep 23, 2014 at 8:40 AM, Doug Hellmann d...@doughellmann.com wrote:
 One is a technical discussion that has nothing at all to do with governance. 
 The other is entirely about governance.

 If we are no longer incubating *programs*, which are the teams of people who 
 we would like to ensure are involved in OpenStack governance, then how do we 
 make that decision? From a practical standpoint, how do we make a list of 
 eligible voters for a TC election? Today we pull a list of committers from 
 the git history from the projects associated with “official programs”, but 
 if we are dropping “official programs” we need some other way to build the 
 list.
 
 I don't think incubation ever applied to programs. Any program listed
 in 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
 is official and gets voting rights starting in the election after it
 was added to that file.

I confirm there never was incubation for programs. The only thing that
goes through incubation are projects that want to become part of the
integrated release.

 I also don't think that Monty's proposal suggests that we drop
 programs. I think it's suggesting the opposite -- we allow *more*
 programs (and the projects associated with them) into the openstack/*
 fold without requiring them to join the integrated gating of the
 layer #1 projects.

Although the proposal might make them so cheap we wouldn't need a formal
word for them.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-23 Thread Morgan Fainberg
Keystone team has released 0.11.1 of python-keystoneclient. Due to some delays 
getting things through the gate this took a few extra days.

https://pypi.python.org/pypi/python-keystoneclient/0.11.1

—Morgan 


—
Morgan Fainberg


-Original Message-
From: John Dickinson m...@not.mn
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: September 17, 2014 at 20:54:19
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [release] client release deadline - Sept 18th

 I just released python-swiftclient 2.3.0
  
 In addition to some smaller changes and bugfixes, the biggest changes are the 
 support  
 for Keystone v3 and a refactoring that allows for better testing and 
 extensibility of  
 the functionality exposed by the CLI.
  
 https://pypi.python.org/pypi/python-swiftclient/2.3.0
  
 --John
  
  
  
 On Sep 17, 2014, at 8:14 AM, Matt Riedemann wrote:
  
 
 
  On 9/15/2014 12:57 PM, Matt Riedemann wrote:
 
 
  On 9/10/2014 11:08 AM, Kyle Mestery wrote:
  On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
  wrote:
 
 
  On 9/9/2014 4:19 PM, Sean Dague wrote:
 
  As we try to stabilize OpenStack Juno, many server projects need to get
  out final client releases that expose new features of their servers.
  While this seems like not a big deal, each of these clients releases
  ends up having possibly destabilizing impacts on the OpenStack whole
  (as
  the clients do double duty in cross communicating between services).
 
  As such in the release meeting today it was agreed clients should have
  their final release by Sept 18th. We'll start applying the dependency
  freeze to oslo and clients shortly after that, all other requirements
  should be frozen at this point unless there is a high priority bug
  around them.
 
  -Sean
 
 
  Thanks for bringing this up. We do our own packaging and need time
  for legal
  clearances and having the final client releases done in a reasonable
  time
  before rc1 is helpful. I've been pinging a few projects to do a final
  client release relatively soon. python-neutronclient has a release this
  week and I think John was planning a python-cinderclient release this
  week
  also.
 
  Just a slight correction: python-neutronclient will have a final
  release once the L3 HA CLI changes land [1].
 
  Thanks,
  Kyle
 
  [1] https://review.openstack.org/#/c/108378/
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 
 
 
 
  python-cinderclient 1.1.0 was released on Saturday:
 
  https://pypi.python.org/pypi/python-cinderclient/1.1.0
 
 
  python-novaclient 2.19.0 was released yesterday [1].
 
  List of changes:
 
  mriedem@ubuntu:~/git/python-novaclient$ git log 2.18.1..2.19.0 --oneline 
  --no-merges  
  cd56622 Stop using intersphinx
  d96f13d delete python bytecode before every test run
  4bd0c38 quota delete tenant_id parameter should be required
  3d68063 Don't display duplicated security groups
  2a1c07e Updated from global requirements
  319b61a Fix test mistake with requests-mock
  392148c Use oslo.utils
  e871bd2 Use Token fixtures from keystoneclient
  aa30c13 Update requirements.txt to include keystoneclient
  bcc009a Updated from global requirements
  f0beb29 Updated from global requirements
  cc4f3df Enhance network-list to allow --fields
  fe95fe4 Adding Nova Client support for auto find host APIv2
  b3da3eb Adding Nova Client support for auto find host APIv3
  3fa04e6 Add filtering by service to hosts list command
  c204613 Quickstart (README) doc should refer to nova
  9758ffc Updated from global requirements
  53be1f4 Fix listing of flavor-list (V1_1) to display swap value
  db6d678 Use adapter from keystoneclient
  3955440 Fix the return code of the command delete
  c55383f Fix variable error for nova --service-type
  caf9f79 Convert to requests-mock
  33058cb Enable several checks and do not check docs/source/conf.py
  abae04a Updated from global requirements
  68f357d Enable check for E131
  b6afd59 Add support for security-group-default-rules
  ad9a14a Fix rxtx_factor name for creating a flavor
  ff4af92 Allow selecting the network for doing the ssh with
  9ce03a9 fix host resource repr to use 'host' attribute
  4d25867 Enable H233
  60d1283 Don't log sensitive auth data
  d51b546 Enabled hacking checks H305 and H307
  8ec2a29 Edits on help strings
  c59a0c8 Add support for new fields in network create
  67585ab Add version-list for listing REST API versions
  0ff4afc Description is mandatory parameter when creating Security Group
  6ee0b28 Filter endpoints by region 

Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-23 Thread Dan Prince
On Fri, 2014-09-19 at 09:13 -0400, Sean Dague wrote:
 I've spent the better part of the last 2 weeks in the Nova bug tracker
 to try to turn it into something that doesn't cause people to run away
 screaming. I don't remember exactly where we started at open bug count 2
 weeks ago (it was north of 1400, with > 200 bugs in new, but it might
 have been north of 1600), but as of this email we're at < 1000 open bugs
 (I'm counting Fix Committed as closed, even though LP does not), and ~0
 new bugs (depending on the time of the day).
 
 == Philosophy in Triaging ==
 
 I'm going to lay out the philosophy of triaging I've had, because this
 may also set the tone going forward.
 
 A bug tracker is a tool to help us make a better release. It does not
 exist for its own good, it exists to help. Which means when evaluating
 what stays in and what leaves we need to evaluate if any particular
 artifact will help us make a better release. But also more importantly
 realize that there is a cost for carrying every artifact in the tracker.
 Resolving duplicates gets non linearly harder as the number of artifacts
 go up. Triaging gets non-linearly hard as the number of artifacts go up.
 
 With this I was being somewhat pragmatic about closing bugs. An old bug
 that is just a stacktrace is typically not useful. An old bug that is a
 vague sentence that we should refactor a particular module (with no
 specifics on the details) is not useful. A bug reported against a very
 old version of OpenStack where the code has changed a lot in the
 relevant area, and there aren't responses from the author, is not
 useful. Not useful bugs just add debt, and we should get rid of them.
 That makes the chance of pulling a random bug off the tracker something
 that you could actually look at fixing, instead of mostly just stalling out.
 
 So I closed a lot of stuff as Invalid / Opinion that fell into those camps.
 
 == Keeping New Bugs at close to 0 ==
 
 After driving the bugs in the New state down to zero last week, I found
 it's actually pretty easy to keep it at 0.
 
 We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
 aren't actually a bug, and can be closed immediately. ~30% look like a
 bug, but don't have anywhere near enough information in them, and
 flipping them to incomplete with questions quickly means we have a real
 chance of getting the right info. ~10% are fixable in < 30 minutes' worth
 of work. And the rest are real bugs, that seem to have enough to dive
 into it, and can be triaged into Confirmed, set a priority, and add the
 appropriate tags for the area.
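Those percentages imply a rough daily triage budget. A back-of-the-envelope sketch (the 15/day figure is my assumed midpoint of the "10 - 20 new bugs a day" estimate):

```python
daily_new = 15  # assumed midpoint of the "10 - 20 new bugs a day" estimate
outcomes = {
    "not actually a bug (close immediately)": 0.20,
    "incomplete (flip to Incomplete with questions)": 0.30,
    "fixable in under 30 minutes": 0.10,
    "real bug (Confirmed, prioritized, tagged)": 0.40,  # the remainder
}
for outcome, share in outcomes.items():
    print("%s: ~%.0f/day" % (outcome, daily_new * share))
```

So on an average weekday only about six bugs need a full triage pass, which is why keeping New at zero turns out to be sustainable.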
 
 But, more importantly, this means we can filter bug quality on the way
 in. And we can also encourage bug reporters that are giving us good
 stuff, or even easy stuff, as we respond quickly.
 
 Recommendation #1: we adopt a 0 new bugs policy to keep this from
 getting away from us in the future.
 
 == Our worse bug reporters are often core reviewers ==
 
 I'm going to pick on Dan Prince here, mostly because I have a recent
 concrete example, however in triaging the bug queue much of the core
 team is to blame (including myself).
 
 https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
 was set incomplete and no response. I'm almost 100% sure it's a dupe of
 the multiprocess bug we've been tracking down but it's so terse that you
 can't get to the bottom of it.


This bug was filed as a result of a cryptic (to me at the time) gate
unit test failure that occurred in this review:

https://review.openstack.org/#/c/120099/

I mistakenly grabbed the last timeout error instead of looking at the
original timeout. Within 30 minutes or so of my post Matt Riedemann had
correctly classified it as https://bugs.launchpad.net/nova/+bug/1357578

I've added some extra data and marked it as a dup.

Dan


 
 There were a ton of 2012 nova bugs that were basically post it notes.
 Oh, we should refactor this function. Full stop. While those are fine
 for personal tracking, their value goes to zero probably 3 months after
 they are files, especially if the reporter stops working on the issue at
 hand. Nova has plenty of “wouldn't it be great if we...” ideas. I'm not
 convinced using bugs for those is useful unless we go and close them out
 aggressively if they stall.
 
 Also, if Nova core can't file a good bug, it's hard to set the example
 for others in our community.
 
 Recommendation #2: hey, Nova core, lets be better about filing the kinds
 of bugs we want to see! mkay!
 
 Recommendation #3: Let's create a tag for personal work items or
 something for these class of TODOs people are leaving themselves that
 make them a ton easier to cull later when they stall and no one else has
 enough context to pick them up.
 
 == Tags ==
 
 The aggressive tagging that Tracy brought into the project has been
 awesome. It definitely helps slice out into better functional areas.
 Here is the top of our current official tag list (and bug count):
 
 95 compute
 83 

[openstack-dev] [keystone] python-keystoneclient release 0.11.1

2014-09-23 Thread Morgan Fainberg
The Keystone team has released python-keystoneclient 0.11.1 [1]. This version 
is meant to be the release coinciding with the Juno release of OpenStack.

The release includes two fixes [2] on top of the 0.11.0 [3] version.


Cheers,
Morgan Fainberg

[1] https://pypi.python.org/pypi/python-keystoneclient/0.11.1
[2] https://launchpad.net/python-keystoneclient/+milestone/0.11.1
[3] https://launchpad.net/python-keystoneclient/+milestone/0.11.0

—
Morgan Fainberg





Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Kevin L. Mitchell
On Tue, 2014-09-23 at 12:09 -0400, Jay Pipes wrote:
 I'd like to say finally that I think there should be an OpenStack API 
 working group whose job it is to both pull together a set of OpenStack 
 API practices as well as evaluate new REST APIs proposed in the 
 OpenStack ecosystem to provide guidance to new projects or new 
 subprojects wishing to add resources to an existing REST API.

One of the things that's been bothering me about OpenStack for a while
now is the fact that we have all these different APIs on different
endpoints.  What I've been wondering about is if we should create a
unified REST API service to replace the service from all of the
individual projects.  Then, users can just hit that one service to
handle all their different interactions.
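A minimal sketch of the routing such a unified service might do (the endpoints and URL prefixes here are hypothetical, purely for illustration):

```python
# Hypothetical mapping from a URL prefix on the unified endpoint to the
# per-project service that actually handles the request.
SERVICE_CATALOG = {
    "compute": "http://nova.example.com:8774/v2",
    "volume": "http://cinder.example.com:8776/v2",
    "network": "http://neutron.example.com:9696/v2.0",
}

def route(path):
    """Map a unified-API path like /compute/servers to a backend URL."""
    service, _, rest = path.lstrip("/").partition("/")
    try:
        backend = SERVICE_CATALOG[service]
    except KeyError:
        raise ValueError("unknown service prefix: %r" % service)
    return "%s/%s" % (backend, rest) if rest else backend

print(route("/compute/servers/detail"))
# http://nova.example.com:8774/v2/servers/detail
```

A real gateway would also have to reconcile authentication, versioning, and error formats across services, which is where the API working group's conventions would matter most.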
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-23 Thread Mark McClain

On Sep 19, 2014, at 9:13 AM, Sean Dague s...@dague.net wrote:

 Cross interaction with Neutron and Cinder remains racey. We are pretty
 optimistic on when resources will be available. Even the event interface
 with Neutron hasn't fully addressed this. I think a really great Design
 Summit session would be Nova + Neutron + Cinder to figure out a shared
 architecture to address this. I'd expect this to be at least a double
 session.

+1000

mark


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Andrew Laski


On 09/23/2014 01:15 PM, Kevin L. Mitchell wrote:

On Tue, 2014-09-23 at 12:09 -0400, Jay Pipes wrote:

I'd like to say finally that I think there should be an OpenStack API
working group whose job it is to both pull together a set of OpenStack
API practices as well as evaluate new REST APIs proposed in the
OpenStack ecosystem to provide guidance to new projects or new
subprojects wishing to add resources to an existing REST API.

One of the things that's been bothering me about OpenStack for a while
now is the fact that we have all these different APIs on different
endpoints.  What I've been wondering is whether we should create a
unified REST API service to replace the services of all of the
individual projects.  Then, users can just hit that one service to
handle all their different interactions.


I've been thinking along very similar lines, but I don't think each 
current API needs to be replaced.  I would very much like to see a 
unified API project that could be responsible for managing requests to 
boot an instance with this network and that volume which would make 
requests to Nova/Neutron/Cinder on the user's behalf.






Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Louis Taylor
On Tue, Sep 23, 2014 at 01:32:50PM -0400, Andrew Laski wrote:
 I've been thinking along very similar lines, but I don't think each current
 API needs to be replaced.  I would very much like to see a unified API
 project that could be responsible for managing requests to boot an instance
 with this network and that volume which would make requests to
 Nova/Neutron/Cinder on the user's behalf.

Isn't this what openstacksdk [0] is?

[0] https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK




Re: [openstack-dev] [cinder] Not seeking another term as PTL

2014-09-23 Thread Mike Perez
On 08:57 Tue 23 Sep , John Griffith wrote:
 Hey Everyone,
 
 I've been kinda mixed on this one, but I think it's a good time for me to
 not run for Cinder PTL.  I've been filling the role since we started the
 idea back at the Folsom Summit, and it's been an absolute pleasure and
 honor for me.
 
 I don't plan on going anywhere and will still be involved as I am today,
 but hopefully I'll also now have a good opportunity to contribute elsewhere
 in OpenStack.  We have a couple of good candidates running for Cinder PTL
 as well as a strong team backing the project so I think it's a good time to
 let somebody else take the official PTL role for a bit.
 
 Thanks,
 John

Thanks, John, for initially making me feel welcome in the Cinder team. I don't
think I would be where I am today if it weren't for the encouragement you've
given me in the past. I also appreciate the foundation and original goal you
set for Cinder, which we all continue to follow today.

-- 
Mike Perez



[openstack-dev] [oslo] PTL Candidacy

2014-09-23 Thread Doug Hellmann
I am running for PTL for Oslo for the Kilo release cycle.

I have served 2 terms now, and my tl;dr platform for Kilo is, “More of the 
same!”

I have already posted the retrospective the team put together for Juno [1], so 
I won’t go over those items in depth here. From my perspective, the team is 
working well together and made excellent progress with our goals for Juno. 
We’ve ironed out a lot of the kinks in the graduation process, and with those 
adjustments I think Kilo will go just as smoothly as Juno has, if not more so.

My first priority for us is to finish the work on the libraries we graduated in 
Juno, including adoption, removing incubated code, adding documentation, and 
any of the other tasks we identify that we need to do before we can say we are 
“done”. I would like to focus on this for K1.

We started oslo.log and oslo.concurrency late in the cycle, so we have more 
work to do there than for some of the other libraries. I really count those as 
Kilo graduations, even though we did get them started in Juno. I think we can 
finish these for K1 as well.

Dims has already started working on the analysis for which modules are ready to 
come out next, and we should finish that relatively soon to give us time to 
plan things out for the summit. My impression is we have 3-4 more libraries 
ready to move out of the incubator for K2-K3. At that point, I think we will 
have handled most of the code that is ready for graduation. We will need to 
look at anything that remains, to decide how to handle it for the L release 
cycle.

Graduation work was the focus of our attention for Juno, and I would give it a 
high priority during Kilo as well. However, we also need to bring bug triage 
and fixes back to the forefront, to make sure we take advantage of the new 
libraries to release fixes quickly to all of OpenStack, without waiting for 
projects to sync changes.

I hope these goals seem reasonable to everyone, and I look forward to working 
with all of you again this cycle.

Doug


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046757.html


Re: [openstack-dev] [cinder] Not seeking another term as PTL

2014-09-23 Thread Morgan Fainberg


-Original Message-
From: John Griffith john.griffi...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: September 23, 2014 at 08:02:23
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org) 
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [cinder] Not seeking another term as PTL

 Hey Everyone,
  
 I've been kinda mixed on this one, but I think it's a good time for me to
 not run for Cinder PTL. I've been filling the role since we started the
 idea back at the Folsom Summit, and it's been an absolute pleasure and
 honor for me.
  
 I don't plan on going anywhere and will still be involved as I am today,
 but hopefully I'll also now have a good opportunity to contribute elsewhere
 in OpenStack. We have a couple of good candidates running for Cinder PTL
 as well as a strong team backing the project so I think it's a good time to
 let somebody else take the official PTL role for a bit.
  
 Thanks,
 John

Thanks for leading Cinder for so long John! Cinder has come a long way and the 
project has prospered under your leadership. I look forward to seeing the 
project continue to grow. Just make sure you keep contributing and don’t 
disappear on us! :)

—Morgan



Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Kevin L. Mitchell
On Tue, 2014-09-23 at 18:39 +0100, Louis Taylor wrote:
 On Tue, Sep 23, 2014 at 01:32:50PM -0400, Andrew Laski wrote:
  I've been thinking along very similar lines, but I don't think each current
  API needs to be replaced.  I would very much like to see a unified API
  project that could be responsible for managing requests to boot an instance
  with this network and that volume which would make requests to
  Nova/Neutron/Cinder on the user's behalf.
 
 Isn't this what openstacksdk [0] is?
 
 [0] https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK

Well, openstacksdk is a client, and we're talking about a server.  A
server, in this instance, has some advantages over a client, including
making it easier to create that client in the first place :)
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Sean Dague
On 09/23/2014 02:10 PM, Kevin L. Mitchell wrote:
 On Tue, 2014-09-23 at 18:39 +0100, Louis Taylor wrote:
 On Tue, Sep 23, 2014 at 01:32:50PM -0400, Andrew Laski wrote:
 I've been thinking along very similar lines, but I don't think each current
 API needs to be replaced.  I would very much like to see a unified API
 project that could be responsible for managing requests to boot an instance
 with this network and that volume which would make requests to
  Nova/Neutron/Cinder on the user's behalf.

 Isn't this what openstacksdk [0] is?

 [0] https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK
 
 Well, openstacksdk is a client, and we're talking about a server.  A
 server, in this instance, has some advantages over a client, including
 making it easier to create that client in the first place :)

So today we have a proxy API in Nova which does some of this. I guess
the question is whether it's better to do this in Nova in the future or
to divorce it from there.

But regardless of approach, I don't think it really impacts whether or
not we need a saner versioning mechanism than the one we currently
have... which is "randomly add new extensions and pretend it's the same API."

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [oslo] PTL Candidacy

2014-09-23 Thread Tristan Cacqueray
confirmed

On 23/09/14 01:54 PM, Doug Hellmann wrote:
 I am running for PTL for Oslo for the Kilo release cycle.
 
 I have served 2 terms now, and my tl;dr platform for Kilo is, “More of the 
 same!”
 
 I have already posted the retrospective the team put together for Juno [1], 
 so I won’t go over those items in depth here. From my perspective, the team 
 is working well together and made excellent progress with our goals for Juno. 
 We’ve ironed out a lot of the kinks in the graduation process, and with those 
 adjustments I think Kilo will go just as smoothly as Juno has, if not more.
 
 My first priority for us is to finish the work on the libraries we graduated 
 in Juno, including adoption, removing incubated code, adding documentation, 
 and any of the other tasks we identify that we need to do before we can say 
 we are “done”. I would like to focus on this for K1.
 
 We started oslo.log and oslo.concurrency late in the cycle, so we have more 
 work to do there than for some of the other libraries. I really count those 
 as Kilo graduations, even though we did get them started in Juno. I think we 
 can finish these for K1 as well.
 
 Dims has already started working on the analysis for which modules are ready 
 to come out next, and we should finish that relatively soon to give us time 
 to plan things out for the summit. My impression is we have 3-4 more 
 libraries ready to move out of the incubator for K2-K3. At that point, I 
 think we will have handled most of the code that is ready for graduation. We 
 will need to look at anything that remains, to decide how to handle it for 
 the L release cycle.
 
 Graduation work was the focus of our attention for Juno, and I would give it 
 a high priority during Kilo as well. However, we also need to bring bug 
 triage and fixes back to the forefront, to make sure we take advantage of the 
 new libraries to release fixes quickly to all of OpenStack, without waiting 
 for projects to sync changes.
 
 I hope these goals seem reasonable to everyone, and I look forward to working 
 with all of you again this cycle.
 
 Doug
 
 
 [1] 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046757.html
 
 






Re: [openstack-dev] [Heat] Convergence: Backing up template instead of stack

2014-09-23 Thread Zane Bitter

On 23/09/14 09:44, Anant Patil wrote:

On 23-Sep-14 09:42, Clint Byrum wrote:

Excerpts from Angus Salkeld's message of 2014-09-22 20:15:43 -0700:

On Tue, Sep 23, 2014 at 1:09 AM, Anant Patil anant.pa...@hp.com wrote:


Hi,

One of the steps in the direction of convergence is to enable Heat
engine to handle concurrent stack operations. The main convergence spec
talks about it. Resource versioning would be needed to handle concurrent
stack operations.

As of now, while updating a stack, a backup stack is created with a new
ID and only one update runs at a time. If we keep the raw_template
linked to its previous completed template, i.e. have a backup of the
template instead of the stack, we avoid having a backup stack.

Since there won't be a backup stack and only one stack_id to be dealt
with, resources and their versions can be queried for a stack with that
single ID. The idea is to identify resources for a stack by using stack
id and version. Please let me know your thoughts.



Hi Anant,

This seems more complex than it needs to be.

I could be wrong, but I thought the aim was to simply update the goal state.
The backup stack is just the last working stack. So if you update and there
is already an update you don't need to touch the backup stack.

Anyone else that was at the meetup want to fill us in?



The backup stack is a device used to collect items to operate on after
the current action is complete. It is entirely an implementation detail.

Resources that can be updated in place will have their resource record
superseded, but retain their physical resource ID.

This is one area where the resource plugin API is particularly sticky,
as resources are allowed to raise the "replace me" exception if in-place
updates fail. That is o-k though, at that point we will just comply by
creating a replacement resource as if we never tried the in-place update.

In order to facilitate this, we must expand the resource data model to
include a version. Replacement resources will be marked as current and
to-be-removed resources marked for deletion. We can also keep all current
- 1 resources around to facilitate rollback until the stack reaches a
complete state again. Once that is done, we can remove the backup stack.




Backup stack is a good way to take care of rollbacks or cleanups after
the stack action is complete. By cleanup I mean the deletion of
resources that are no longer needed after the new update. It works very
well when one engine is processing the stack request and the stacks are
in memory.


It's actually a fairly terrible hack (I wrote it ;)

It doesn't work very well because in practice during an update there are 
dependencies that cross between the real stack and the backup stack (due 
to some resources remaining the same or being updated in place, while 
others are moved to the backup stack ready for replacement). So in the 
event of a failure that we don't completely roll back on the spot, we 
lose some dependency information.



As a step towards distributing the stack request processing and making
it fault-tolerant, we need to persist the dependency task graph. The
backup stack can also be persisted along with the new graph, but then
the engine has to traverse both the graphs to proceed with the operation
and later identify the resources to be cleaned-up or rolled back using
the stack id. There would be many resources for the same stack but
different stack ids.


Right, yeah this would be a mistake because in reality there is only one 
graph, so that's how we need to model it internally.



In contrast, when we store the current dependency task graph (from the
latest request) in DB, and version the resources, we can identify those
resources that need to be rolled back or cleaned up after the stack
operation is done, by comparing their versions. With versioning of
resources and template, we can avoid creating a deep stack of backup
stacks. The processing of stack operation can happen from multiple
engines, and IMHO, it is simpler when all the engines just see one stack
and versions of resources, instead of seeing many stacks with many
resources for each stack.


Bingo.

I think all you need to do is record in the resource the particular 
template and set of parameters it was tied to (maybe just generate a 
UUID for each update... or perhaps a SHA hash of the actual data for 
better rollbacks?). Then any resource that isn't part of the latest 
template should get deleted during the cleanup phase of the dependency 
graph traversal.
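That idea can be sketched in a few lines. All names below (template_version, cleanup_candidates, the resource dicts) are hypothetical illustrations, not Heat's actual data model:

```python
import hashlib
import json


def template_version(template, params):
    """Derive a stable version id for an update from its template and
    parameters. A SHA hash (rather than a random UUID) means two
    identical updates map to the same version, which helps rollbacks."""
    blob = json.dumps({"template": template, "params": params},
                      sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()


def cleanup_candidates(resources, current_version):
    """Resources not tagged with the latest template version are the
    ones to delete during the cleanup phase of the graph traversal."""
    return [r for r in resources if r["version"] != current_version]
```

Each resource record would then carry the version of the template/parameter set it was created or updated under, and the cleanup phase simply filters on it.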


As you mentioned above, we'll also need to store the dependency graph of 
the stack in the database somewhere. Right now we generate it afresh 
from the template by assuming that each resource name corresponds to one 
entry in the DB. Since that will no longer be true, we'll need it to be 
a graph of 

Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Chris Friesen

On 09/23/2014 12:19 PM, Sean Dague wrote:

On 09/23/2014 02:10 PM, Kevin L. Mitchell wrote:

On Tue, 2014-09-23 at 18:39 +0100, Louis Taylor wrote:

On Tue, Sep 23, 2014 at 01:32:50PM -0400, Andrew Laski wrote:

I've been thinking along very similar lines, but I don't think each current
API needs to be replaced.  I would very much like to see a unified API
project that could be responsible for managing requests to boot an instance
with this network and that volume which would make requests to
Nova/Neutron/Cinder on the users behalf.


Isn't this what openstacksdk [0] is?

[0] https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK


Well, openstacksdk is a client, and we're talking about a server.  A
server, in this instance, has some advantages over a client, including
making it easier to create that client in the first place :)


So today we have a proxy API in Nova which does some of this. I guess
the question is whether it's better to do this in Nova in the future or
to divorce it from there.

But regardless of approach, I don't think it really impacts whether or
not we need a saner versioning mechanism than the one we currently
have... which is "randomly add new extensions and pretend it's the same API."


I find the concept of an API version (i.e. a single version that 
applies to the whole API) to be not particularly useful.


Really what I care about is the API used to accomplish a specific task.

Why not do something like what is done for the userspace/kernel syscall 
API? Userspace code tries to use the most recent one it knows about; if 
that comes back as not a valid syscall, then it tries the next older 
version.  As long as trying to use unsupported options fails cleanly, 
there is no ambiguity.


Realistically you'd want to have the complexity hidden behind a helper 
library, but this sort of thing would allow us to add new extensions 
without trying to enforce a version on the API as a whole.
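A rough sketch of that syscall-style negotiation, as one such helper might do it (every name here, call_with_fallback and VersionNotSupported included, is invented for illustration; nothing below is a real Nova or SDK API):

```python
class VersionNotSupported(Exception):
    """Raised when the server cleanly rejects the requested version."""


def call_with_fallback(call_api, known_versions, *args, **kwargs):
    """Try each version the client knows, newest first, until the
    server accepts one; this is the piece a helper library would hide
    from application code."""
    for version in sorted(known_versions, reverse=True):
        try:
            return version, call_api(version, *args, **kwargs)
        except VersionNotSupported:
            continue  # rejected cleanly; fall back to the next older one
    raise RuntimeError("no mutually supported API version")
```

A server that only implements up to version 2 would then transparently be driven at version 2, even by a client that prefers version 3.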


Chris




[openstack-dev] [vmware] Questions about vmware-driver-ova-support blueprint

2014-09-23 Thread Tesshu M Flower
Hi, I suppose this question is directed at the VMware-API team, Gary, 
Tracy, Arnaud and Vui in particular?

I work at IBM in the Software group working with Dims (Davanum Srinivas) 
and Radu Mateescu and am particularly interested in ova support with 
VMware.  I've looked at the blueprint (
https://blueprints.launchpad.net/nova/+spec/vmware-driver-ova-support) but 
would like to learn some details about how the implementation will work, 
and if this will lead to multi-disk support in the future, hopefully in 
the process learning how I can help out.

Can we include this topic in the weekly VMwareAPI IRC meeting?

Thanks in advance,

Tesshu Flower


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Joe Gordon
On Tue, Sep 23, 2014 at 9:50 AM, Vishvananda Ishaya vishvana...@gmail.com
wrote:


 On Sep 23, 2014, at 8:40 AM, Doug Hellmann d...@doughellmann.com wrote:

  If we are no longer incubating *programs*, which are the teams of people
 who we would like to ensure are involved in OpenStack governance, then how
 do we make that decision? From a practical standpoint, how do we make a
 list of eligible voters for a TC election? Today we pull a list of
 committers from the git history from the projects associated with “official
 programs”, but if we are dropping “official programs” we need some other
 way to build the list.

 Joe Gordon mentioned an interesting idea to address this (which I am
 probably totally butchering), which is that we make incubation more similar
 to the ASF Incubator. In other words make it more lightweight with no
 promise of governance or infrastructure support.


you only slightly butchered it :). From what I gather, the Apache Software
Foundation's primary goals are to:


* provide a foundation for open, collaborative software development
projects by supplying hardware, communication, and business infrastructure
* create an independent legal entity to which companies and individuals can
donate resources and be assured that those resources will be used for the
public benefit
* provide a means for individual volunteers to be sheltered from legal
suits directed at the Foundation's projects
* protect the 'Apache' brand, as applied to its software products, from
being abused by other organizations
[0]

This roughly translates into: JIRA, SVN, Bugzilla and Confluence etc.
for infrastructure resources. So ASF provides infrastructure, legal
support, a trademark and some basic oversight.


The [Apache] incubator is responsible for:
* filtering the proposals about the creation of a new project or sub-project
* help the creation of the project and the infrastructure that it needs to
operate
* supervise and mentor the incubated community in order for them to reach
an open meritocratic environment
* evaluate the maturity of the incubated project, either promoting it to
official project/ sub-project status or by retiring it, in case of failure.

It must be noted that the incubator (just like the board) does not perform
filtering on the basis of technical issues. This is because the foundation
respects and suggests variety of technical approaches. It doesn't fear
innovation or even internal confrontation between projects which overlap in
functionality. [1]

So my idea, which is very similar to Monty's, is to move all the
non-layer-1 projects into something closer to an ASF model where there is
still incubation and graduation. But the only things a project receives out
of this process are:

* Legal support
* A trademark
* Mentorship
* Infrastructure to use
* Basic oversight via the incubation/graduation process with respect to the
health of the community.

They do not get:

* Required co-gating or integration with any other projects
* People to write their docs for them, etc.
* Technical review/oversight
* Technical requirements
* Evaluation on how the project fits into a bigger picture
* Language requirements
* etc.

Note: this is just an idea, not a fully formed proposal

[0] http://www.apache.org/foundation/how-it-works.html#what
[1] http://www.apache.org/foundation/how-it-works.html#incubator



 It is also interesting to consider that we may not need much governance
 for things outside of layer1. Of course, this may be dancing around the
 actual problem to some extent, because there are a bunch of projects that
 are not layer1 that are already a part of the community, and we need a
 solution that includes them somehow.

 Vish





Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-23 Thread Allison Randal
On 09/23/2014 02:18 AM, Thierry Carrez wrote:
 The main goal of incubation, as we did it in the past cycles, is a
 learning period where the new project aligns enough with the existing
 ones so that it integrates with them (Horizon shows Sahara dashboard)
 and won't break them around release time (stability, co-gate, respect of
 release deadlines).
 
 If we have a strict set of projects in layer #1, I don't see the point
 of keeping incubation. We wouldn't add new projects to layer #1 (only
 project splits which do not really require incubations), and additions
 to the big tent are considered on social alignment only (are you
 vaguely about cloud and do you follow the OpenStack way). If there is
 nothing to graduate to, there is no need for incubation.

There's no need for incubation, as such, but it's worth taking the time
to think about the technical and social functions that incubation and
integration served (sometimes ineffectively, or only as side-effects),
and what will replace them. You've identified a few there:

- learning period for new projects
- alignment with existing projects
- stability (in which gating served as a weak crutch, and the real
answer will likely lie in more extensive cross-project communication,
also carefully filtered to avoid information overload)
- respect of release deadlines (which doesn't necessarily mean releasing
all at the same time, just being cognizant of network-effects of
releases, and the cadence of other projects in an up-or-down dependency
relationship with yours)

 In Monty's proposal, ATC status would be linked to contributions to the
 big tent. Projects apply to become part of it, subject themselves to the
 oversight of the Technical Committee, and get the right to elect TC
 members in return.

And, a few more here:

- transitioning from island to part of the big tent
- accepting oversight of TC
- accepting responsibility to participate in TC election

Allison



Re: [openstack-dev] [Heat] Convergence: Backing up template instead of stack

2014-09-23 Thread Joshua Harlow
I believe Heat has its own dependency graph implementation, but if that were 
switched to networkx [1], that library has a bunch of nice read/write 
capabilities.

See: https://github.com/networkx/networkx/tree/master/networkx/readwrite

And one made for sqlalchemy @ https://pypi.python.org/pypi/graph-alchemy/

Networkx has worked out pretty well for taskflow (and I believe mistral is also 
using it).

[1] https://networkx.github.io/

Something to think about...
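For instance, a dependency graph round-trips through a JSON-friendly dict with the node_link helpers (the resource names below are just illustrative):

```python
import json

import networkx as nx
from networkx.readwrite import json_graph

# A toy Heat-style resource dependency graph: an edge a -> b means
# "a depends on b".
graph = nx.DiGraph()
graph.add_edge("server", "volume")
graph.add_edge("server", "port")
graph.add_edge("port", "network")

# Serialize to a JSON-friendly dict, e.g. for persisting in the DB...
blob = json.dumps(json_graph.node_link_data(graph))

# ...and restore it later, possibly in a different engine process.
restored = json_graph.node_link_graph(json.loads(blob))
assert set(restored.edges()) == set(graph.edges())
```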

On Sep 23, 2014, at 11:32 AM, Zane Bitter zbit...@redhat.com wrote:

 On 23/09/14 09:44, Anant Patil wrote:
 On 23-Sep-14 09:42, Clint Byrum wrote:
 Excerpts from Angus Salkeld's message of 2014-09-22 20:15:43 -0700:
 On Tue, Sep 23, 2014 at 1:09 AM, Anant Patil anant.pa...@hp.com wrote:
 
 Hi,
 
 One of the steps in the direction of convergence is to enable Heat
 engine to handle concurrent stack operations. The main convergence spec
 talks about it. Resource versioning would be needed to handle concurrent
 stack operations.
 
 As of now, while updating a stack, a backup stack is created with a new
 ID and only one update runs at a time. If we keep the raw_template
  linked to its previous completed template, i.e. have a backup of the
  template instead of the stack, we avoid having a backup stack.
 
 Since there won't be a backup stack and only one stack_id to be dealt
 with, resources and their versions can be queried for a stack with that
 single ID. The idea is to identify resources for a stack by using stack
 id and version. Please let me know your thoughts.
 
 
 Hi Anant,
 
 This seems more complex than it needs to be.
 
 I could be wrong, but I thought the aim was to simply update the goal 
 state.
 The backup stack is just the last working stack. So if you update and there
 is already an update you don't need to touch the backup stack.
 
 Anyone else that was at the meetup want to fill us in?
 
 
 The backup stack is a device used to collect items to operate on after
 the current action is complete. It is entirely an implementation detail.
 
 Resources that can be updated in place will have their resource record
 superseded, but retain their physical resource ID.
 
 This is one area where the resource plugin API is particularly sticky,
  as resources are allowed to raise the "replace me" exception if in-place
 updates fail. That is o-k though, at that point we will just comply by
 creating a replacement resource as if we never tried the in-place update.
 
 In order to facilitate this, we must expand the resource data model to
 include a version. Replacement resources will be marked as current and
 to-be-removed resources marked for deletion. We can also keep all current
 - 1 resources around to facilitate rollback until the stack reaches a
 complete state again. Once that is done, we can remove the backup stack.
 
 
 
 Backup stack is a good way to take care of rollbacks or cleanups after
 the stack action is complete. By cleanup I mean the deletion of
 resources that are no longer needed after the new update. It works very
 well when one engine is processing the stack request and the stacks are
 in memory.
 
 It's actually a fairly terrible hack (I wrote it ;)
 
 It doesn't work very well because in practice during an update there are 
 dependencies that cross between the real stack and the backup stack (due to 
 some resources remaining the same or being updated in place, while others are 
 moved to the backup stack ready for replacement). So in the event of a 
 failure that we don't completely roll back on the spot, we lose some 
 dependency information.
 
 As a step towards distributing the stack request processing and making
 it fault-tolerant, we need to persist the dependency task graph. The
 backup stack can also be persisted along with the new graph, but then
 the engine has to traverse both the graphs to proceed with the operation
 and later identify the resources to be cleaned-up or rolled back using
 the stack id. There would be many resources for the same stack but
 different stack ids.
 
 Right, yeah this would be a mistake because in reality there is only one 
 graph, so that's how we need to model it internally.
 
 In contrast, when we store the current dependency task graph (from the
 latest request) in the DB and version the resources, we can identify the
 resources that need to be rolled back or cleaned up after the stack
 operation is done by comparing their versions. With versioning of
 resources and templates, we can avoid creating a deep stack of backup
 stacks. The processing of a stack operation can happen from multiple
 engines, and IMHO it is simpler when all the engines just see one stack
 and versions of resources, instead of many stacks each with many
 resources.
 
 Bingo.
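
 A minimal illustration of that single-graph, versioned view (the record
 layout is made up for the sketch):

```python
def partition_by_version(resources, current_version):
    """One flat set of versioned records per stack: the live set and the
    rollback/cleanup candidates fall out of a simple version comparison,
    with no backup stack and no second stack id involved."""
    live = [r for r in resources if r["version"] == current_version]
    stale = [r for r in resources if r["version"] < current_version]
    return live, stale


# Every engine sees the same single stack: one set of records.
records = [
    {"name": "server", "version": 2},
    {"name": "server", "version": 1},  # superseded; rollback/cleanup candidate
    {"name": "volume", "version": 2},
]
live, stale = partition_by_version(records, current_version=2)
```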
 
 I think all you need to do is record in the resource the 

Re: [openstack-dev] pycharm license?

2014-09-23 Thread Andrew Melton
Hi Devs,

I have the new license, but it has some new restrictions on its use. I am 
still waiting on some clarification, but I do know some situations in which I 
can distribute the license.

I cannot distribute the license to any commercial developers. This means if as 
part of your job, you are contributing to OpenStack, and if the company you 
work for provides paid services, support, or training relating to OpenStack, I 
cannot provide you the license.

If you meet those criteria (I know I do), you are now required to have a 
commercial license. I'm sure quite a few of us now meet these criteria and are 
without a license. Another alternative is the free Community Edition.

If you do not meet those criteria, I can provide you with the license. To 
request the license, please send me an email with your name, the company you 
work for (if applicable), and your launchpad id. If you are unsure of your 
situation, it may be best to hold off until I hear back from Jetbrains.

Lastly, as part of Jetbrains granting us this new license, they have asked if 
anyone would be willing to write a review. If anyone would like to do that, 
please let me know.

Thanks,
Andrew Melton

From: Andrew Melton [andrew.mel...@rackspace.com]
Sent: Monday, September 22, 2014 3:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] pycharm license?

Hi Devs,

I'm working on the new license, but it is taking longer than it normally does 
as Jetbrains is requiring some new steps to get the license. I'll send out an 
update when I have it, but until then we'll just have to deal with the pop-ups 
on start. If I'm remembering correctly, a new license simply grants access to 
newer versions; current versions should still work.

Thanks for your patience,
Andrew Melton

From: Manickam, Kanagaraj [kanagaraj.manic...@hp.com]
Sent: Sunday, September 21, 2014 11:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] pycharm license?

Hi,

Does anyone have a PyCharm license for OpenStack project development? Thanks.

Regards
Kanagaraj M


Re: [openstack-dev] [cinder] Not seeking another term as PTL

2014-09-23 Thread Walter A. Boring IV

John,
  Thanks for your term as PTL since Cinder got its start.  Without your 
encouragement when Kurt and I started working on Cinder back during 
Grizzly, we wouldn't have been successful, and might not even be 
working on the project to this day.   I can't say enough about how 
helpful you have been to our team.   I have really enjoyed working with 
you the last few years and hope to continue doing so!


Walt

Hey Everyone,

I've been kinda mixed on this one, but I think it's a good time for me 
to not run for Cinder PTL.  I've been filling the role since we 
started the idea back at the Folsom Summit, and it's been an absolute 
pleasure and honor for me.


I don't plan on going anywhere and will still be involved as I am 
today, but hopefully I'll also now have a good opportunity to 
contribute elsewhere in OpenStack.  We have a couple of good 
candidates running for Cinder PTL as well as a strong team backing the 
project so I think it's a good time to let somebody else take the 
official PTL role for a bit.


Thanks,
John




Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-23 Thread Fox, Kevin M
Flavio wrote: "The reasoning, as explained in another email, is that from a
use-case perspective, strict ordering won't hurt you if you don't need it
whereas having to implement it in the client side because the service doesn't
provide it can be a PITA."

The reasoning is flawed though. If performance is a concern, having strict 
ordering costs you when you may not care!

For example, is it better to implement a video streaming service on TCP or UDP 
if firewalls aren't a concern? The latter. Why? Because ordering is a problem 
for these systems! If you have frames 1, 2 and 3..., and frame 2 gets lost on 
the first transmit and needs resending, but 3 gets there, the system has to 
wait to display frame 3 until frame 2 arrives. But by the time frame 2 gets 
there, frame 3 doesn't matter because the system needs to move on to frame 5 
now. The human eye doesn't care to wait for retransmits of frames; it only 
cares about the now. So because of the ordering, the eye sees 3 dropped frames 
instead of just one, making the system worse, not better.
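
The effect Kevin describes is head-of-line blocking; a toy simulation (the
numbers are purely illustrative) shows how enforcing order turns one lost
frame into several:

```python
def frames_displayed(arrival, deadline, ordered):
    """A frame is shown if it is 'ready' before its display deadline.
    In ordered mode a frame is only ready once every earlier frame has
    also arrived (head-of-line blocking); in unordered mode its own
    arrival time is enough."""
    shown = []
    for frame in sorted(arrival):
        if ordered:
            ready = max(arrival[f] for f in arrival if f <= frame)
        else:
            ready = arrival[frame]
        if ready <= deadline[frame]:
            shown.append(frame)
    return shown


# Frame 2 is lost and retransmitted (arrives at t=55); frame 3 made it at t=30.
arrival = {1: 10, 2: 55, 3: 30}
deadline = {1: 20, 2: 40, 3: 50}
without_order = frames_displayed(arrival, deadline, ordered=False)  # [1, 3]
with_order = frames_displayed(arrival, deadline, ordered=True)      # [1]
```

With ordering enforced, frame 3 misses its deadline too even though it
arrived in time, which is exactly the "3 dropped frames instead of one"
outcome above.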

Yeah, I know it's a bit of a silly example. No one would implement video 
streaming on top of messaging like that. But it does make the point that 
something that seemingly only provides good things (order is always better than 
disorder, right?) sometimes has unintended negative side effects. In lossless 
systems, it can show up as unnecessary latency or higher CPU load.

I think your option 1 will make Zaqar much more palatable to those that don't 
need the strict ordering requirement.

I'm glad you want to make hard things like guaranteed ordering available so 
that users don't have to deal with them themselves if they don't want to. It's 
a great feature. But it is also an anti-feature in some cases. The ramifications 
of its requirements are higher than you think, and a feature to just disable it 
shouldn't be very costly to implement.

Part of the controversy right now, I think, has been not understanding the use 
case here, and by insisting that FIFO is only ever positive, it makes others 
who know its negatives question what other assumptions were made in Zaqar and 
makes them a little gun-shy.

Please do reconsider this stance.

Thanks,
Kevin



From: Flavio Percoco [fla...@redhat.com]
Sent: Tuesday, September 23, 2014 5:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed 
Queues

On 09/23/2014 10:58 AM, Gordon Sim wrote:
 On 09/22/2014 05:58 PM, Zane Bitter wrote:
 On 22/09/14 10:11, Gordon Sim wrote:
 As I understand it, pools don't help scaling a given queue since all the
 messages for that queue must be in the same pool. At present traffic
 through different Zaqar queues are essentially entirely orthogonal
 streams. Pooling can help scale the number of such orthogonal streams,
 but to be honest, that's the easier part of the problem.

 But I think it's also the important part of the problem. When I talk
 about scaling, I mean 1 million clients sending 10 messages per second
 each, not 10 clients sending 1 million messages per second each.

 I wasn't really talking about high throughput per producer (which I
 agree is not going to be a good fit), but about e.g. a large number of
 subscribers for the same set of messages, e.g. publishing one message
 per second to 10,000 subscribers.

 Even at much smaller scale, expanding from 10 subscribers to say 100
 seems relatively modest but the subscriber related load would increase
 by a factor of 10. I think handling these sorts of changes is also an
 important part of the problem (though perhaps not a part that Zaqar is
 focused on).

 When a user gets to the point that individual queues have massive
 throughput, it's unlikely that a one-size-fits-all cloud offering like
 Zaqar or SQS is _ever_ going to meet their needs. Those users will want
 to spin up and configure their own messaging systems on Nova servers,
 and at that kind of size they'll be able to afford to. (In fact, they
 may not be able to afford _not_ to, assuming per-message-based pricing.)

 [...]
 If scaling the number of communicants on a given communication channel
 is a goal however, then strict ordering may hamper that. If it does, it
 seems to me that this is not just a policy tweak on the underlying
 datastore to choose the desired balance between ordering and scale, but
 a more fundamental question on the internal structure of the queue
 implementation built on top of the datastore.

 I agree with your analysis, but I don't think this should be a goal.

 I think it's worth clarifying that alongside the goals since scaling can
 mean different things to different people. The implication then is that
 there is some limit in the number of producers and/or consumers on a
 queue beyond which the service won't scale and applications need to
 design around that.

Agreed. The above is not part of Zaqar's goals. That is to say that each

Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Dean Troyer
On Tue, Sep 23, 2014 at 1:19 PM, Sean Dague s...@dague.net wrote:

 So today we have a proxy API in Nova which does some of this. I guess
 the question is is it better to do this in Nova in the future or divorce
 it from there.


A stand-alone, production-quality configurable proxy would be useful to
take over this work.  It would also be useful for some other things I have
in mind...

For a while now I've wanted to find or write an API proxy/simulator to a)
do API mock-like configurations to actually use and test it over the
network; b) proxy requests on to real servers, with the option of
transforming both request and response for further 'live' API change
mockups.

I'm sure this has already been done, suggestions and pointers to existing
projects welcome.  If not, suggestions for a good starting point/platform.
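
One possible starting point is a small WSGI middleware that rewrites the
request on the way in and the response body on the way out; the hook names
and the toy upstream app below are invented for the sketch, not an existing
project:

```python
class TransformingProxy:
    """WSGI middleware: run the request through a hook, call the real
    (or mocked) app, then run the response body through another hook --
    enough for 'live' API change mockups."""

    def __init__(self, app, request_hook=None, response_hook=None):
        self.app = app
        self.request_hook = request_hook or (lambda environ: environ)
        self.response_hook = response_hook or (lambda body: body)

    def __call__(self, environ, start_response):
        environ = self.request_hook(dict(environ))
        captured = {}

        def capture(status, headers, exc_info=None):
            captured["status"], captured["headers"] = status, headers
            return lambda data: None  # ignore legacy write() usage

        body = b"".join(self.app(environ, capture))
        start_response(captured["status"], captured["headers"])
        return [self.response_hook(body)]


# A trivial upstream app and a response transform mocking an API change.
def upstream(environ, start_response):
    start_response("200 OK", [("Content-Type", "application/json")])
    return [b'{"version": "v2"}']


proxy = TransformingProxy(
    upstream, response_hook=lambda body: body.replace(b"v2", b"v2.1"))
```

The same middleware could sit in front of a real network client instead of
the in-process `upstream` stub to proxy requests on to live servers.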

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] WARNING: Oslo team is cleaning up the incubator

2014-09-23 Thread Doug Hellmann

On Sep 19, 2014, at 1:16 PM, Doug Hellmann d...@doughellmann.com wrote:

 The Oslo team is starting to remove deprecated code from the incubator for 
 libraries that graduated this cycle, in preparation for the work we will be 
 doing in Kilo.
 
 Any fixes for those modules needed before the final releases for the other 
 projects should be submitted to the stable/juno branch of the incubator, 
 rather than master. We will provide guidance for each patch about whether it 
 should also be submitted to master and the new library.

Some of the cleanup changes to the incubator are starting to land, so I have 
gone through the open reviews for oslo-incubator and abandoned any that looked 
like they were obviously related to code that has been removed.

In each case, I tried to indicate the conditions under which it might make 
sense to resubmit the patch elsewhere. In several cases that isn’t needed 
because we already have the patch in the new library.

Doug




Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-23 Thread Doug Hellmann

On Sep 23, 2014, at 1:24 PM, Mark McClain m...@mcclain.xyz wrote:

 
 On Sep 19, 2014, at 9:13 AM, Sean Dague s...@dague.net wrote:
 
 Cross interaction with Neutron and Cinder remains racey. We are pretty
 optimistic on when resources will be available. Even the event interface
 with Neutron hasn't fully addressed this. I think a really great Design
 Summit session would be Nova + Neutron + Cinder to figure out a shared
 architecture to address this. I'd expect this to be at least a double
 session.
 
 +1000
 
 mark

It’s possible we could add some features to tooz (a new Oslo synchronization 
library) to help with this. Julien is the lead on tooz.

Doug




Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Dean Troyer
On Tue, Sep 23, 2014 at 1:40 PM, Chris Friesen

 Why not do something like what is done for the userspace/kernel syscall
 API? Userspace code tries to use the most recent one it knows about; if
 that comes back as not a valid syscall, it tries the next older
 version.  As long as trying to use unsupported options fails cleanly, there
 is no ambiguity.


[Since Jamie isn't around, I'll brag about his stuff instead]

This work has been going on, slowly, and is in place for the current
Keystone client lib when using the new auth plugins.  IIRC, for Identity
only now, other APIs coming.

Part of the fun is the diverse ways versions are reported by each project.
There have been summit sessions and conversations, and we are basically
still here, so the focus is on using what we have, since we'll need to
support it for quite some time yet.

JSON Home seems flexible enough to take on this role, if I am reading the
RFC correctly.  I think the API docs mentioned earlier in this thread are
an appropriate place to hang these suggestions as well.
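
The syscall-style fallback Chris describes could look something like this on
the client side (the negotiation helper, exception, and fake server are
hypothetical, not keystoneclient's actual plugin API):

```python
class UnsupportedVersion(Exception):
    """The server's way of saying 'not a valid version' (analogous to ENOSYS)."""


def negotiate(call, known_versions):
    # Try the most recent version we know about; if the server rejects it
    # cleanly, fall back to the next older one.
    for version in sorted(known_versions, reverse=True):
        try:
            return version, call(version)
        except UnsupportedVersion:
            continue
    raise RuntimeError("no mutually supported API version")


# Fake server that only understands versions up to 2.1.
def server(version):
    if version > (2, 1):
        raise UnsupportedVersion(version)
    return {"served": version}


chosen, response = negotiate(server, [(2, 0), (2, 1), (3, 0)])  # picks (2, 1)
```

As long as rejecting an unsupported version fails cleanly and distinguishably,
there is no ambiguity about which version ends up in use.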

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-23 Thread Eoghan Glynn

The ceilometer team released python-ceilometerclient version 1.0.11 yesterday:

  https://pypi.python.org/pypi/python-ceilometerclient/1.0.11

Cheers,
Eoghan

 Keystone team has released 0.11.1 of python-keystoneclient. Due to some
 delays getting things through the gate this took a few extra days.
 
 https://pypi.python.org/pypi/python-keystoneclient/0.11.1
 
 —Morgan
 
 
 —
 Morgan Fainberg
 
 
 -Original Message-
 From: John Dickinson m...@not.mn
 Reply: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: September 17, 2014 at 20:54:19
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [release] client release deadline - Sept 18th
 
  I just released python-swiftclient 2.3.0
   
  In addition to some smaller changes and bugfixes, the biggest changes are
  the support
  for Keystone v3 and a refactoring that allows for better testing and
  extensibility of
  the functionality exposed by the CLI.
   
  https://pypi.python.org/pypi/python-swiftclient/2.3.0
   
  --John
   
   
   
  On Sep 17, 2014, at 8:14 AM, Matt Riedemann wrote:
   
  
  
   On 9/15/2014 12:57 PM, Matt Riedemann wrote:
  
  
   On 9/10/2014 11:08 AM, Kyle Mestery wrote:
   On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
   wrote:
  
  
   On 9/9/2014 4:19 PM, Sean Dague wrote:
  
   As we try to stabilize OpenStack Juno, many server projects need to
   get
   out final client releases that expose new features of their servers.
   While this seems like not a big deal, each of these clients releases
   ends up having possibly destabilizing impacts on the OpenStack whole
   (as
   the clients do double duty in cross communicating between services).
  
   As such in the release meeting today it was agreed clients should
   have
   their final release by Sept 18th. We'll start applying the dependency
   freeze to oslo and clients shortly after that, all other requirements
   should be frozen at this point unless there is a high priority bug
   around them.
  
   -Sean
  
  
   Thanks for bringing this up. We do our own packaging and need time
   for legal
   clearances and having the final client releases done in a reasonable
   time
   before rc1 is helpful. I've been pinging a few projects to do a final
   client release relatively soon. python-neutronclient has a release
   this
   week and I think John was planning a python-cinderclient release this
   week
   also.
  
   Just a slight correction: python-neutronclient will have a final
   release once the L3 HA CLI changes land [1].
  
   Thanks,
   Kyle
  
   [1] https://review.openstack.org/#/c/108378/
  
   --
  
   Thanks,
  
   Matt Riedemann
  
  
  
  
  
   python-cinderclient 1.1.0 was released on Saturday:
  
   https://pypi.python.org/pypi/python-cinderclient/1.1.0
  
  
   python-novaclient 2.19.0 was released yesterday [1].
  
   List of changes:
  
   mriedem@ubuntu:~/git/python-novaclient$ git log 2.18.1..2.19.0 --oneline
   --no-merges
   cd56622 Stop using intersphinx
   d96f13d delete python bytecode before every test run
   4bd0c38 quota delete tenant_id parameter should be required
   3d68063 Don't display duplicated security groups
   2a1c07e Updated from global requirements
   319b61a Fix test mistake with requests-mock
   392148c Use oslo.utils
   e871bd2 Use Token fixtures from keystoneclient
   aa30c13 Update requirements.txt to include keystoneclient
   bcc009a Updated from global requirements
   f0beb29 Updated from global requirements
   cc4f3df Enhance network-list to allow --fields
   fe95fe4 Adding Nova Client support for auto find host APIv2
   b3da3eb Adding Nova Client support for auto find host APIv3
   3fa04e6 Add filtering by service to hosts list command
   c204613 Quickstart (README) doc should refer to nova
   9758ffc Updated from global requirements
   53be1f4 Fix listing of flavor-list (V1_1) to display swap value
   db6d678 Use adapter from keystoneclient
   3955440 Fix the return code of the command delete
   c55383f Fix variable error for nova --service-type
   caf9f79 Convert to requests-mock
   33058cb Enable several checks and do not check docs/source/conf.py
   abae04a Updated from global requirements
   68f357d Enable check for E131
   b6afd59 Add support for security-group-default-rules
   ad9a14a Fix rxtx_factor name for creating a flavor
   ff4af92 Allow selecting the network for doing the ssh with
   9ce03a9 fix host resource repr to use 'host' attribute
   4d25867 Enable H233
   60d1283 Don't log sensitive auth data
   

Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-23 Thread Devananda van der Veen
On Mon, Sep 22, 2014 at 7:51 AM, Jay S. Bryant
jsbry...@electronicjungle.net wrote:

 On 09/21/2014 07:37 PM, Matt Riedemann wrote:

 When I'm essentially +2 on a change but for a small issue like typos in
 the commit message, the need for a note in the code or a test (or change to
 a test), I've been doing those myself lately and then will give the +2.  If
 the change already has a +2 and I'd be +W but for said things, I'm more
 inclined lately to approve and then push a dependent patch on top of it with
 the changes to keep things from stalling.

 This might be a change in my workflow just because we're late in the
 release and want good bug fixes getting into the release candidates, it
 could be because of the weekly tirade of how the project is going down the
 toilet and we don't get enough things reviewed/approved, I'm not sure, but
 my point is I agree with making it socially acceptable to rewrite the commit
 message as part of the review.

 Matt,

 This is consistent with what I have been doing for Cinder as well. I know
 there are some people who prefer I not touch the commit messages and I
 respect those requests, but otherwise I make changes to keep the process
 moving.

 Jay

This is also consistent with how I've been doing things in Ironic and
I have been encouraging the core team to use their judgement when
doing this as well -- especially when it's a patch from someone we
know won't get back to it for a while (eg, because they're on
vacation) or someone that has already OK'd this workflow (eg, other
members of the core team, and regular developers we know).

-Devananda



Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-23 Thread Flavio Percoco
On 09/23/2014 09:29 PM, Fox, Kevin M wrote:
 Flavio wrote: "The reasoning, as explained in another email, is that from a
 use-case perspective, strict ordering won't hurt you if you don't need it
 whereas having to implement it in the client side because the service
 doesn't provide it can be a PITA."
 
 The reasoning is flawed though. If performance is a concern, having strict 
 ordering costs you when you may not care!
 
 For example, is it better to implement a video streaming service on tcp or 
 udp if firewalls aren't a concern? The latter. Why? Because ordering is a 
 problem for these systems! If you have frames, 1 2 and 3..., and frame 2 gets 
 lost on the first transmit and needs resending, but 3 gets there, the system 
 has to wait to display frame 3 waiting for frame 2. But by the time frame 2 
 gets there, frame 3 doesn't matter because the system needs to move on to 
 frame 5 now. The human eye doesn't care to wait for retransmits of frames; it 
 only cares about the now. So because of the ordering, the eye sees 3 dropped 
 frames instead of just one, making the system worse, not better.
 
 Yeah, I know its a bit of a silly example. No one would implement video 
 streaming on top of messaging like that. But it does present the point that 
 something that seemingly only provides good things (order is always better 
 than disorder, right?) sometimes has unintended negative side effects. 
 In lossless systems, it can show up as unnecessary latency or higher CPU 
 load.
 
 I think your option 1 will make Zaqar much more palatable to those that don't 
 need the strict ordering requirement.
 
 I'm glad you want to make hard things like guaranteed ordering available so 
 that users don't have to deal with it themselves if they don't want to. It's a 
 great feature. But it also is an anti-feature in some cases. The 
 ramifications of its requirements are higher than you think, and a feature to 
 just disable it shouldn't be very costly to implement.
 
 Part of the controversy right now, I think, has been not understanding the 
 use case here, and by insisting that FIFO only ever is positive, it makes 
 others that know its negatives question what other assumptions were made in 
 Zaqar and makes them a little gun shy.
 
 Please do reconsider this stance.


Hey Kevin,

FWIW, I explicitly said "from a use-case perspective", which in the
context of the emails I was replying to referred to the need (or not)
for FIFO, not to the impact it has in other areas like performance.

In no way did I try to insist that FIFO is only ever positive, and I've
also explicitly said in several other emails that it *does* have an
impact on performance.

That said, I agree that if FIFO's reality in Zaqar changes, it'll likely
be towards option (1).

Thanks for your feedback,
Flavio

 
 Thanks,
 Kevin
 
 
 
 From: Flavio Percoco [fla...@redhat.com]
 Sent: Tuesday, September 23, 2014 5:58 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed 
 Queues
 
 On 09/23/2014 10:58 AM, Gordon Sim wrote:
 On 09/22/2014 05:58 PM, Zane Bitter wrote:
 On 22/09/14 10:11, Gordon Sim wrote:
 As I understand it, pools don't help scaling a given queue since all the
 messages for that queue must be in the same pool. At present traffic
 through different Zaqar queues are essentially entirely orthogonal
 streams. Pooling can help scale the number of such orthogonal streams,
 but to be honest, that's the easier part of the problem.

 But I think it's also the important part of the problem. When I talk
 about scaling, I mean 1 million clients sending 10 messages per second
 each, not 10 clients sending 1 million messages per second each.

 I wasn't really talking about high throughput per producer (which I
 agree is not going to be a good fit), but about e.g. a large number of
 subscribers for the same set of messages, e.g. publishing one message
 per second to 10,000 subscribers.

 Even at much smaller scale, expanding from 10 subscribers to say 100
 seems relatively modest but the subscriber related load would increase
 by a factor of 10. I think handling these sorts of changes is also an
 important part of the problem (though perhaps not a part that Zaqar is
 focused on).

 When a user gets to the point that individual queues have massive
 throughput, it's unlikely that a one-size-fits-all cloud offering like
 Zaqar or SQS is _ever_ going to meet their needs. Those users will want
 to spin up and configure their own messaging systems on Nova servers,
 and at that kind of size they'll be able to afford to. (In fact, they
 may not be able to afford _not_ to, assuming per-message-based pricing.)

 [...]
 If scaling the number of communicants on a given communication channel
 is a goal however, then strict ordering may hamper that. If it does, it
 seems to me that this is not just a policy tweak on the underlying
 datastore to choose the 

Re: [openstack-dev] [Glance] PTL Non-Candidacy

2014-09-23 Thread Iccha Sethi
+1. You will be missed, Mark. Thank you for your leadership and mentorship.

Iccha

On 9/23/14, 5:59 AM, stuart.mcla...@hp.com stuart.mcla...@hp.com wrote:

Hi Mark,

Many thanks for your leadership, and keeping glance so enjoyable to work
on over the last few cycles.

-Stuart

From: Mark Washenberger mark.washenber...@markwash.net
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Glance] PTL Non-Candidacy

Greetings,

I will not be running for PTL for Glance for the Kilo release.

I want to thank all of the nice folks I've worked with--especially the
attendees and sponsors of the mid-cycle meetups, which I think were a
major
success and one of the highlights of the project for me.

Cheers,
markwash


