Re: [openstack-dev] [mistral] Mistral 0.2 planning

2014-10-03 Thread Renat Akhmerov
Yes,

Currently available resources seem to allow this )

Renat Akhmerov
@ Mirantis Inc.



On 03 Oct 2014, at 12:30, Dmitri Zimine dzim...@stackstorm.com wrote:

 Thanks Renat for running and capturing.
 
 One addition - allocate time for bug fixes, we’ll have quite a few :)
 
 DZ.
 
 On Oct 2, 2014, at 9:56 PM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Hi,
 
 Mistral team has selected blueprints for Mistral 0.2 which is currently 
 scheduled for 10/31/2014 (may slightly change). Below are the links to the 
 release LP page and the etherpad with our estimates:
 https://launchpad.net/mistral/+milestone/0.2
 https://etherpad.openstack.org/p/mistral-0.2-planning
 
 Please join the discussion if you have something to add/comment or if you’d 
 like to contribute to Mistral 0.2.
 
 Thanks
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-10-03 Thread Michael Chapman
On Fri, Oct 3, 2014 at 4:05 AM, Soren Hansen so...@linux2go.dk wrote:

 I'm sorry about my slow responses. For some reason, gmail didn't think
 this was an important e-mail :(

 2014-09-30 18:41 GMT+02:00 Jay Pipes jaypi...@gmail.com:
  On 09/30/2014 08:03 AM, Soren Hansen wrote:
  2014-09-12 1:05 GMT+02:00 Jay Pipes jaypi...@gmail.com:
  How would I go about getting the associated fixed IPs for a network?
  The query to get associated fixed IPs for a network [1] in Nova looks
  like this:
 
  SELECT
   fip.address,
   fip.instance_uuid,
 [...]
  AND fip.instance_uuid IS NOT NULL
  AND i.host = :host
 
  would I have a Riak container for virtual_interfaces that would also
  have instance information, network information, fixed_ip information?
  How would I accomplish the query against a derived table that gets the
  minimum virtual interface ID for each instance UUID?

 What's a minimum virtual interface ID?

 Anyway, I think Clint answered this quite well.

  I've said it before, and I'll say it again. In Nova at least, the
  SQL schema is complex because the problem domain is complex. That
  means lots of relations, lots of JOINs, and that means the best way
  to query for that data is via an RDBMS.
 [...]
  I don't think relying on a central data store is in any conceivable
  way appropriate for a project like OpenStack. Least of all Nova.
 
  I don't see how we can build a highly available, distributed service
  on top of a centralized data store like MySQL.
 [...]
  I don't disagree with anything you say above. At all.

 Really? How can you agree that we can't build a highly available,
 distributed service on top of a centralized data store like MySQL while
 also saying that the best way to handle data in Nova is in an RDBMS?

  For complex control plane software like Nova, though, an RDBMS is
  the best tool for the job given the current lay of the land in open
  source data storage solutions matched with Nova's complex query and
  transactional requirements.
  What transactional requirements?
 
 https://github.com/openstack/nova/blob/stable/icehouse/nova/db/sqlalchemy/api.py#L1654
  When you delete an instance, you don't want the delete to just stop
  half-way through the transaction and leave around a bunch of orphaned
  children.  Similarly, when you reserve something, it helps to not have
  a half-finished state change that you need to go clean up if something
  goes boom.

 Looking at that particular example, it's about deleting an instance and
 all its associated metadata. As we established earlier, these are things
 that would just be in the same key as the instance itself, so it'd just
 be a single key that would get deleted. Easy.
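Soren's single-key point can be illustrated with a toy key/value layout (the key scheme and fields are assumptions for illustration, not Nova's actual data model):

```python
# If the instance and its metadata live under one key, deleting the
# instance is a single operation on that key -- there is no multi-row
# transaction that can stop half-way through.
store = {
    "instance/uuid-1": {
        "name": "web-1",
        "metadata": {"role": "frontend"},
        "system_metadata": {"image_min_ram": "512"},
    }
}

def delete_instance(store, uuid):
    # One key, one atomic delete: the metadata goes with it.
    return store.pop("instance/" + uuid, None)

deleted = delete_instance(store, "uuid-1")
assert "instance/uuid-1" not in store
```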

 That said, there will certainly be situations where there'll be a need
 for some sort of anti-entropy mechanism. It just so happens that those
 situations already exist. We're dealing with a complex distributed
 system. We're kidding ourselves if we think that any kind of
 consistency is guaranteed, just because our data store favours
 consistency over availability.


I apologize if I'm missing something, but doesn't denormalization to add
join support put the same value in many places, such that an update to that
value is no longer a single atomic transaction? This would appear to
counteract the requirement for strong consistency. If updating a single
value is atomic (as in Riak's consistent mode) then it might be possible to
construct a way to make multiple updates appear atomic, but it would add
many more transactions and many more quorum checks, which would reduce
performance to a crawl.
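Michael's concern can be made concrete with a toy denormalized layout (the key names and the duplicated field are assumptions for illustration):

```python
# Hypothetical denormalized layout: the instance's display name is
# copied into every fixed-ip record to avoid a join, so renaming the
# instance means touching several keys. Each individual put is atomic,
# but the set of puts is not -- a crash mid-loop leaves the copies
# inconsistent, which is exactly the anti-entropy problem discussed.
store = {
    "instance/uuid-1": {"name": "web-1"},
    "fixed_ip/10.0.0.5": {"instance": "uuid-1", "instance_name": "web-1"},
    "fixed_ip/10.0.0.6": {"instance": "uuid-1", "instance_name": "web-1"},
}

def rename_instance(store, uuid, new_name):
    touched = 0
    for key, rec in store.items():
        if key == "instance/" + uuid:
            rec["name"] = new_name
            touched += 1
        elif rec.get("instance") == uuid:
            rec["instance_name"] = new_name
            touched += 1
    return touched

# Three separate writes where a normalized schema would need one.
assert rename_instance(store, "uuid-1", "web-renamed") == 3
```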

I also don't really see how a NoSQL system in strong consistency mode is
any different from running MySQL with galera in its failure modes. The
requirement for quorum makes the addition of nodes increase the potential
latency of writes (and reads in some cases) so having large scale doesn't
grant much benefit, if any. Quorum will also prevent nodes on the wrong
side of a partition from being able to access system state (or it will give
them stale state, which is probably just as bad in our case).

I think your goal of having state management that's able to handle network
partitions is a good one, but I don't think the solution is as simple as
swapping out where the state is stored. Maybe in some cases like
split-racks the system needs to react to a network partition by forming its
own independent cell with its own state storage, and when the network heals
it then merges back into the other cluster cleanly? That would be very
difficult to implement, but fun (for some definition of fun).

As a thought experiment, a while ago I considered what would happen if
instead of using a central store, I put a sqlite database behind every
daemon and allowed them to query each other for the data they needed, and
cluster if needed (using raft). Services like nova-scheduler need strong
consistency and would have to cluster to perform their role, but services
like nova-compute would 

[openstack-dev] [Cinder] Get server side exception

2014-10-03 Thread Eduard Matei
Hi,

I'm creating a Cinder volume from a Glance image (Fedora).
The image is 199 MB and, since it's for testing, I created a volume of
size 1 GB.
This fails and puts the volume in status error (without any more info).

Digging through the screen logs I found an ImageUnacceptable exception
(size is 2 GB and doesn't fit ...).

Is there a way to get this exception on the client side?
E.g. could the output of cinder show VOLUMEID contain the exception
message?

Thanks,

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*



[openstack-dev] [Neutron] [Horizon] [Trove] Juno RC1 available

2014-10-03 Thread Thierry Carrez
Hello everyone,

Neutron, Horizon, and Trove just published their first release candidate
for the upcoming 2014.2 (Juno) release.

The RC1 tarballs are available for download at:
https://launchpad.net/neutron/juno/juno-rc1
https://launchpad.net/horizon/juno/juno-rc1
https://launchpad.net/trove/juno/juno-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as the 2014.2
final version on October 16. You are therefore strongly encouraged to
test and validate these tarballs!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/neutron/tree/proposed/juno
https://github.com/openstack/horizon/tree/proposed/juno
https://github.com/openstack/trove/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/neutron/+filebug
https://bugs.launchpad.net/horizon/+filebug
https://bugs.launchpad.net/trove/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's
attention.

Note that the master branches of Neutron, Horizon, and Trove are now
open for Kilo development, and feature freeze restrictions no longer
apply there.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-10-03 Thread Soren Hansen
2014-10-03 9:00 GMT+02:00 Michael Chapman wop...@gmail.com:
 On Fri, Oct 3, 2014 at 4:05 AM, Soren Hansen so...@linux2go.dk
 wrote:
 That said, there will certainly be situations where there'll be a
 need for some sort of anti-entropy mechanism. It just so happens that
 those situations already exist. We're dealing with a complex
 distributed system. We're kidding ourselves if we think that any
 kind of consistency is guaranteed, just because our data store
 favours consistency over availability.
 I apologize if I'm missing something, but doesn't denormalization to
 add join support put the same value in many places, such that an
 update to that value is no longer a single atomic transaction?

Yes.


 This would appear to counteract the requirement for strong
 consistency.

What requirement for strong consistency?


 If updating a single value is atomic (as in Riak's consistent mode)

Admittedly, I'm not 100% up-to-date on Riak, but last I looked, there
wasn't any consistent mode. However, when writing a value, you can
specify that you want all (or a quorum of) replicas to be written to
disk before you get a successful response. However, this does not imply
transactional support. In other words, if one of the writes fails, it
doesn't get rolled back on the other nodes. You just don't get a
successful response.
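A minimal simulation of the write semantics Soren describes (N replicas, success at W acks, no rollback of the replicas that did take the write; the numbers and helper are illustrative, not the Riak client API):

```python
# Quorum write without rollback: the write is attempted on all replicas
# and reported successful once at least W acknowledge it. A replica that
# missed the write is NOT rolled forward or back -- it simply holds
# stale data until some anti-entropy mechanism (e.g. read repair) fixes it.
N, W = 3, 2

def quorum_write(replicas, key, value, failed):
    acks = 0
    for i, replica in enumerate(replicas):
        if i not in failed:
            replica[key] = value
            acks += 1
    return acks >= W

replicas = [{}, {}, {}]
ok = quorum_write(replicas, "instance-1", "ACTIVE", failed={2})
# ok is True (2 of 3 acks reached quorum), yet replicas[2] is still empty.
```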


 I also don't really see how a NoSQL system in strong consistency mode
 is any different from running MySQL with galera in its failure modes.

I agree. I never meant to imply that we should run anything in strong
consistency mode. There might be a few operations that require strong
consistency, but they should be exceptional. Quotas sound like a good
example.


 The requirement for quorum makes the addition of nodes increase the
 potential latency of writes (and reads in some cases) so having large
 scale doesn't grant much benefit, if any.

I agree about the requirement for quorum having those effects (also for
e.g. Galera).  I think you are missing my point, though. My concerns are
not whether MySQL can handle the data volume of a large scale OpenStack
deployment.  I'm sure it can. Without even breaking a sweat. MySQL has
been used in countless deployments to handle data sets vastly bigger
than what we're dealing with.

My concern is reliability.

 Quorum will also prevent nodes on the wrong side of a partition from
 being able to access system state (or it will give them stale state,
 which is probably just as bad in our case).

This problem exists today.  Suppose you have a 5 node Galera cluster.
Would you refuse reads on the wrong side of the partition to avoid
providing stale data?

With e.g. Riak it's perfectly possible to accept both reads and writes
on both sides of the partition.

No matter what we do, we need to accept the fact that when we handle the
data, it is by definition out of date. It can have changed the
millisecond after we read it and started using it.


 I think your goal of having state management that's able to handle
 network partitions is a good one, but I don't think the solution is as
 simple as swapping out where the state is stored.

It kinda is, and it kinda isn't. I never meant to suggest that just
replacing the datastore would solve everything. We need to carefully
look at our use of the data from the datastore and consider the impact
of eventual consistency on this use. On the other hand, as I just
mentioned above, this is a problem that exists right now, today. We're
just ignoring it, because we happen to have a consistent datastore.


 Maybe in some cases like split-racks the system needs to react to a
 network partition by forming its own independent cell with its own
 state storage, and when the network heals it then merges back into the
 other cluster cleanly?  That would be very difficult to implement, but
 fun (for some definition of fun).

Fun, but possible. Riak was designed for this. With an RDBMS I don't
even know how to begin solving something like that.


 As a thought experiment, a while ago I considered what would happen if
 instead of using a central store, I put a sqlite database behind every
 daemon and allowed them to query each other for the data they needed,
 and cluster if needed (using raft).

 Services like nova-scheduler need strong consistency

No, it doesn't. :)

 and would have to cluster to perform their role, but services like
 nova-compute would simply need to store the data concerning the
 resources they are responsible for. This follows the 'place state at
 the edge' kind of design principles that have been discussed in
 various circles.  It falls down in a number of pretty obvious ways,
 and ultimately it would require more work than I am able to put in,
 but I mention it because perhaps it provides you with food for
 thought.

Yeah, a million distributed consistent databases do not a single
distributed, eventually consistent database make :)

-- 
Soren Hansen | http://linux2go.dk/
Ubuntu Developer 

[openstack-dev] [Glance] Let's deprecate the GridFS store (Fwd: [Openstack] [Glance] GridFS Store: Anyone using it?)

2014-10-03 Thread Flavio Percoco
Greetings,

I've sent the email below to the OpenStack users mailing list trying to
get feedback from users on what to do with the GridFS driver.

We haven't had bugs filed against this driver, nor have users provided
any feedback. This all leads me to think that it's actually not being
used at all. With that in mind, I propose marking this driver as
deprecated in Kilo.

I wonder if we could simply remove it based on the above _assumption_.

Cheers,
Flavio


 Original Message 
Subject: [Openstack] [Glance] GridFS Store: Anyone using it?
Date: Fri, 26 Sep 2014 09:56:52 +0200
From: Flavio Percoco fla...@redhat.com
Reply-To: fla...@redhat.com
Organization: Red Hat
To: openst...@lists.openstack.org

Greetings,

I'm reaching out to see if anyone is using Glance's GridFS store driver
and has any feedback. The driver has been updated with the latest API
changes, but no fixes or feedback have been received.

If no one is using it, I'll propose removing it for the Kilo release.
Drivers require maintenance, which takes a lot of time.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





Re: [openstack-dev] Dependency freeze exception: django-nose>=1.2

2014-10-03 Thread Thierry Carrez
Thomas Goirand wrote:
 murano-dashboard effectively needs django-nose>=1.2. As per this:
 
 https://review.openstack.org/125651
 
 it's not a problem for Ubuntu and Debian. Does anyone have a concern
 about this dependency freeze exception?

openstack/requirements will be unfrozen as soon as Swift publishes an
RC1, so it should only be a couple of days away.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova] why do we have os-attach-interfaces in the v3 API?

2014-10-03 Thread Gary Kotton
Hi,
In the V2 API the attach interface is a blocking operation. This is
problematic. I posted a patch for the V3 API to be non-blocking -
https://review.openstack.org/#/c/103094/ (as we cannot break existing
APIs). I would prefer that you do not remove the V3 API. I am not sure
what you mean by saying that we are not proxying to Neutron. The Neutron
API is invoked when the attach is done, that is, a Neutron port needs to
be created.
Thanks
Gary

On 10/2/14, 11:57 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

The os-interface (v2) and os-attach-interfaces (v3) APIs are only used
for the neutron network API; you'll get a NotImplemented if you try to
call the related methods with nova-network [1].

Since we aren't proxying to neutron in the v3 API (v2.1), why does
os-attach-interfaces [2] exist?  Was this just an oversight?  If so,
please allow me to delete it. :)

[1] http://git.openstack.org/cgit/openstack/nova/tree/nova/network/api.py?id=2014.2.rc1#n310
[2] http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/plugins/v3/attach_interfaces.py?id=2014.2.rc1

-- 

Thanks,

Matt Riedemann






[openstack-dev] [Ceilometer] Adding pylint checking of new ceilometer patches

2014-10-03 Thread Igor Degtiarov
Hi folks!

I'd like to figure out whether we need to check new Ceilometer patches
for critical errors with pylint.

As far as I know, Nova, Sahara and others have such a check. It doesn't
actually check the whole project; rather, it compares the number of
errors without the new patch and with it, and if the diff is greater
than 0 the patch is rejected.

I have taken Sahara's solution as a pattern and proposed a patch for
Ceilometer:
https://review.openstack.org/#/c/125906/
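The core of the check Igor describes - diffing error counts rather than gating on a clean run - can be sketched as follows (the function and message format are illustrative, not Sahara's actual tooling):

```python
# Gate sketch: run pylint on the base tree and on the patched tree,
# collect the error messages, and reject the patch only if it introduces
# errors that were not already present.
def new_error_count(base_errors, patched_errors):
    # Treat each "path:symbol" message as one error; count only messages
    # absent from the base run, so pre-existing debt does not block patches.
    return len(set(patched_errors) - set(base_errors))

base = {"nova/api.py:undefined-variable", "nova/db.py:no-member"}
patched = {"nova/api.py:undefined-variable", "nova/db.py:no-member",
           "nova/new.py:undefined-variable"}

print(new_error_count(base, patched))  # 1 new error -> reject the patch
```

Comparing sets of messages (rather than raw counts) also avoids the false pass where a patch removes one old error while adding a different new one.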

Cheers,
Igor Degtiarov



Re: [openstack-dev] [Ceilometer] Adding pylint checking of new ceilometer patches

2014-10-03 Thread Dina Belova
Igor,

Personally, this idea looks really good to me, as it will help avoid
questionable code being merged without being caught during review.

Cheers,
Dina

On Fri, Oct 3, 2014 at 12:40 PM, Igor Degtiarov idegtia...@mirantis.com
wrote:

 Hi folks!

 I'd like to figure out whether we need to check new Ceilometer patches
 for critical errors with pylint.

 As far as I know, Nova, Sahara and others have such a check. It doesn't
 actually check the whole project; rather, it compares the number of
 errors without the new patch and with it, and if the diff is greater
 than 0 the patch is rejected.

 I have taken Sahara's solution as a pattern and proposed a patch for
 ceilometer:
 https://review.openstack.org/#/c/125906/

 Cheers,
 Igor Degtiarov





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Eoghan Glynn

 As promised at this week’s TC meeting, I have applied the various blog posts
 and mailing list threads related to changing our governance model to a
 series of patches against the openstack/governance repository [1].
 
 I have tried to include all of the inputs, as well as my own opinions, and
 look at how each proposal needs to be reflected in our current policies so
 we do not drop commitments we want to retain along with the processes we are
 shedding [2].
 
 I am sure we need more discussion, so I have staged the changes as a series
 rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps won’t
 make sense without the changes that come after (hey, just like code!).

Thanks Doug for moving this discussion out of the blogosphere and
into gerrit. That will be very helpful in driving the discussion
forward.

However, given the proximity of the TC elections, should these
patches all be workflow -1'd as WIP to ensure nothing lands before
the incoming TC is ratified?

(I'm assuming here that the decision-making on these fairly radical
proposals should rest with the new post-election TC - is that a
correct assumption?)

Cheers,
Eoghan



Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Thierry Carrez
Eoghan Glynn wrote:
 
 As promised at this week’s TC meeting, I have applied the various blog posts
 and mailing list threads related to changing our governance model to a
 series of patches against the openstack/governance repository [1].

 I have tried to include all of the inputs, as well as my own opinions, and
 look at how each proposal needs to be reflected in our current policies so
 we do not drop commitments we want to retain along with the processes we are
 shedding [2].

 I am sure we need more discussion, so I have staged the changes as a series
 rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps won’t
 make sense without the changes that come after (hey, just like code!).
 
 Thanks Doug for moving this discussion out of the blogosphere and
 into gerrit. That will be very helpful in driving the discussion
 forward.
 
 However, given the proximity of the TC elections, should these
 patches all be workflow -1'd as WIP to ensure nothing lands before
 the incoming TC is ratified?
 
 (I'm assuming here that the decision-making on these fairly radical
 proposals should rest with the new post-election TC - is that a
 correct assumption?)

Yes, those would be voted on by the kilo-membership of the TC. If the
current membership decides to meet during the election season, it will
be to discuss/brainstorm things, not to finalize or vote on them.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-10-03 Thread Thierry Carrez
OK, it's now official at:
https://wiki.openstack.org/wiki/Kilo_Release_Schedule

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Doug Hellmann

On Oct 3, 2014, at 7:13 AM, Eoghan Glynn egl...@redhat.com wrote:

 
 As promised at this week’s TC meeting, I have applied the various blog posts
 and mailing list threads related to changing our governance model to a
 series of patches against the openstack/governance repository [1].
 
 I have tried to include all of the inputs, as well as my own opinions, and
 look at how each proposal needs to be reflected in our current policies so
 we do not drop commitments we want to retain along with the processes we are
 shedding [2].
 
 I am sure we need more discussion, so I have staged the changes as a series
 rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps won’t
 make sense without the changes that come after (hey, just like code!).
 
 Thanks Doug for moving this discussion out of the blogosphere and
 into gerrit. That will be very helpful in driving the discussion
 forward.
 
 However, given the proximity of the TC elections, should these
 patches all be workflow -1'd as WIP to ensure nothing lands before
 the incoming TC is ratified?
 
 (I'm assuming here that the decision-making on these fairly radical
 proposals should rest with the new post-election TC - is that a
 correct assumption?)

Yes, this is absolutely meant to be voted on at some point after the election. 
My goal was to turn the blog posts into specific changes to our current 
policies, in part so we could understand the gaps in what people are saying in 
more abstract terms elsewhere.

Doug

 
 Cheers,
 Eoghan
 




Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Doug Hellmann

On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com wrote:

 
 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
 devananda@gmail.com wrote:
 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com wrote:
  As promised at this week’s TC meeting, I have applied the various blog 
  posts and mailing list threads related to changing our governance model to 
  a series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own opinions, and 
  look at how each proposal needs to be reflected in our current policies so 
  we do not drop commitments we want to retain along with the processes we 
  are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as a series 
  rather than one big patch. Please consider the patches together when 
  commenting. There are many related changes, and some incremental steps 
  won’t make sense without the changes that come after (hey, just like code!).
 
  Doug
 
  [1] 
  https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes
 
 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)
 
 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy
 
 
 After seeing Jay's idea of making a yaml file modeling things and talking to 
 devananda about this I went ahead and tried to graph the relationships out.
 
 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file: 
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png
  
 It turns out it's really hard to figure out what the relationships are without 
 digging deep into the code for each project, so I am sure I got a few things 
 wrong (along with missing a lot of projects).
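The modeling Joe describes - a YAML file of project relationships rendered as a graph - can be sketched minimally as follows (the relationship structure is an assumption, not the actual openstack.yaml schema; the DOT output can be fed to Graphviz):

```python
# Toy relationship mapping, standing in for a parsed YAML document.
relationships = {
    "nova": {"depends-on": ["glance", "neutron"]},
    "glance": {"depends-on": ["keystone"]},
}

def to_dot(rel):
    # Emit one directed edge per declared dependency, in Graphviz DOT syntax.
    lines = ["digraph openstack {"]
    for project, links in sorted(rel.items()):
        for target in links.get("depends-on", []):
            lines.append('  "%s" -> "%s";' % (project, target))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(relationships))
```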

The relationships are very important for setting up an optimal gate structure. 
I’m less convinced they are important for setting up the governance structure, 
and I do not think we want a specific gate configuration embedded in the 
governance structure at all. That’s why I’ve tried to describe general 
relationships (“optional inter-project dependences” vs. “strict co-dependent 
project groups” [1]) up until the very last patch in the series [2], which 
redefines the integrated release in terms of those other relationships and a 
base set of projects.

Doug

[1] 
https://review.openstack.org/#/c/125785/2/reference/project-testing-policies.rst
[2] https://review.openstack.org/#/c/125789/

 
 -Deva
 
 



Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Doug Hellmann

On Oct 3, 2014, at 9:02 AM, Doug Hellmann d...@doughellmann.com wrote:

 
 On Oct 3, 2014, at 7:13 AM, Eoghan Glynn egl...@redhat.com wrote:
 
 
 As promised at this week’s TC meeting, I have applied the various blog posts
 and mailing list threads related to changing our governance model to a
 series of patches against the openstack/governance repository [1].
 
 I have tried to include all of the inputs, as well as my own opinions, and
 look at how each proposal needs to be reflected in our current policies so
 we do not drop commitments we want to retain along with the processes we are
 shedding [2].
 
 I am sure we need more discussion, so I have staged the changes as a series
 rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps won’t
 make sense without the changes that come after (hey, just like code!).
 
 Thanks Doug for moving this discussion out of the blogosphere and
 into gerrit. That will be very helpful in driving the discussion
 forward.
 
 However, given the proximity of the TC elections, should these
 patches all be workflow -1'd as WIP to ensure nothing lands before
 the incoming TC is ratified?
 
 (I'm assuming here that the decision-making on these fairly radical
 proposals should rest with the new post-election TC - is that a
 correct assumption?)
 
 Yes, this is absolutely meant to be voted on at some point after the 
 election. My goal was to turn the blog posts into specific changes to our 
 current policies, in part so we could understand the gaps in what people are 
 saying in more abstract terms elsewhere.

It also looks like one of the early changes will need to be rebased before it 
can merge. Rather than lose the discussion context, I will wait as long as 
possible to do that, so please continue commenting on the current draft.

Doug




Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Anne Gentle
On Fri, Oct 3, 2014 at 8:07 AM, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com wrote:


 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com
 wrote:
  As promised at this week’s TC meeting, I have applied the various blog
 posts and mailing list threads related to changing our governance model to
 a series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own opinions,
 and look at how each proposal needs to be reflected in our current policies
 so we do not drop commitments we want to retain along with the processes we
 are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as a
 series rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps
 won’t make sense without the changes that come after (hey, just like code!).
 
  Doug
 
  [1]
 https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes

 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)

 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy


 After seeing Jay's idea of making a yaml file modeling things and talking
 to devananda about this I went ahead and tried to graph the relationships
 out.

 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file:
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png

 It turns out it's really hard to figure out what the relationships are
 without digging deep into the code for each project, so I am sure I got a
 few things wrong (along with missing a lot of projects).
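A YAML relationship model like the one Joe describes lends itself to simple tooling. A minimal sketch, assuming a hand-written project-to-dependencies mapping rather than the actual openstack.yaml schema, that derives a service ordering from the relationships:

```python
# Illustrative only: a hand-written subset of project relationships,
# not the actual schema used in jogo/graphing-openstack.
from graphlib import TopologicalSorter

# project -> set of projects it depends on
relationships = {
    "keystone": set(),
    "glance": {"keystone"},
    "neutron": {"keystone"},
    "nova": {"keystone", "glance", "neutron"},
    "heat": {"keystone", "nova"},
}

# A topological order tells you which services must be up (or gated)
# before the ones that depend on them can be tested.
order = list(TopologicalSorter(relationships).static_order())
print(order)  # keystone comes first, heat last
```

The same mapping could drive a gate layout or a deployment order without embedding either in the governance structure itself.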


 The relationships are very important for setting up an optimal gate
 structure. I’m less convinced they are important for setting up the
 governance structure, and I do not think we want a specific gate
 configuration embedded in the governance structure at all. That’s why I’ve
 tried to describe general relationships (“optional inter-project
 dependences” vs. “strict co-dependent project groups” [1]) up until the
 very last patch in the series [2], which redefines the integrated release
 in terms of those other relationships and a base set of projects.


I'm reading and reading and reading and my thoughts keep returning to,
we're optimizing only for dev. :)

I need to either get over that or decide what parts need tweaking for docs
and support optimization. I'll get going on reviews -- thanks a bunch for
all this compilation and for the good blog writing. Much appreciated.

Anne


 Doug

 [1]
 https://review.openstack.org/#/c/125785/2/reference/project-testing-policies.rst
 [2] https://review.openstack.org/#/c/125789/


 -Deva



[openstack-dev] PTL Election Conclusion and Results

2014-10-03 Thread Tristan Cacqueray
Thank you to the electorate, to all those who voted and to all
candidates who put their name forward for PTL for this election.
A healthy, open process breeds trust in our decision-making capability;
thank you to all those who make this process possible.

Now for the results of the PTL election process, please join me in
extending congratulations to the following PTLs:

*  Compute (Nova)
** Michael Still
* Object Storage (Swift)
** John Dickinson
* Image Service (Glance)
** Nikhil Komawar
* Identity (Keystone)
** Morgan Fainberg
* Dashboard (Horizon)
** David Lyle
* Networking (Neutron)
** Kyle Mestery
* Block Storage (Cinder)
** Mike Perez
* Metering/Monitoring (Ceilometer)
** Eoghan Glynn
* Orchestration (Heat)
** Angus Salkeld
* Database Service (Trove)
** Nikhil Manchanda
* Bare metal (Ironic)
** Devananda van der Veen
* Common Libraries (Oslo)
** Doug Hellmann
* Infrastructure
** James E. Blair
* Documentation
** Anne Gentle
* Quality Assurance (QA)
** Matthew Treinish
* Deployment (TripleO)
** Clint Byrum
* Release cycle management
** Thierry Carrez
* Data Processing Service (Sahara)
** Sergey Lukjanov
* Message Service (Zaqar)
** Flavio Percoco
* Key Management Service (Barbican)
** Douglas Mendizabal
* DNS Services (Designate)
** Kiall Mac Innes
* Shared File Systems (Manila)
** Ben Swartzlander

Election Results:
* Cinder:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_3bc8db78251af391
* TripleO:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_d8d7335f4de67d94

Shortly I will post the announcement opening TC nominations and then we
are into the TC election process.

Thank you to all involved in the PTL election process,
Tristan.





Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Sean Dague
On 10/03/2014 09:25 AM, Anne Gentle wrote:
 
 
 On Fri, Oct 3, 2014 at 8:07 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com wrote:
 

 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen devananda@gmail.com wrote:

 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com wrote:
  As promised at this week’s TC meeting, I have applied the various 
 blog posts and mailing list threads related to changing our governance model 
 to a series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own 
 opinions, and look at how each proposal needs to be reflected in our current 
 policies so we do not drop commitments we want to retain along with the 
 processes we are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as 
 a series rather than one big patch. Please consider the patches together 
 when commenting. There are many related changes, and some incremental steps 
 won’t make sense without the changes that come after (hey, just like code!).
 
  Doug
 
  [1] 
 https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes

 I've summed up a lot of my current thinking on this etherpad
 as well
 (I should really blog, but hey ...)

 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy


 After seeing Jay's idea of making a yaml file modeling things and
 talking to devananda about this I went ahead and tried to graph
 the relationships out.

 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML
 file: 
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png
  
 It turns out it's really hard to figure out what the relationships
 are without digging deep into the code for each project, so I am
 sure I got a few things wrong (along with missing a lot of projects).
 
 The relationships are very important for setting up an optimal gate
 structure. I’m less convinced they are important for setting up the
 governance structure, and I do not think we want a specific gate
 configuration embedded in the governance structure at all. That’s
 why I’ve tried to describe general relationships (“optional
 inter-project dependences” vs. “strict co-dependent project groups”
 [1]) up until the very last patch in the series [2], which redefines
 the integrated release in terms of those other relationships and a
 base set of projects.
 
 
 I'm reading and reading and reading and my thoughts keep returning to,
 we're optimizing only for dev. :)
 
 I need to either get over that or decide what parts need tweaking for
 docs and support optimization. I'll get going on reviews -- thanks a
 bunch for all this compilation and for the good blog writing. Much
 appreciated.

The relationships are also quite important for deployment units, because
we're talking about what minimal set of things we're going to say work
together. And we're going to be dictating the minimum lock step upgrade
unit.

Any project that fully stands on its own (like Swift or Ironic, given
that keystone is optional) can be stood up on its own. OK, they go in
one bucket and you can tell people: you want this function, just
install this project; it's a vertical on its own. Heat works quite well
against a compute stack you don't run yourself (teams doing this in HP
all the time). I expect Zaqar to be like Swift, and be a thing you can
just have.

That's not the case for the compute stack, for better or worse. And,
based on the User Surveys, the compute stack is what most people are
trying to get out of OpenStack right now. So we should unconfuse that,
create a smaller basic building block (that people can understand), and
provide guidance on how you could expand your function with our vast
array of great expansion sets.

OpenStack has enough parts that you can mix and match as much as you
want, but much like the 600 config options in Nova, we really can't
document every combination of things.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] why do we have os-attach-interfaces in the v3 API?

2014-10-03 Thread Matt Riedemann



On 10/2/2014 10:30 PM, Christopher Yeoh wrote:

On Thu, 02 Oct 2014 15:57:55 -0500
Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


The os-interface (v2) and os-attach-interfaces (v3) APIs are only
used for the neutron network API, you'll get a NotImplemented if
trying to call the related methods with nova-network [1].

Since we aren't proxying to neutron in the v3 API (v2.1), why does
os-attach-interfaces [2] exist?  Was this just an oversight?  If so,
please allow me to delete it. :)


The proxying work was not done in Juno due to time constraints, but I
think we should be able to cover it early in Kilo (most of the patches
are pretty much ready). To have a V2.1 which is functionality
equivalent to V2 (allowing us to remove the V2 code) we have to
implement proxying.

Chris



[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/api.py?id=2014.2.rc1#n310
[2]
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/plugins/v3/attach_interfaces.py?id=2014.2.rc1







OK I didn't realize proxying to neutron was back on the table for v2.1 
but that makes sense.  So the floating IP extensions will be back too I 
assume?  And that probably means that this [1] proxy patch for quota 
usages and limits is back on the table for v2.1?  If so, I can probably 
restore that and start cleaning it up.  Back when work stopped on it in 
Icehouse I think the thing that was left to do was add it to the base 
Network API so we didn't have the is_neutron checks in the compute API 
code (so this will work like the show_ports and list_ports network APIs 
- implemented by the neutronv2 API and not implemented by the 
nova-network API).


[1] https://review.openstack.org/#/c/43822/
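The pattern described above (implement the proxy call in the neutronv2 API and leave it unimplemented in the base network API, so compute-layer callers need no is_neutron checks) can be sketched roughly like this; the class and method names here are illustrative, not Nova's actual ones:

```python
class NetworkAPI(object):
    """Base network API: backends override only what they support."""

    def get_floating_ip_usage(self, context, project_id):
        # Backends that cannot answer this simply inherit the raise.
        raise NotImplementedError()


class NeutronAPI(NetworkAPI):
    def get_floating_ip_usage(self, context, project_id):
        # A real driver would call the neutron client here; a canned
        # value stands in for the remote call in this sketch.
        return {"floating_ips": 2}


class NovaNetworkAPI(NetworkAPI):
    pass  # no proxy support, like nova-network


def quota_usage(api, context, project_id):
    # Callers never check which backend they have; unsupported
    # backends raise uniformly and we degrade gracefully.
    try:
        return api.get_floating_ip_usage(context, project_id)
    except NotImplementedError:
        return None

print(quota_usage(NeutronAPI(), None, "p1"))      # {'floating_ips': 2}
print(quota_usage(NovaNetworkAPI(), None, "p1"))  # None
```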

--

Thanks,

Matt Riedemann




[openstack-dev] Quota management and enforcement across projects

2014-10-03 Thread Salvatore Orlando
Hi,

Quota management is currently one of those things where every openstack
project does its own thing. While quotas are obviously managed in a similar
way for each project, there are subtle differences which ultimately result
in lack of usability.

I recall that in the past there have been several calls for unifying quota
management. The blueprint [1] for instance, hints at the possibility of
storing quotas in keystone.
On the other hand, the blazar project [2, 3] seems to aim at solving this
problem for good enabling resource reservation and therefore potentially
freeing openstack projects from managing and enforcing quotas.

While Blazar is definitely a good thing to have, I'm not entirely sure we
want to make it a required component for every deployment. Perhaps single
projects should still be able to enforce quota. On the other hand, at least
on paper, the idea of making Keystone THE endpoint for managing quotas,
and then letting the various projects enforce them, sounds promising - is
there any reason why this blueprint is stalled to the point that it
seems forgotten now?

I'm coming to the mailing list with these random questions about quota
management, for two reasons:
1) despite developing and using openstack on a daily basis I'm still
confused by quotas
2) I've found a race condition in neutron quotas and the fix is not
trivial. So, rather than start coding right away, it might probably make
more sense to ask the community if there is already a known better approach
to quota management - and obviously enforcement.

Thanks in advance,
Salvatore

[1] https://blueprints.launchpad.net/keystone/+spec/service-metadata
[2] https://wiki.openstack.org/wiki/Blazar
[3] https://review.openstack.org/#/q/project:stackforge/blazar,n,z
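For what it's worth, the usual shape of such a race is a check-then-update on the usage counter; the common fix is to fold the limit check into the update itself so the two become one atomic statement. A minimal sketch, using sqlite as a stand-in for the real database and made-up table/column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE quota (project TEXT PRIMARY KEY, usage INT, hard_limit INT)")
conn.execute("INSERT INTO quota VALUES ('demo', 0, 2)")
conn.commit()

def reserve(conn, project, delta):
    # Atomic check-and-increment: the WHERE clause enforces the limit
    # in the same statement that bumps the counter, so two concurrent
    # reservations cannot both read the old usage and both succeed.
    cur = conn.execute(
        "UPDATE quota SET usage = usage + ? "
        "WHERE project = ? AND usage + ? <= hard_limit",
        (delta, project, delta))
    conn.commit()
    return cur.rowcount == 1  # True if the reservation fit under quota

print(reserve(conn, "demo", 1))  # True
print(reserve(conn, "demo", 1))  # True
print(reserve(conn, "demo", 1))  # False, over quota
```

Whether the counter lives in the service's own database or behind a Keystone endpoint, the enforcement step needs this kind of atomicity.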


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Chris Dent

On Fri, 3 Oct 2014, Anne Gentle wrote:


I'm reading and reading and reading and my thoughts keep returning to,
we're optimizing only for dev. :)


Yes, +many.

In my reading it seems like we are trying to optimize the process for
developers, which is exactly the opposite of what we want to be doing if
we want to address the perceived quality problems that we see. We should
be optimizing for the various user groups (which, I admit, have been
identified pretty well in some of the blog posts).

This would, of course, mean enhancing the docs (and other cross
project) process...

At the moment we're trying to create governance structures that
incrementally improve the existing model for how development is
being done.

I think we should consider more radical changes, changes which allow
us to work on what users want: an OpenStack that works.

To do that I think we need to figure out two things:

* how to fail faster
* how to stop thinking of ourselves as being on particular projects

I got hired to work on telemetry, but I've managed to do most of my
work in QA related things because what's the point of making new
stuff if you can't test it reliably? What I'd really like to say my
job is is making OpenStack the best it possibly can be.

If we keep focusing on the various services as entangled but separate
and competing interests rather than on how to make OpenStack good, we're
missing the point and the boat.

Our job as developers is to make things easier (or at least
possible) for the people who use the stuff we build. Naturally we
want to make that as frictionless as possible, but not at the cost
of the people's ease.

There are many perverse incentives in OpenStack's culture which
encourage people to hoard. For example it is useful to keep code in
one's own team's repository because the BPs, reviews and bugs which
reflect on that repository reflect on the value of the team.

Who is that good for?

So much of the talk is about trying to figure out how to make the
gate more resilient. No! How about we listen to what the gate is
telling us: our code is full of race conditions, is a memory pig, has
poorly defined contracts, and is just downright tediously slow and
heavy. And _fix that_.

What I think we need to do to improve is enhance the granularity at
which someone can participate. Smaller repos, smaller teams, cleaner
boundaries between things. Disconnected (and rolling) release cycles.
Achieve fail fast by testing in (much) smaller lumps before code
ever reaches the global CI. You know: make better tests locally that
confirm good boundaries. Don't run integration tests until the unit,
pep8 and in-tree functional tests have passed. If there is a
failure: exit! FAIL! Don't run all the rest of the tests uselessly.
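The fail-fast ordering above can be sketched as a trivial stage runner that stops at the first failure; the stage names are examples, and a real runner would shell out to tox and friends rather than call lambdas:

```python
def run_stages(stages):
    """Run check stages in order; stop and report at the first failure."""
    for name, check in stages:
        if not check():
            return "FAIL: %s" % name  # exit early, skip the expensive stages
    return "PASS"

ran = []
stages = [
    ("pep8", lambda: ran.append("pep8") or True),
    ("unit", lambda: ran.append("unit") or False),  # simulate a unit failure
    ("functional", lambda: ran.append("functional") or True),
    ("integration", lambda: ran.append("integration") or True),
]

print(run_stages(stages))  # FAIL: unit
print(ran)                 # functional and integration never ran
```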

We need to not conflate the code and its structure with the
structure of our governance. We need to put responsibility for the
quality of the code on to the people who make it, not big infra.
We need to make it easier for people to participate in that quality
making. And most importantly we need to make sure the users are
driving what we do and we need to make it far easier for them to do
that driving.

Obviously there are many more issues than these, but I think some of
the above is being left out of the discussion, and this message
needs to stop.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Chris Dent

On Fri, 3 Oct 2014, Sean Dague wrote:

OpenStack is enough parts that you can mix and match as much as you
want, but much like the 600 config options in Nova, we really can't
document every combination of things.


People seem to talk about this flexibility as if it were a good
thing. It's not. There's tyranny of choice all over OpenStack. Is
that good for real people or just large players and our corporate
hosts?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [Ironic][Ceilometer] IPMI Sensor naming in Ceilometer

2014-10-03 Thread Jim Mankovich

Hello All,

In a nutshell, I've found the current IPMI Ceilometer sensor naming 
structure difficult to deal with from a programmatic perspective if a 
consumer wants to simply read all the sensors for a given Ironic node.
The current naming scheme just doesn't seem to provide a simple, fast way
to get all the sensors for a given Ironic node from Ceilometer.


I detailed what I learned and proposed a potential alternative naming
structure in the Ironic/Ceilometer bug:

https://bugs.launchpad.net/ironic/+bug/1377157

I am curious what the intended use model was for the current
naming scheme.
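For illustration only: if sensor resources were keyed by the Ironic node UUID plus a sensor name (one naming option from the bug report, not necessarily the current scheme), then reading all sensors for a node would reduce to a simple prefix match:

```python
# Hypothetical sample resource_ids of the form "<node-uuid>-<sensor-name>";
# this mirrors one naming option from the bug report, not the current scheme.
samples = [
    {"resource_id": "node1-fan1", "meter": "hardware.ipmi.fan", "value": 4000},
    {"resource_id": "node1-temp1", "meter": "hardware.ipmi.temperature", "value": 42},
    {"resource_id": "node2-fan1", "meter": "hardware.ipmi.fan", "value": 3800},
]

def sensors_for_node(samples, node_uuid):
    # One prefix scan returns every sensor for the node, with no need
    # to know each sensor's name in advance.
    prefix = node_uuid + "-"
    return [s for s in samples if s["resource_id"].startswith(prefix)]

for s in sensors_for_node(samples, "node1"):
    print(s["meter"], s["value"])
```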


Regards,
Jim

--
--- Jim Mankovich | jm...@hp.com (US Mountain Time) ---



Re: [openstack-dev] Quota management and enforcement across projects

2014-10-03 Thread Robert van Leeuwen
 I recall that in the past there have been several calls for unifying quota 
 management.
 The blueprint [1] for instance, hints at the possibility of storing quotas in 
 keystone.
As an end-user: +1
For me it totally makes sense to put the quotas and access together.

 On the other hand, the blazar project [2, 3] seems to aim at solving this 
 problem for good
 enabling resource reservation and therefore potentially freeing openstack 
 projects from managing and enforcing quotas.
 While Blazar is definitely a good thing to have, I'm not entirely sure we
 want to make it a required component for every deployment.

Totally agree, I'd rather not run more components than strictly
necessary.
It is already quite a lot of work to test all components each time we do
upgrades.
Adding complexity / more moving parts is not at the top of my list unless
strictly necessary.

Cheers,
Robert van Leeuwen




[openstack-dev] [Gnocchi][Ceilometer] Gnocchi performance tests

2014-10-03 Thread Ilya Tyaptin
Hi, folks!

We decided to do some Gnocchi performance testing as part of our summit
talk preparation.

Folks who are working on Gnocchi these days, could you add to the etherpad
the test scenarios that are most helpful and interesting in your opinion?


Etherpad - https://etherpad.openstack.org/p/gnocchi_performance_tests

-- 

Best regards,

Tyaptin Ilia,

Junior Software Engineer.


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Doug Hellmann

On Oct 3, 2014, at 9:25 AM, Anne Gentle a...@openstack.org wrote:

 
 
 On Fri, Oct 3, 2014 at 8:07 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
 devananda@gmail.com wrote:
 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com wrote:
  As promised at this week’s TC meeting, I have applied the various blog 
  posts and mailing list threads related to changing our governance model to 
  a series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own opinions, and 
  look at how each proposal needs to be reflected in our current policies so 
  we do not drop commitments we want to retain along with the processes we 
  are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as a 
  series rather than one big patch. Please consider the patches together 
  when commenting. There are many related changes, and some incremental 
  steps won’t make sense without the changes that come after (hey, just like 
  code!).
 
  Doug
 
  [1] 
  https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes
 
 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)
 
 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy
 
 
 After seeing Jay's idea of making a yaml file modeling things and talking to 
 devananda about this I went ahead and tried to graph the relationships out.
 
 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file: 
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png
  
 It turns out it's really hard to figure out what the relationships are 
 without digging deep into the code for each project, so I am sure I got a 
 few things wrong (along with missing a lot of projects).
 
 The relationships are very important for setting up an optimal gate 
 structure. I’m less convinced they are important for setting up the 
 governance structure, and I do not think we want a specific gate 
 configuration embedded in the governance structure at all. That’s why I’ve 
 tried to describe general relationships (“optional inter-project dependences” 
 vs. “strict co-dependent project groups” [1]) up until the very last patch in 
 the series [2], which redefines the integrated release in terms of those 
 other relationships and a base set of projects.
 
 
 I'm reading and reading and reading and my thoughts keep returning to, we're 
 optimizing only for dev. :)

Yes, that’s a gap in what has been proposed. There is some discussion about 
addressing cross-project teams more fully on 
https://review.openstack.org/#/c/125789/ and I agree we need to be more 
explicit about our intent, both in saying which cross-project teams are 
official and what that means to the other teams.

Doug

 
 I need to either get over that or decide what parts need tweaking for docs 
 and support optimization. I'll get going on reviews -- thanks a bunch for all 
 this compilation and for the good blog writing. Much appreciated.
 
 Anne
  
 Doug
 
 [1] 
 https://review.openstack.org/#/c/125785/2/reference/project-testing-policies.rst
 [2] https://review.openstack.org/#/c/125789/
 
 
 -Deva
 


Re: [openstack-dev] Quota management and enforcement across projects

2014-10-03 Thread Duncan Thomas
Taking quota out of the service / adding remote calls for quota
management is going to make things fragile - you've somehow got to
deal with the cases where your quota manager is slow, goes away,
hiccups, drops connections etc. You'll also need some way of
reconciling actual usage against quota usage periodically, to detect
problems.
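The periodic reconciliation mentioned above boils down to diffing the quota store's counters against real resource counts; a minimal sketch with made-up data shapes:

```python
def reconcile(recorded_usage, actual_resources):
    """Compare quota-store usage counters against real resource counts.

    Returns {project: (recorded, actual)} for every project that has
    drifted, e.g. because a remote quota manager hiccuped mid-operation.
    """
    drift = {}
    for project in set(recorded_usage) | set(actual_resources):
        recorded = recorded_usage.get(project, 0)
        actual = len(actual_resources.get(project, []))
        if recorded != actual:
            drift[project] = (recorded, actual)
    return drift

recorded = {"tenant-a": 3, "tenant-b": 1}
actual = {"tenant-a": ["vm1", "vm2", "vm3"], "tenant-b": []}
print(reconcile(recorded, actual))  # {'tenant-b': (1, 0)}, a leaked reservation
```

A real reconciler would also have to decide which side wins and how to repair the counter, which is where most of the operational complexity lives.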

On 3 October 2014 15:03, Salvatore Orlando sorla...@nicira.com wrote:
 Hi,

 Quota management is currently one of those things where every openstack
 project does its own thing. While quotas are obviously managed in a similar
 way for each project, there are subtle differences which ultimately result
 in lack of usability.

 I recall that in the past there have been several calls for unifying quota
 management. The blueprint [1] for instance, hints at the possibility of
 storing quotas in keystone.
 On the other hand, the blazar project [2, 3] seems to aim at solving this
 problem for good enabling resource reservation and therefore potentially
 freeing openstack projects from managing and enforcing quotas.

 While Blazar is definitely a good thing to have, I'm not entirely sure we
 want to make it a required component for every deployment. Perhaps single
 projects should still be able to enforce quota. On the other hand, at least
 on paper, the idea of making Keystone THE endpoint for managing quotas,
 and then letting the various projects enforce them, sounds promising - is
 there any reason why this blueprint is stalled to the point that it
 seems forgotten now?

 I'm coming to the mailing list with these random questions about quota
 management, for two reasons:
 1) despite developing and using openstack on a daily basis I'm still
 confused by quotas
 2) I've found a race condition in neutron quotas and the fix is not trivial.
 So, rather than start coding right away, it might probably make more sense
 to ask the community if there is already a known better approach to quota
 management - and obviously enforcement.

 Thanks in advance,
 Salvatore

 [1] https://blueprints.launchpad.net/keystone/+spec/service-metadata
 [2] https://wiki.openstack.org/wiki/Blazar
 [3] https://review.openstack.org/#/q/project:stackforge/blazar,n,z





-- 
Duncan Thomas



Re: [openstack-dev] [Gnocchi][Ceilometer] Gnocchi performance tests

2014-10-03 Thread Julien Danjou
On Fri, Oct 03 2014, Ilya Tyaptin wrote:

 We decided to do some Gnocchi performance testing as part of our summit
 talk preparation.

 Folks who are working on Gnocchi these days, could you add to the etherpad
 the test scenarios that are most helpful and interesting in your opinion?


 Etherpad - https://etherpad.openstack.org/p/gnocchi_performance_tests

Cool idea! I've written a bunch of basic scenarios that could be
performed. We're working on some new features that might be merged in a
few days, maybe we'll have more by the time you start the benchmark!

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Dean Troyer
On Fri, Oct 3, 2014 at 9:32 AM, Chris Dent chd...@redhat.com wrote:

 People seem to talk about this flexibility as if it were a good
 thing. It's not. There's tyranny of choice all over OpenStack. Is
 that good for real people or just large players and our corporate
 hosts?


+

We have a history of trying to be all things to all people, and our
structure makes saying no hard and unpopular.  It appears as though
competitors with checkbooks are saying no to each other, not that technical
leaders are saying yes to good projects and implementations that have
technical merit.

I believe that part of the all-things model is due to our Corporate
structure which has the notion that they all have to differentiate their
cloud from the others.  For more on how that is working out see DefCore, et
al.  At the technical level we like to say we are beyond that, and I
believe that many really are. But many is not all, and the results are
clearly not acceptable; witness where we are having this conversation...

dt

-- 

Dean Troyer
dtro...@gmail.com


[openstack-dev] [Nova] structure of supported_instances field (architecture, hypervisor type, vm mode) ?

2014-10-03 Thread Murray, Paul (HP Cloud)
Hi All,

I have a question about information used to determine if a host supports an 
image.

The virt drivers all provide a list of triples of the form (architecture, 
hypervisor type, vm mode). Each triple is compared to the corresponding three 
image properties in a request in the scheduler filter called 
image_props_filter.py to determine if a host can support the given image. This 
information is held in a field in the compute_nodes table called 
supported_instances.

I am adding the supported_instances field to the ComputeNode object. As it has 
a definite structure I want to make sure the structure is checked.

So here is my question:

Am I right in thinking that there are always three properties in each element 
in the list (architecture, hypervisor type, vm mode)?

As far as I can tell, these are the only values checked in the filter and all 
the virt drivers always generate those three. If so, is it reasonable for me to 
insist that the elements of the supported_instances list have three strings in them 
when the object checks the data type? I don't want to restrict the structure to 
being strictly a list of triples if there might in fact be a variable number of 
properties provided.

Paul
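For context, the filter check being described reduces to matching the image's three properties against each advertised triple, with missing image properties acting as wildcards; a simplified sketch, not the actual ImagePropertiesFilter code:

```python
def host_supports_image(supported_instances, image_props):
    """supported_instances: list of (arch, hv_type, vm_mode) triples.

    A missing image property acts as a wildcard matching anything,
    which is why each element really does need exactly three values.
    """
    wanted = (image_props.get("architecture"),
              image_props.get("hypervisor_type"),
              image_props.get("vm_mode"))
    for triple in supported_instances:
        if all(w is None or w == t for w, t in zip(wanted, triple)):
            return True
    return False

caps = [("x86_64", "qemu", "hvm"), ("i686", "qemu", "hvm")]
print(host_supports_image(caps, {"architecture": "x86_64"}))   # True
print(host_supports_image(caps, {"architecture": "aarch64"}))  # False
print(host_supports_image(caps, {}))                           # True, no constraints
```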


Re: [openstack-dev] Contributing to docs without Docbook -- YES you can!

2014-10-03 Thread Nick Chase
Yes, these are great, thanks.  We'll go through and see what we can pull.
Thank you!

  Nick

On Tue, Sep 30, 2014 at 3:26 AM, Akilesh K akilesh1...@gmail.com wrote:

 Sorry the correct links are
 1. Comparison between networking devices and linux software components
 http://fosskb.wordpress.com/2014/06/25/a-bite-of-virtual-linux-networking/
 2. Openstack ovs plugin configuration for single/multi machine setup
 http://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/
 3. Neutron ovs plugin layer 2 connectivity
 http://fosskb.wordpress.com/2014/06/19/l2-connectivity-in-openstack-using-openvswitch-mechanism-driver/
 4. Layer 3 connectivity using neutron-l3-agent
 http://fosskb.wordpress.com/2014/09/15/l3-connectivity-using-neutron-l3-agent/

 On Tue, Sep 30, 2014 at 12:50 PM, Andreas Scheuring 
 scheu...@linux.vnet.ibm.com wrote:

 Hi Ageeleshwar,
 the links you provided are wordpress admin links and require a login. Is
 there also a public link available?
 Thanks
 --
 Andreas
 (irc: scheuran)


 On Tue, 2014-09-30 at 09:33 +0530, Akilesh K wrote:
  Hi,
 
  I saw the table of contents. I have posted documents on configuring
  openstack neutron-openvswitch-plugin, comparison between networking
  devices and their linux software components and also about the working
  principles of neutron-ovs-plugin at layer 2 and neutron-l3-agent at
  layer 3. My intention with the posts was to aid beginners in
  debugging neutron issues.
 
 
  The problem is that I am not sure where exactly these posts fit in the
  table of contents. Anyone with suggestions, please reply to me. Below
  are the link to the blog posts
 
 
  1. Comparison between networking devices and linux software components
 
  2. Openstack ovs plugin configuration for single/multi machine setup
 
  3. Neutron ovs plugin layer 2 connectivity
 
  4. Layer 3 connectivity using neutron-l3-agent
 
 
  I would be glad to include sub sections in any of these posts if that
  helps.
 
 
  Thank you,
  Ageeleshwar K
 
 
  On Tue, Sep 30, 2014 at 2:36 AM, Nicholas Chase nch...@mirantis.com
  wrote:
  As you know, we're always looking for ways for people to be
  able to contribute to Docs, but we do understand that there's
  a certain amount of pain involved in dealing with Docbook.  So
  to try and make this process easier, we're going to try an
  experiment.
 
  What we've put together is a system where you can update a
  wiki with links to content in whatever form you've got it --
  gist on github, wiki page, blog post, whatever -- and we have
  a dedicated resource that will turn it into actual
  documentation, in Docbook. If you want to be added as a
  co-author on the patch, make sure to provide us the email
  address you used to become a Foundation member.
 
  Because we know that the networking documentation needs
  particular attention, we're starting there.  We have a
  Networking Guide, from which we will ultimately pull
  information to improve the networking section of the admin
  guide.  The preliminary Table of Contents is here:
  https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the
  instructions for contributing are as follows:
 
   1. Pick an existing topic or create a new topic. For new
  topics, we're primarily interested in deployment
  scenarios.
   2. Develop content (text and/or diagrams) in a format
  that supports at least basic markup (e.g., titles,
  paragraphs, lists, etc.).
   3. Provide a link to the content (e.g., gist on
  github.com, wiki page, blog post, etc.) under the
  associated topic.
   4. Send e-mail to reviewers network...@openstacknow.com.
   5. A writer turns the content into an actual patch, with
  tracking bug, and docs reviewers (and the original
  author, we would hope) make sure it gets reviewed and
  merged.
 
  Please let us know if you have any questions/comments.
  Thanks!
 
    Nick
  --
  Nick Chase
  1-650-567-5640
  Technical Marketing Manager, Mirantis
  Editor, OpenStack:Now
 

Re: [openstack-dev] [Nova] structure of supported_instances field (architecture, hypervisor type, vm mode) ?

2014-10-03 Thread Daniel P. Berrange
On Fri, Oct 03, 2014 at 03:16:02PM +, Murray, Paul (HP Cloud) wrote:
 Hi All,
 
 I have a question about information used to determine if a host supports an 
 image.
 
 The virt drivers all provide a list of triples of the form (architecture,
 hypervisor type, vm mode). Each triple is compared to the corresponding three
 image properties in a request in the scheduler filter called 
 image_props_filter.py
 to determine if a host can support the given image. This information is held 
 in a
 field in the compute_nodes table called supported_instances.
 
 I am adding the supported_instances field to the ComputeNode object. As it 
 has a
 definite structure I want to make sure the structure is checked.

 So here is my question:
 
 Am I right in thinking that there are always three properties in each element
 in the list (architecture, hypervisor type, vm mode)?
 
 As far as I can tell, these are the only values checked in the filter and all
 the virt drivers always generate those three. If so, is it reasonable for me
 to insist the elements of the supported_instance list have three strings in
 them when the object checks the data type? I don't want to restrict the 
 structure to being strictly a list of triples if there might in fact be a
 variable number of properties provided.

Yes, there should always be exactly 3 elements in the list and they should
all have non-NULL values. Valid values are defined as constants in the
nova.compute.{arch,hvtype,vm_mode}. So any deviation from this is considered
to be a bug.

FYI I'm currently working on changing the get_available_resources() method
to return a formal object, instead of its horrible undocumented dict().
With this, the current list of lists / JSON used for supported_instances
will turn into a formal object with strict validation.
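Pending that formal object, a strict check along those lines might look like the sketch below; it is illustrative only, and real code would also validate each field against the constants in nova.compute.{arch,hvtype,vm_mode}:

```python
# Illustrative strict validation of the supported_instances structure: a
# list of (arch, hvtype, vm_mode) triples of non-empty strings. Real code
# would also check each field against the constants in
# nova.compute.{arch,hvtype,vm_mode}; this sketch only checks the shape.
def validate_supported_instances(value):
    if not isinstance(value, list):
        raise ValueError('supported_instances must be a list')
    for entry in value:
        if not isinstance(entry, (list, tuple)) or len(entry) != 3:
            raise ValueError(
                'each element must be an (arch, hvtype, vm_mode) triple')
        if not all(isinstance(field, str) and field for field in entry):
            raise ValueError('all three fields must be non-empty strings')
    return value

validate_supported_instances([['x86_64', 'qemu', 'hvm']])  # passes
```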

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] Candidate proposals for TC (Technical Committee) positions are now open

2014-10-03 Thread Tristan Cacqueray
Candidate proposals for the Technical Committee positions (6 positions)
are now open and will remain open until 05:59 UTC October 10, 2014.

Candidates for the Technical Committee Positions: Any Foundation
individual member can propose their candidacy for an available,
directly-elected TC seat. [0] (except the seven TC members who were
elected for a one-year seat last April: Thierry Carrez, Jay Pipes,
Vishvananda Ishaya, Michael Still, Jim Blair, Mark McClain, Devananda
van der Veen) [1]

Propose your candidacy by sending an email to the openstack-dev at
lists.openstack.org mailing-list, with the subject: TC candidacy.
Please start your own thread so we have one thread per candidate. Since
there will be many people voting for folks with whom they might not have
worked, including a platform or statement to help voters make an
informed choice is recommended, though not required.

NEW: In order to help the electorate learn more about the candidates we
have posted a template of questions to which all candidates are
requested to respond. [2]

Anita and I will confirm candidates with an email to the candidate
thread as well as create a link to the confirmed candidate's proposal
email on the wikipage for this election. [1]

The election will be held from October 10 through to 13:00 UTC October
17, 2014. The electorate are the Foundation individual members that are
also committers for one of the official programs projects [3] over the
Icehouse-Juno timeframe (September 26, 2013 06:00 UTC to September 26,
2014 05:59 UTC), as well as the extra-ATCs who are acknowledged by the
TC. [4]

Please see the wikipage for additional details about this election. [1]

If you have any questions please be sure to either voice them on the
mailing list or email Anita or myself [5] or contact Anita or myself on IRC.

Thank you, and I look forward to reading your candidate proposals,
Tristan

[0] https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee
[1] https://wiki.openstack.org/wiki/TC_Elections_October_2014
[2]
https://wiki.openstack.org/wiki/TC_Elections_October_2014#TC_Election_Questions
[3]
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=sept-2014-elections
Note the tag for this repo, sept-2014-elections.
[4]
http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs?id=sept-2014-elections
[5] Anita: anteaya at anteaya dot info
 Tristan: tristan dot cacqueray at enovance dot com





Re: [openstack-dev] [Nova] structure of supported_instances field (architecture, hypervisor type, vm mode) ?

2014-10-03 Thread Jay Pipes



On 10/03/2014 11:29 AM, Daniel P. Berrange wrote:

On Fri, Oct 03, 2014 at 03:16:02PM +, Murray, Paul (HP Cloud) wrote:

Hi All,

I have a question about information used to determine if a host supports an 
image.

The virt drivers all provide a list of triples of the form (architecture,
hypervisor type, vm mode). Each triple is compared to the corresponding three
image properties in a request in the scheduler filter called 
image_props_filter.py
to determine if a host can support the given image. This information is held in 
a
field in the compute_nodes table called supported_instances.

I am adding the supported_instances field to the ComputeNode object. As it has a
definite structure I want to make sure the structure is checked.

So here is my question:

Am I right in thinking that there are always three properties in each element
in the list (architecture, hypervisor type, vm mode)?

As far as I can tell, these are the only values checked in the filter and all
the virt drivers always generate those three. If so, is it reasonable for me
to insist the elements of the supported_instance list have three strings in
them when the object checks the data type? I don't want to restrict the
structure to being strictly a list of triples if there might in fact be a
variable number of properties provided.


Yes, there should always be exactly 3 elements in the list and they should
all have non-NULL values. Valid values are defined as constants in the
nova.compute.{arch,hvtype,vm_mode}. So any deviation from this is considered
to be a bug.

FYI I'm currently working on changing the get_available_resources() method
to return a formal object, instead of its horrible undocumented dict().
With this, the current list of lists / JSON used for supported_instances
will turn into a formal object with strict validation.


yay!

-jay



Re: [openstack-dev] [Nova] structure of supported_instances field (architecture, hypervisor type, vm mode) ?

2014-10-03 Thread Sylvain Bauza


On 03/10/2014 17:29, Daniel P. Berrange wrote:

On Fri, Oct 03, 2014 at 03:16:02PM +, Murray, Paul (HP Cloud) wrote:

Hi All,

I have a question about information used to determine if a host supports an 
image.

The virt drivers all provide a list of triples of the form (architecture,
hypervisor type, vm mode). Each triple is compared to the corresponding three
image properties in a request in the scheduler filter called 
image_props_filter.py
to determine if a host can support the given image. This information is held in 
a
field in the compute_nodes table called supported_instances.

I am adding the supported_instances field to the ComputeNode object. As it has a
definite structure I want to make sure the structure is checked.

So here is my question:

Am I right in thinking that there are always three properties in each element
in the list (architecture, hypervisor type, vm mode)?

As far as I can tell, these are the only values checked in the filter and all
the virt drivers always generate those three. If so, is it reasonable for me
to insist the elements of the supported_instance list have three strings in
them when the object checks the data type? I don't want to restrict the
structure to being strictly a list of triples if there might in fact be a
variable number of properties provided.

Yes, there should always be exactly 3 elements in the list and they should
all have non-NULL values. Valid values are defined as constants in the
nova.compute.{arch,hvtype,vm_mode}. So any deviation from this is considered
to be a bug.

FYI I'm currently working on changing the get_available_resources() method
to return a formal object, instead of its horrible undocumented dict().
With this, the current list of lists / JSON used for supported_instances
will turn into a formal object with strict validation.



Oh, thanks Daniel for that effort; it will greatly ease the scheduler 
split, because we want to turn all the updates to the Scheduler, as well 
as the request_data dict used when selecting a destination, into objects.


Thanks,
-Sylvain


Regards,
Daniel





Re: [openstack-dev] [Nova] structure of supported_instances field (architecture, hypervisor type, vm mode) ?

2014-10-03 Thread Matt Riedemann



On 10/3/2014 10:29 AM, Daniel P. Berrange wrote:

On Fri, Oct 03, 2014 at 03:16:02PM +, Murray, Paul (HP Cloud) wrote:

Hi All,

I have a question about information used to determine if a host supports an 
image.

The virt drivers all provide a list of triples of the form (architecture,
hypervisor type, vm mode). Each triple is compared to the corresponding three
image properties in a request in the scheduler filter called 
image_props_filter.py
to determine if a host can support the given image. This information is held in 
a
field in the compute_nodes table called supported_instances.

I am adding the supported_instances field to the ComputeNode object. As it has a
definite structure I want to make sure the structure is checked.

So here is my question:

Am I right in thinking that there are always three properties in each element
in the list (architecture, hypervisor type, vm mode)?

As far as I can tell, these are the only values checked in the filter and all
the virt drivers always generate those three. If so, is it reasonable for me
to insist the elements of the supported_instance list have three strings in
them when the object checks the data type? I don't want to restrict the
structure to being strictly a list of triples if there might in fact be a
variable number of properties provided.


Yes, there should always be exactly 3 elements in the list and they should
all have non-NULL values. Valid values are defined as constants in the
nova.compute.{arch,hvtype,vm_mode}. So any deviation from this is considered
to be a bug.

FYI I'm currently working on changing the get_available_resources() method
to return a formal object, instead of its horrible undocumented dict().
With this, the current list of lists / JSON used for supported_instances
will turn into a formal object with strict validation.

Regards,
Daniel



Ahh, reminds me of the good old days when we had mega-tuple 
requested_networks in the network_api.allocate_for_instance code.  Those 
were the days...


Still need to make Dan Smith a t-shirt for cleaning that up.

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Jay Pipes

On 10/03/2014 10:06 AM, Chris Dent wrote:

On Fri, 3 Oct 2014, Anne Gentle wrote:


I'm reading and reading and reading and my thoughts keep returning to,
we're optimizing only for dev. :)


Yes, +many.


plus infinity.


In my reading it seems like we are trying to optimize the process for
developers which is exactly the opposite of what we want to be doing if
we want to address the perceived quality problems that we see. We should
be optimizing for the various user groups (which, I admit, have been
identified pretty well in some of the blog posts).


Precisely.


This would, of course, mean enhancing the docs (and other cross
project) process...


Yep! This is something I harped on quite a bit in my second blog post.


At the moment we're trying to create governance structures that
incrementally improve the existing model for how development is
being done.

I think we should consider more radical changes, changes which allow
us to work on what users want: an OpenStack that works.


Yay!


To do that I think we need to figure out two things:

* how to fail faster
* how to stop thinking of ourselves as being on particular projects


More yay!


I got hired to work on telemetry, but I've managed to do most of my
work in QA related things because what's the point of making new
stuff if you can't test it reliably? What I'd really like to say my
job is is making OpenStack the best it possibly can be.

If we keep focusing on the various services as entangled but separate
and competing interests rather than on how to make OpenStack good, we're
missing the point and the boat.

Our job as developers is to make things easier (or at least
possible) for the people who use the stuff we build. Naturally we
want to make that as frictionless as possible, but not at the cost
of the people's ease.

There are many perverse incentives in OpenStack's culture which
encourage people to hoard. For example, it is useful to keep code in
one's own team's repository because the BPs, reviews, and bugs attached
to that repository reflect on the value of the team.

Who is that good for?

So much of the talk is about trying to figure out how to make the
gate more resilient. No! How about we listen to what the gate is
telling us: our code is full of race conditions and poorly defined
contracts, it's a memory pig, and it's downright tediously slow and
heavy. And _fix that_.


I think it's important to discuss both things: the gate structure (and 
various inefficiencies the gate *policies* create within the system) as 
well as the fragility and design of the code bases themselves.



What I think we need to do to improve is enhance the granularity at
which someone can participate. Smaller repos, smaller teams, cleaner
boundaries between things. Disconnected (and rolling) release cycles.
Achieve fail fast by testing in (much) smaller lumps before code
ever reaches the global CI. You know: make better tests locally that
confirm good boundaries. Don't run integration tests until the unit,
pep8 and in-tree functional tests have passed. If there is a
failure: exit! FAIL! Don't run all the rest of the tests uselessly.


Yup, ++.
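As a toy illustration of that fail-fast ordering (the stage names stand in for real pep8/unit/functional/integration runs; none of this is actual gate code):

```python
# Toy illustration of the fail-fast ordering described above; each stage
# stands in for a real check (e.g. tox -e pep8, tox -e py27, functional,
# integration). The point is purely the ordering and the early exit.
STAGES = [('pep8', True), ('unit', True),
          ('functional', False),   # pretend the functional tests fail
          ('integration', True)]   # never reached if we fail fast

def run_pipeline(stages):
    ran = []
    for name, passed in stages:
        ran.append(name)
        if not passed:
            print('FAIL:', name, '- skipping remaining stages')
            break
    else:
        print('all stages passed')
    return ran

print(run_pipeline(STAGES))  # ['pep8', 'unit', 'functional']
```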


We need to not conflate the code and its structure with the
structure of our governance. We need to put responsibility for the
quality of the code on to the people who make it, not big infra.
We need to make it easier for people to participate in that quality
making. And most importantly we need to make sure the users are
driving what we do and we need to make it far easier for them to do
that driving.

Obviously there are many more issues than these, but I think some of
the above is being left out of the discussion, and this message
needs to stop.


Amen, brother.

-jay



Re: [openstack-dev] [kolla] Kolla Blueprints

2014-10-03 Thread Steven Dake

On 10/01/2014 07:08 PM, Angus Lees wrote:

On Wed, 1 Oct 2014 09:05:23 PM Fox, Kevin M wrote:

Has anyone figured out a way of having a floating ip like feature with
docker so that you can have rabbitmq, mysql, or ceph mon's at fixed ip's
and be able to migrate them around from physical host to physical host and
still have them at fixed locations that you can easily put in static config
files?

This is part of the additional functionality kubernetes adds on top of docker.

kubernetes uses a proxy on every host which knows about all the published
services.  The services share a port space (ie: every service has to have a
unique port assigned), and the proxies know where to forward requests to find
one of the backends for that service.

docker communicates parameters via environment variables and has a few
standard environment variables that are used for links to other containers.
Kubernetes also uses these link env variables but points them at the proxies
instead of directly to the other containers.  Since oslo.config can't look up
environment variables directly (that's something I'd like to add), I have a
simple shell one-liner that expands environment variables in the relevant
config files before starting the openstack server.

As a concrete example: I configure a keystone service in my kubernetes config
and in my static config files I use values like:

identity_uri = http://$ENV[KEYSTONE_PORT_5000_TCP_ADDR]:
$ENV[KEYSTONE_PORT_5000_TCP_PORT]/v2.0

docker/kubernetes sets those env variables to refer to the proxy on the local
host and the port number from my service config - this information is static
for the lifetime of that docker instance.  The proxy will reroute the requests
dynamically to wherever the actual instances are running right now.

I hope that's enough detail - I encourage you to read the kubernetes docs
since they have diagrams, etc that will make it much clearer than the above.
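The expansion step Angus describes is a shell one-liner in practice; the equivalent transformation, sketched here in Python using the $ENV[...] placeholder syntax from the identity_uri example above (the variable values are made up), would be roughly:

```python
import os
import re

# Rough Python equivalent of the shell one-liner described above: expand
# every $ENV[NAME] placeholder in a config value using the container's
# environment. The placeholder syntax follows the identity_uri example;
# the variable values below are made up.
def expand_env(text):
    return re.sub(r'\$ENV\[(\w+)\]',
                  lambda m: os.environ.get(m.group(1), ''), text)

os.environ['KEYSTONE_PORT_5000_TCP_ADDR'] = '10.0.0.5'
os.environ['KEYSTONE_PORT_5000_TCP_PORT'] = '5000'
line = ('identity_uri = http://$ENV[KEYSTONE_PORT_5000_TCP_ADDR]:'
        '$ENV[KEYSTONE_PORT_5000_TCP_PORT]/v2.0')
print(expand_env(line))  # identity_uri = http://10.0.0.5:5000/v2.0
```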

Really nice explanation.

+2!

Regards,
-steve


  - Gus


Maybe iptables rules? Maybe adding another bridge? Maybe just disabling the
docker network stack all together and binding the service to a fixed,
static address on the host?

Also, I ran across:
http://jperrin.github.io/centos/2014/09/25/centos-docker-and-systemd/ and
it does seem to work. I was able to get openssh-server and keystone to work
in the same container without needing to write custom start/stop scripts.
This kind of setup would make a nova compute container much, much easier.

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Wednesday, October 01, 2014 8:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Kolla Blueprints

On 09/30/2014 09:55 AM, Chmouel Boudjnah wrote:

On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake
 sd...@redhat.com wrote:

I've done a first round of prioritization.  I think key things we need
people to step up for are nova and rabbitmq containers.

For the developers, please take a moment to pick a specific blueprint to
 work on.  If you're already working on something, this should help to prevent
duplicate work :)


As I understand in the current implementations[1]  the containers are
configured with a mix of shell scripts using crudini and other shell
command. Is it the way to configure the containers? and is a deployment
tool like Ansible (or others) is something that is planned to be used in
the future?

Chmouel

Chmouel,

 I am not really sure what the best solution for configuring the containers is.  It
is clear to me the current shell scripts are fragile in nature and do not
handle container restart properly.  The idea of using Puppet or Ansible as
a CM tool has been discussed with no resolution.  At the moment, I'm
 satisfied with a somewhat hacky solution if we can get the containers
operational.

Regards,
-steve
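For reference, the crudini-based scripts under discussion boil down to setting individual INI options (crudini --set <file> <section> <option> <value>); the same operation is sketched below with configparser, with made-up section names and values:

```python
import configparser
import io

# The container setup scripts under discussion drive crudini from shell;
# each call just sets one INI option (crudini --set <file> <section>
# <option> <value>). The same operation sketched with configparser;
# section names and values here are made up for illustration.
conf = configparser.ConfigParser()
conf.read_string('[database]\n')
# equivalent of: crudini --set keystone.conf database connection <url>
conf.set('database', 'connection', 'mysql://keystone:pass@db/keystone')
# equivalent of: crudini --set keystone.conf DEFAULT admin_token secret
conf.set('DEFAULT', 'admin_token', 'secret')

out = io.StringIO()
conf.write(out)
print(out.getvalue())
```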




[1] from https://github.com/jlabocki/superhappyfunshow/








Re: [openstack-dev] [nova] why do we have os-attach-interfaces in the v3 API?

2014-10-03 Thread Vishvananda Ishaya
If you can add and delete interfaces it seems like being able to list them is 
useful. You can get the info you need from the networks list when you get the 
instance, but “nova interface-list” seems like a useful addition if we have 
“interface-attach” and “interface-detach”, so in this case I think I would 
suggest that we leave the proxying in and implement it for nova-network as well.

I was looking for an analogue with cinder, but it looks like we have 
volume-attach and volume-detach there without a server-side volume-list 
command, so if people want to kill it I'm OK with that too.

Vish

On Oct 2, 2014, at 2:43 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 
 
 On 10/2/2014 4:34 PM, Vishvananda Ishaya wrote:
 os-attach-interfacees is actually a a forward port of:
 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/attach_interfaces.py
 
 which is a compute action that is valid for both nova-network and neutron:
 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n2991
 
 On Oct 2, 2014, at 1:57 PM, Matt Riedemann mrie...@linux.vnet.ibm.com 
 wrote:
 
 The os-interface (v2) and os-attach-interfaces (v3) APIs are only used for 
 the neutron network API, you'll get a NotImplemented if trying to call the 
 related methods with nova-network [1].
 
 Since we aren't proxying to neutron in the v3 API (v2.1), why does 
 os-attach-interfaces [2] exist?  Was this just an oversight?  If so, please 
 allow me to delete it. :)
 
 [1] 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/network/api.py?id=2014.2.rc1#n310
 [2] 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/plugins/v3/attach_interfaces.py?id=2014.2.rc1
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 
 
 OK so create/delete call the compute_api to attach/detach, but show and index 
 are calling the network_api on port methods which are neutron only, so I 
 guess that's what I'm talking about as far as removing. Personally I don't 
 think it hurts anything, but I'm getting mixed signals about the stance on 
 neutron proxying in the v2.1 API.
 
 -- 
 
 Thanks,
 
 Matt Riedemann
 





Re: [openstack-dev] Candidate proposals for TC (Technical Committee) positions are now open

2014-10-03 Thread Flavio Percoco
On 10/03/2014 05:38 PM, Tristan Cacqueray wrote:
 Candidate proposals for the Technical Committee positions (6 positions)
 are now open and will remain open until 05:59 UTC October 10, 2014.
 
 Candidates for the Technical Committee Positions: Any Foundation
 individual member can propose their candidacy for an available,
 directly-elected TC seat. [0] (except the seven TC members who were
 elected for a one-year seat last April: Thierry Carrez, Jay Pipes,
 Vishvananda Ishaya, Michael Still, Jim Blair, Mark McClain, Devananda
 van der Veen) [1]
 
 Propose your candidacy by sending an email to the openstack-dev at
 lists.openstack.org mailing-list, with the subject: TC candidacy.
 Please start your own thread so we have one thread per candidate. Since
 there will be many people voting for folks with whom they might not have
 worked, including a platform or statement to help voters make an
 informed choice is recommended, though not required.
 
 NEW: In order to help the electorate learn more about the candidates we
 have posted a template of questions to which all candidates are
 requested to respond. [2]
 
 Anita and I will confirm candidates with an email to the candidate
 thread as well as create a link to the confirmed candidate's proposal
 email on the wikipage for this election. [1]
 
 The election will be held from October 10 through to 13:00 UTC October
 17, 2014. The electorate are the Foundation individual members that are
 also committers for one of the official programs projects [3] over the
 Icehouse-Juno timeframe (September 26, 2013 06:00 UTC to September 26,
 2014 05:59 UTC), as well as the extra-ATCs who are acknowledged by the
 TC. [4]
 
 Please see the wikipage for additional details about this election. [1]
 
 If you have any questions please be sure to either voice them on the
 mailing list or email Anita or myself [5] or contact Anita or myself on IRC.
 
 Thank you, and I look forward to reading your candidate proposals,
 Tristan
 
 [0] https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee
 [1] https://wiki.openstack.org/wiki/TC_Elections_October_2014
 [2]
 https://wiki.openstack.org/wiki/TC_Elections_October_2014#TC_Election_Questions
 [3]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=sept-2014-elections
 Note the tag for this repo, sept-2014-elections.
 [4]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs?id=sept-2014-elections
 [5] Anita: anteaya at anteaya dot info
  Tristan: tristan dot cacqueray at enovance dot com
 

Greetings,

I'd like to take this chance to gently ask all candidates (though it's
obviously not a requirement) to share their opinion on the new
governance discussion in their candidacy.

As a voter, I'm interested to know how the new candidates see our
governance model in the next 6 months and what changes, related to the
big tent discussion, they consider most important.

Thanks,
Flavio


-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] Quota management and enforcement across projects

2014-10-03 Thread Vishvananda Ishaya
The proposal in the past was to keep quota enforcement local, but to
put the resource limits into keystone. This seems like an obvious first
step to me. Then a shared library for enforcing quotas with decent
performance should be next. The quota calls in nova are extremely
inefficient right now and it will only get worse when we try to add
hierarchical projects and quotas.
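A rough sketch of that split, with the limits held centrally and the enforcement kept local (all names are illustrative, not a real keystone API):

```python
# Illustrative split: limits held centrally, enforcement done locally by
# each service. LIMITS stands in for the central store (keystone in the
# proposal); none of these names are a real API.
LIMITS = {('tenant-a', 'instances'): 10}

def get_limit(project_id, resource):
    # in the proposal this would be a (cached) call to keystone
    return LIMITS.get((project_id, resource), 0)

def check_quota(project_id, resource, in_use, requested):
    # local, cheap enforcement against the centrally stored limit
    return in_use + requested <= get_limit(project_id, resource)

print(check_quota('tenant-a', 'instances', 8, 2))  # True
print(check_quota('tenant-a', 'instances', 8, 3))  # False
```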

Vish

On Oct 3, 2014, at 7:53 AM, Duncan Thomas duncan.tho...@gmail.com wrote:

 Taking quota out of the service / adding remote calls for quota
 management is going to make things fragile - you've somehow got to
 deal with the cases where your quota manager is slow, goes away,
 hiccups, drops connections etc. You'll also need some way of
 reconciling actual usage against quota usage periodically, to detect
 problems.
 
 On 3 October 2014 15:03, Salvatore Orlando sorla...@nicira.com wrote:
 Hi,
 
 Quota management is currently one of those things where every openstack
 project does its own thing. While quotas are obviously managed in a similar
 way for each project, there are subtle differences which ultimately result
 in lack of usability.
 
 I recall that in the past there have been several calls for unifying quota
 management. The blueprint [1] for instance, hints at the possibility of
 storing quotas in keystone.
 On the other hand, the blazar project [2, 3] seems to aim at solving this
 problem for good enabling resource reservation and therefore potentially
 freeing openstack projects from managing and enforcing quotas.
 
 While Blazar is definitely a good thing to have, I'm not entirely sure we
 want to make it a required component for every deployment. Perhaps single
 projects should still be able to enforce quota. On the other hand, at least
 on paper, the idea of making Keystone THE endpoint for managing quotas,
 and then letting the various projects enforce them, sounds promising - is
 there any reason for which this blueprint is stalled to the point that it
 seems forgotten now?
 
 I'm coming to the mailing list with these random questions about quota
 management, for two reasons:
 1) despite developing and using openstack on a daily basis I'm still
 confused by quotas
 2) I've found a race condition in neutron quotas and the fix is not trivial.
 So, rather than start coding right away, it might probably make more sense
 to ask the community if there is already a known better approach to quota
 management - and obviously enforcement.
 
 Thanks in advance,
 Salvatore
 
 [1] https://blueprints.launchpad.net/keystone/+spec/service-metadata
 [2] https://wiki.openstack.org/wiki/Blazar
 [3] https://review.openstack.org/#/q/project:stackforge/blazar,n,z
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Duncan Thomas
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Joe Gordon
On Fri, Oct 3, 2014 at 6:07 AM, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com wrote:


 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com
 wrote:
  As promised at this week’s TC meeting, I have applied the various blog
 posts and mailing list threads related to changing our governance model to
 a series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own opinions,
 and look at how each proposal needs to be reflected in our current policies
 so we do not drop commitments we want to retain along with the processes we
 are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as a
 series rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps
 won’t make sense without the changes that come after (hey, just like code!).
 
  Doug
 
  [1]
 https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes

 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)

 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy


 After seeing Jay's idea of making a yaml file modeling things and talking
 to devananda about this I went ahead and tried to graph the relationships
 out.

 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file:
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png

 It turns out its really hard to figure out what the relationships are
 without digging deep into the code for each project, so I am sure I got a
 few things wrong (along with missing a lot of projects).


 The relationships are very important for setting up an optimal gate
 structure. I’m less convinced they are important for setting up the
 governance structure, and I do not think we want a specific gate
 configuration embedded in the governance structure at all. That’s why I’ve
 tried to describe general relationships (“optional inter-project
 dependences” vs. “strict co-dependent project groups” [1]) up until the
 very last patch in the series [2], which redefines the integrated release
 in terms of those other relationships and a base set of projects.


I agree the relationships are very important for gate structure and less so
for governance. I thought it would be nice to codify the relationships in a
machine-readable format so we can do things with it, like trying out
different rules and seeing how they would work. For example, we can already
make two groups of things that may be useful for testing:

* services that nothing depends on
* services that don't depend on other services

Latest graph: http://i.imgur.com/y8zmNIM.png
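
For instance, with a simplified "service -> services it depends on" mapping
(illustrative data only - the real openstack.yaml is richer and lives in the
repo above), both groups fall out of a little set arithmetic:

```python
# service -> set of services it depends on (illustrative data only)
depends_on = {
    'keystone': set(),
    'glance': {'keystone'},
    'neutron': {'keystone'},
    'nova': {'keystone', 'glance', 'neutron'},
    'horizon': {'keystone', 'nova', 'glance', 'neutron'},
}

all_services = set(depends_on)
depended_upon = set().union(*depends_on.values())

# services that nothing depends on
top_level = all_services - depended_upon

# services that don't depend on other services
base = {svc for svc, deps in depends_on.items() if not deps}
```

With that toy data, top_level comes out as {'horizon'} and base as
{'keystone'}.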


 Doug

 [1]
 https://review.openstack.org/#/c/125785/2/reference/project-testing-policies.rst
 [2] https://review.openstack.org/#/c/125789/


 -Deva

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Quota management and enforcement across projects

2014-10-03 Thread Salvatore Orlando
Thanks Vish,

this seems like a very reasonable first step as well - and since most projects
would be enforcing quotas in the same way, a shared library would be the
logical next step.
After all, this is essentially the same thing we do with authZ.

Duncan is expressing valid concerns which in my opinion can be addressed
with an appropriate design - and a decent implementation.
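
To make that concrete: the typical quota race is two requests both passing a
read-then-check before either records its usage. A toy sketch of the usual
mitigation (not the actual neutron fix) is to make the reservation a single
atomic check-and-increment:

```python
import threading


class ReservationTable(object):
    """Toy in-memory stand-in for a quota usage table.

    In a real DB backend the same effect comes from one atomic statement,
    e.g. UPDATE usage SET used = used + :delta
         WHERE resource = :r AND used + :delta <= :limit
    and checking the affected row count.
    """

    def __init__(self, limit):
        self._lock = threading.Lock()  # plays the role of the row lock
        self.limit = limit
        self.used = 0

    def reserve(self, delta=1):
        # check and increment under one lock: there is no window between
        # the check and the write for a concurrent request to slip through
        with self._lock:
            if self.used + delta > self.limit:
                return False
            self.used += delta
            return True
```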

Salvatore

On 3 October 2014 18:25, Vishvananda Ishaya vishvana...@gmail.com wrote:

 The proposal in the past was to keep quota enforcement local, but to
 put the resource limits into keystone. This seems like an obvious first
 step to me. Then a shared library for enforcing quotas with decent
 performance should be next. The quota calls in nova are extremely
 inefficient right now and it will only get worse when we try to add
 hierarchical projects and quotas.

 Vish

 On Oct 3, 2014, at 7:53 AM, Duncan Thomas duncan.tho...@gmail.com wrote:

  Taking quota out of the service / adding remote calls for quota
  management is going to make things fragile - you've somehow got to
  deal with the cases where your quota manager is slow, goes away,
  hiccups, drops connections etc. You'll also need some way of
  reconciling actual usage against quota usage periodically, to detect
  problems.
 
  On 3 October 2014 15:03, Salvatore Orlando sorla...@nicira.com wrote:
  Hi,
 
  Quota management is currently one of those things where every openstack
  project does its own thing. While quotas are obviously managed in a
 similar
  way for each project, there are subtle differences which ultimately
 result
  in lack of usability.
 
  I recall that in the past there have been several calls for unifying
 quota
  management. The blueprint [1] for instance, hints at the possibility of
  storing quotas in keystone.
  On the other hand, the blazar project [2, 3] seems to aim at solving
 this
  problem for good enabling resource reservation and therefore potentially
  freeing openstack projects from managing and enforcing quotas.
 
  While Blazar is definitely a good thing to have, I'm not entirely sure
 we
  want to make it a required component for every deployment. Perhaps
 single
  projects should still be able to enforce quota. On the other hand, at
 least
  on paper, the idea of making Keystone THE endpoint for managing
 quotas,
  and then letting the various project enforce them, sounds promising - is
  there any reason for which this blueprint is stalled to the point that
 it
  seems forgotten now?
 
  I'm coming to the mailing list with these random questions about quota
  management, for two reasons:
  1) despite developing and using openstack on a daily basis I'm still
  confused by quotas
  2) I've found a race condition in neutron quotas and the fix is not
 trivial.
  So, rather than start coding right away, it might probably make more
 sense
  to ask the community if there is already a known better approach to
 quota
  management - and obviously enforcement.
 
  Thanks in advance,
  Salvatore
 
  [1] https://blueprints.launchpad.net/keystone/+spec/service-metadata
  [2] https://wiki.openstack.org/wiki/Blazar
  [3] https://review.openstack.org/#/q/project:stackforge/blazar,n,z
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Duncan Thomas
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Eoghan Glynn


- Original Message -
 
 
 On Fri, Oct 3, 2014 at 6:07 AM, Doug Hellmann  d...@doughellmann.com 
 wrote:
 
 
 
 
 On Oct 3, 2014, at 12:46 AM, Joe Gordon  joe.gord...@gmail.com  wrote:
 
 
 
 
 
 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
 devananda@gmail.com  wrote:
 
 
 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann  d...@doughellmann.com 
 wrote:
  As promised at this week’s TC meeting, I have applied the various blog
  posts and mailing list threads related to changing our governance model to
  a series of patches against the openstack/governance repository [1].
  
  I have tried to include all of the inputs, as well as my own opinions, and
  look at how each proposal needs to be reflected in our current policies so
  we do not drop commitments we want to retain along with the processes we
  are shedding [2].
  
  I am sure we need more discussion, so I have staged the changes as a series
  rather than one big patch. Please consider the patches together when
  commenting. There are many related changes, and some incremental steps
  won’t make sense without the changes that come after (hey, just like
  code!).
  
  Doug
  
  [1]
  https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes
 
 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)
 
 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy
 
 
 After seeing Jay's idea of making a yaml file modeling things and talking to
 devananda about this I went ahead and tried to graph the relationships out.
 
 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file:
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png
 It turns out its really hard to figure out what the relationships are without
 digging deep into the code for each project, so I am sure I got a few things
 wrong (along with missing a lot of projects).
 
 The relationships are very important for setting up an optimal gate
 structure. I’m less convinced they are important for setting up the
 governance structure, and I do not think we want a specific gate
 configuration embedded in the governance structure at all. That’s why I’ve
 tried to describe general relationships (“optional inter-project
 dependences” vs. “strict co-dependent project groups” [1]) up until the very
 last patch in the series [2], which redefines the integrated release in
 terms of those other relationships and a base set of projects.
 
 
 I agree the relationships are very important for gate structure and less so
 for governance. I thought it would be nice to codify the relationships in a
 machine readable format so we can do things with it, like try making
 different rules and see how they would work. For example we can already make
 two groups of things that may be useful for testing:
 
 * services that nothing depends on
 * services that don't depend on other services
 
 Latest graph: http://i.imgur.com/y8zmNIM.png

This diagram is missing any relationships for ceilometer.

Ceilometer calls APIs provided by:

 * keystone
 * nova
 * glance
 * neutron
 * swift

Ceilometer consumes notifications from:

 * keystone
 * nova
 * glance
 * neutron
 * cinder
 * ironic
 * heat
 * sahara

Ceilometer serves incoming API calls from:

 * heat
 * horizon

Cheers,
Eoghan



[openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Chris Dent

On Fri, 3 Oct 2014, Joe Gordon wrote:


* services that nothing depends on
* services that don't depend on other services

Latest graph: http://i.imgur.com/y8zmNIM.png


I'm hesitant to open this can but it's just lying there waiting,
wiggling like good bait, so:

How are you defining dependency in that picture?

For example:

Many of those services expect[1] to be able to send notifications (or
be polled by) ceilometer[2]. We've got an ongoing thread about the need
to contractualize notifications. Are those contracts (or the desire
for them) a form of dependency? Should they be?

[1] It's not that it is a strict requirement but lots of people involved
with the other projects contribute code to ceilometer or make
changes in their own[3] project specifically to send info to
ceilometer.

[2] I'm not trying to defend ceilometer from slings here, just point out
a good example, since it has _no_ arrows.

[3] their own, that's hateful, let's have less of that.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Ceilometer] Adding pylint checking of new ceilometer patches

2014-10-03 Thread Neal, Phil
 From: Dina Belova [mailto:dbel...@mirantis.com]
 On Friday, October 03, 2014 2:53 AM
 
 Igor,
 
 Personally this idea looks really nice to me, as this will help to avoid
 strange code being merged and not found via reviewing process.
 
 Cheers,
 Dina
 
 On Fri, Oct 3, 2014 at 12:40 PM, Igor Degtiarov
 idegtia...@mirantis.com wrote:
 Hi folks!
 
 I try too guess do we need in ceilometer checking new patches for
 critical errors with pylint?
 
 As far as I know Nova and Sahara and others have such check. Actually
 it is not checking of all project but comparing of the number of
 errors without new patch and with it, and if diff is more than 0 then
 patch are not taken.

Looking a bit deeper, it seems that Nova struggled with false positives and 
resorted to https://review.openstack.org/#/c/28754/ , which layers some 
historical git checking on top of pylint's tendency to check only the latest 
commit. I can't say I'm too deeply versed in the code, but it's enough to make 
me wonder whether we want to go that direction, or should avoid the issues 
altogether?
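
For reference, the count-comparison Igor describes reduces to something like
the following (sketch only - the real job also has to run pylint against both
the base and the patched checkout, and the message format below is just an
assumed shape of pylint's output):

```python
def count_errors(report_lines):
    # count error-class records ([Exxxx ...]) in pylint output; assumes
    # the common one-record-per-line format 'path:line: [E....] message'
    return sum(1 for line in report_lines if '[E' in line)


def patch_regresses(base_report, patched_report):
    # the gate criterion: fail only when the patch adds net new errors,
    # so pre-existing errors (and false positives) don't block everyone
    return count_errors(patched_report) > count_errors(base_report)
```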

 
 I have taken as pattern Sahara's solution and proposed a patch for
 ceilometer:
 https://review.openstack.org/#/c/125906/
 
 Cheers,
 Igor Degtiarov
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Best regards,
 Dina Belova
 Software Engineer
 Mirantis Inc.


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Joe Gordon
On Fri, Oct 3, 2014 at 9:42 AM, Eoghan Glynn egl...@redhat.com wrote:



 - Original Message -
 
 
  On Fri, Oct 3, 2014 at 6:07 AM, Doug Hellmann  d...@doughellmann.com 
  wrote:
 
 
 
 
  On Oct 3, 2014, at 12:46 AM, Joe Gordon  joe.gord...@gmail.com  wrote:
 
 
 
 
 
  On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
  devananda@gmail.com  wrote:
 
 
  On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann  d...@doughellmann.com 
  wrote:
   As promised at this week’s TC meeting, I have applied the various blog
   posts and mailing list threads related to changing our governance
 model to
   a series of patches against the openstack/governance repository [1].
  
   I have tried to include all of the inputs, as well as my own opinions,
 and
   look at how each proposal needs to be reflected in our current
 policies so
   we do not drop commitments we want to retain along with the processes
 we
   are shedding [2].
  
   I am sure we need more discussion, so I have staged the changes as a
 series
   rather than one big patch. Please consider the patches together when
   commenting. There are many related changes, and some incremental steps
   won’t make sense without the changes that come after (hey, just like
   code!).
  
   Doug
  
   [1]
  
 https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
   [2] https://etherpad.openstack.org/p/big-tent-notes
 
  I've summed up a lot of my current thinking on this etherpad as well
  (I should really blog, but hey ...)
 
  https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy
 
 
  After seeing Jay's idea of making a yaml file modeling things and
 talking to
  devananda about this I went ahead and tried to graph the relationships
 out.
 
  repo: https://github.com/jogo/graphing-openstack
  preliminary YAML file:
  https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
  sample graph: http://i.imgur.com/LwlkE73.png
  It turns out its really hard to figure out what the relationships are
 without
  digging deep into the code for each project, so I am sure I got a few
 things
  wrong (along with missing a lot of projects).
 
  The relationships are very important for setting up an optimal gate
  structure. I’m less convinced they are important for setting up the
  governance structure, and I do not think we want a specific gate
  configuration embedded in the governance structure at all. That’s why
 I’ve
  tried to describe general relationships (“optional inter-project
  dependences” vs. “strict co-dependent project groups” [1]) up until the
 very
  last patch in the series [2], which redefines the integrated release in
  terms of those other relationships and a base set of projects.
 
 
  I agree the relationships are very important for gate structure and less
 so
  for governance. I thought it would be nice to codify the relationships
 in a
  machine readable format so we can do things with it, like try making
  different rules and see how they would work. For example we can already
 make
  two groups of things that may be useful for testing:
 
  * services that nothing depends on
  * services that don't depend on other services
 
  Latest graph: http://i.imgur.com/y8zmNIM.png

 This diagram is missing any relationships for ceilometer.


It sure is; the graph is very much a work in progress. Here is the YAML that
generates it:
https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
Want to update that to include ceilometer's relationships?



 Ceilometer calls APIs provided by:

  * keystone
  * nova
  * glance
  * neutron
  * swift

 Ceilometer consumes notifications from:

  * keystone
  * nova
  * glance
  * neutron
  * cinder
  * ironic
  * heat
  * sahara

 Ceilometer serves incoming API calls from:

  * heat
  * horizon

 Cheers,
 Eoghan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Doug Hellmann

On Oct 3, 2014, at 12:26 PM, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 On Fri, Oct 3, 2014 at 6:07 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
 devananda@gmail.com wrote:
 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com wrote:
  As promised at this week’s TC meeting, I have applied the various blog 
  posts and mailing list threads related to changing our governance model to 
  a series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own opinions, and 
  look at how each proposal needs to be reflected in our current policies so 
  we do not drop commitments we want to retain along with the processes we 
  are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as a 
  series rather than one big patch. Please consider the patches together 
  when commenting. There are many related changes, and some incremental 
  steps won’t make sense without the changes that come after (hey, just like 
  code!).
 
  Doug
 
  [1] 
  https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes
 
 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)
 
 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy
 
 
 After seeing Jay's idea of making a yaml file modeling things and talking to 
 devananda about this I went ahead and tried to graph the relationships out.
 
 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file: 
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png
  
 It turns out its really hard to figure out what the relationships are 
 without digging deep into the code for each project, so I am sure I got a 
 few things wrong (along with missing a lot of projects).
 
 The relationships are very important for setting up an optimal gate 
 structure. I’m less convinced they are important for setting up the 
 governance structure, and I do not think we want a specific gate 
 configuration embedded in the governance structure at all. That’s why I’ve 
 tried to describe general relationships (“optional inter-project dependences” 
 vs. “strict co-dependent project groups” [1]) up until the very last patch in 
 the series [2], which redefines the integrated release in terms of those 
 other relationships and a base set of projects.
 
 
 I agree the relationships are very important for gate structure and less so 
 for governance. I thought it would be nice to codify the relationships in a 
 machine readable format so we can do things with it, like try making 
 different rules and see how they would work.  For example we can already make 
 two groups of things that may be useful for testing:
 
 * services that nothing depends on
 * services that don't depend on other services
 
 Latest graph: http://i.imgur.com/y8zmNIM.png

Absolutely. I love these sorts of dependency graphs, and used them extensively 
when working out library relationships before we started graduations in juno.

Doug

  
 Doug
 
 [1] 
 https://review.openstack.org/#/c/125785/2/reference/project-testing-policies.rst
 [2] https://review.openstack.org/#/c/125789/
 
 
 -Deva
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Joe Gordon
On Fri, Oct 3, 2014 at 9:51 AM, Chris Dent chd...@redhat.com wrote:

 On Fri, 3 Oct 2014, Joe Gordon wrote:

  * services that nothing depends on
 * services that don't depend on other services

 Latest graph: http://i.imgur.com/y8zmNIM.png


 I'm hesitant to open this can but it's just lying there waiting,
 wiggling like good bait, so:

 How are you defining dependency in that picture?


data is coming from here:
https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
and the key is here: https://github.com/jogo/graphing-openstack

Note: ceilometer has no relationships in the graph because I wasn't sure what
exactly they were (which are required and which are optional, etc.), not
because there are none. It turns out it's not easy to find this information
in an easily digestible format.


 For example:

 Many of those services expect[1] to be able to send notifications (or
 be polled by) ceilometer[2]. We've got an ongoing thread about the need
 to contractualize notifications. Are those contracts (or the desire
 for them) a form of dependency? Should they be?


So in the case of notifications, I think that is a Ceilometer CAN-USE Nova
THROUGH notifications




 [1] It's not that it is a strict requirement but lots of people involved
 with the other projects contribute code to ceilometer or make
 changes in their own[3] project specifically to send info to
 ceilometer.

 [2] I'm not trying to defend ceilometer from slings here, just point out
 a good example, since it has _no_ arrows.

 [3] their own, that's hateful, let's have less of that.

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Ceilometer] Adding pylint checking of new ceilometer patches

2014-10-03 Thread Doug Hellmann

On Oct 3, 2014, at 1:09 PM, Neal, Phil phil.n...@hp.com wrote:

 From: Dina Belova [mailto:dbel...@mirantis.com]
 On Friday, October 03, 2014 2:53 AM
 
 Igor,
 
 Personally this idea looks really nice to me, as this will help to avoid
 strange code being merged and not found via reviewing process.
 
 Cheers,
 Dina
 
 On Fri, Oct 3, 2014 at 12:40 PM, Igor Degtiarov
 idegtia...@mirantis.com wrote:
 Hi folks!
 
 I try too guess do we need in ceilometer checking new patches for
 critical errors with pylint?
 
 As far as I know Nova and Sahara and others have such check. Actually
 it is not checking of all project but comparing of the number of
  errors without new patch and with it, and if diff is more than 0 then
 patch are not taken.
 
 Looking a bit deeper it seems that Nova struggled with false positives and 
 resorted to https://review.openstack.org/#/c/28754/ , which layers some 
 historical checking of git on top of pylint's tendency to check only the 
 latest commit. I can't say I'm too deeply versed in the code,  but it's 
 enough to make me wonder if we want to go that direction and avoid the issues 
 altogether?

I haven’t looked at it in a while, but I’ve never been particularly excited by 
pylint. It’s extremely picky, encourages enforcing some questionable rules 
(arbitrary limits on variable name length?), and reports a lot of false 
positives. That combination tends to make writing software annoying without 
helping with quality in any real way.

Doug

 
 
 I have taken as pattern Sahara's solution and proposed a patch for
 ceilometer:
 https://review.openstack.org/#/c/125906/
 
 Cheers,
 Igor Degtiarov
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Best regards,
 Dina Belova
 Software Engineer
 Mirantis Inc.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Chris Dent

On Fri, 3 Oct 2014, Joe Gordon wrote:


data is coming from here:
https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
and the key is here: https://github.com/jogo/graphing-openstack


Cool, thanks.


Many of those services expect[1] to be able to send notifications (or
be polled by) ceilometer[2]. We've got an ongoing thread about the need
to contractualize notifications. Are those contracts (or the desire
for them) a form of dependency? Should they be?



So in the case of notifications, I think that is a Ceilometer CAN-USE Nova
THROUGH notifications


Your statement here is part of the reason I asked. I think it is
possible to argue that the dependency has the opposite order: Nova might
like to use Ceilometer to keep metrics via notifications or perhaps:
Nova CAN-USE Ceilometer FOR telemetry THROUGH notifications and polling.

This is perhaps not the strict technological representation of the
dependency, but it represents the sort of pseudo-social
relationships between projects: Nova desires for Ceilometer (or at
least something doing telemetry) to exist.

Ceilometer itself is^wshould be agnostic about what sort of metrics are
coming its way. It should accept them, potentially transform them, store
them, and make them available for later use (including immediately). It
doesn't^wshouldn't really care if Nova exists or not.

There are probably lots of other relationships of this form between
other services, thus the question: Is a use-of-notifications
something worth tracking? I would say yes.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] Quota management and enforcement across projects

2014-10-03 Thread Morgan Fainberg
Keeping the enforcement local (the same way policy works today) helps limit
the fragility - big +1 there.

I also agree with Vish, we need a uniform way to talk about quota
enforcement similar to how we have a uniform policy language / enforcement
model (yes I know it's not perfect, but it's far closer to uniform than
quota management is).

If there is still interest in placing quotas in Keystone, let's talk about
how that will work and what will be needed from Keystone. The previous
attempt didn't get much traction and stalled out early in implementation.
If we want to revisit this, let's make sure we have the resources needed and
spec(s) in progress / info on etherpads (similar to how the multitenancy
stuff was handled at the last summit) as early as possible.
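
For the Keystone side, the minimum seems to be service-registered defaults
plus per-project overrides. A rough sketch of that data model (entirely
hypothetical - this is exactly the sort of thing a spec would need to pin
down):

```python
# default limits each service registers once (hypothetical shape)
REGISTERED_DEFAULTS = {
    ('compute', 'instances'): 10,
    ('compute', 'cores'): 20,
    ('network', 'ports'): 50,
}

# per-project overrides an operator sets on top of the defaults
PROJECT_OVERRIDES = {
    ('proj-a', 'compute', 'instances'): 100,
}


def effective_limit(project_id, service, resource):
    # an override wins; otherwise fall back to the registered default
    override = PROJECT_OVERRIDES.get((project_id, service, resource))
    if override is not None:
        return override
    return REGISTERED_DEFAULTS.get((service, resource))
```

Services would then enforce locally against effective_limit(), keeping the
central store out of the hot path except for (cacheable) limit reads.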

Cheers,
Morgan

Sent via mobile

On Friday, October 3, 2014, Salvatore Orlando sorla...@nicira.com wrote:

 Thanks Vish,

 this seems a very reasonable first step as well - and since most projects
 would be enforcing quotas in the same way, the shared library would be the
 logical next step.
 After all this is quite the same thing we do with authZ.

 Duncan is expressing valid concerns which in my opinion can be addressed
 with an appropriate design - and a decent implementation.

 Salvatore

 On 3 October 2014 18:25, Vishvananda Ishaya vishvana...@gmail.com wrote:

 The proposal in the past was to keep quota enforcement local, but to
 put the resource limits into keystone. This seems like an obvious first
 step to me. Then a shared library for enforcing quotas with decent
 performance should be next. The quota calls in nova are extremely
 inefficient right now and it will only get worse when we try to add
 hierarchical projects and quotas.

 Vish

 On Oct 3, 2014, at 7:53 AM, Duncan Thomas duncan.tho...@gmail.com wrote:

  Taking quota out of the service / adding remote calls for quota
  management is going to make things fragile - you've somehow got to
  deal with the cases where your quota manager is slow, goes away,
  hiccups, drops connections etc. You'll also need some way of
  reconciling actual usage against quota usage periodically, to detect
  problems.
 
  On 3 October 2014 15:03, Salvatore Orlando sorla...@nicira.com wrote:
  Hi,
 
  Quota management is currently one of those things where every OpenStack
  project does its own thing. While quotas are obviously managed in a similar
  way for each project, there are subtle differences which ultimately result
  in a lack of usability.
 
  I recall that in the past there have been several calls for unifying quota
  management. The blueprint [1], for instance, hints at the possibility of
  storing quotas in Keystone.
  On the other hand, the Blazar project [2, 3] seems to aim at solving this
  problem for good, enabling resource reservation and therefore potentially
  freeing OpenStack projects from managing and enforcing quotas.
 
  While Blazar is definitely a good thing to have, I'm not entirely sure we
  want to make it a required component for every deployment. Perhaps single
  projects should still be able to enforce quotas. On the other hand, at
  least on paper, the idea of making Keystone THE endpoint for managing
  quotas, and then letting the various projects enforce them, sounds
  promising - is there any reason this blueprint has stalled to the point
  that it now seems forgotten?
 
  I'm coming to the mailing list with these random questions about quota
  management for two reasons:
  1) despite developing and using OpenStack on a daily basis I'm still
  confused by quotas
  2) I've found a race condition in Neutron quotas and the fix is not trivial.
  So, rather than start coding right away, it probably makes more sense to
  ask the community if there is already a known better approach to quota
  management - and obviously enforcement.
 
  Thanks in advance,
  Salvatore
 
  [1] https://blueprints.launchpad.net/keystone/+spec/service-metadata
  [2] https://wiki.openstack.org/wiki/Blazar
  [3] https://review.openstack.org/#/q/project:stackforge/blazar,n,z
 
 
 
 
 
  --
  Duncan Thomas
 


 

Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent model)

2014-10-03 Thread Chris Friesen

On 10/03/2014 11:38 AM, Chris Dent wrote:

On Fri, 3 Oct 2014, Joe Gordon wrote:



Many of those services expect[1] to be able to send notifications (or
be polled by) ceilometer[2]. We've got an ongoing thread about the need
to contractualize notifications. Are those contracts (or the desire
for them) a form of dependency? Should they be?



 So in the case of notifications, I think that is a Ceilometer CAN-USE
 Nova THROUGH notifications


Your statement here is part of the reason I asked. I think it is
possible to argue that the dependency has the opposite order: Nova might
like to use Ceilometer to keep metrics via notifications or perhaps:
Nova CAN-USE Ceilometer FOR telemetry THROUGH notifications and polling.

This is perhaps not the strict technological representation of the
dependency, but it represents the sort of pseudo-social
relationships between projects: Nova desires for Ceilometer (or at
least something doing telemetry) to exist.


This may be quibbling, but I would suggest that it is the end-user that 
may want something doing telemetry to exist.


Nova proper doesn't really care about telemetry.  Nova exports telemetry 
because end-users want the information to be available for use by other 
services.  Nova itself doesn't actually make use of it or call out to 
services that make use of it.


Now something like Heat really depends on telemetry.  It wants to know 
if an instance didn't kick the watchdog timer, or if the webserver keeps 
crashing, or other information provided by telemetry.



Ceilometer itself is^wshould be agnostic about what sort of metrics are
coming its way. It should accept them, potentially transform them, store
them, and make them available for later use (including immediately). It
doesn't^wshouldn't really care if Nova exists or not.

There are probably lots of other relationships of this form between
other services, thus the question: Is a use-of-notifications
something worth tracking? I would say yes.


Sure, because those notifications are a form of API and those should be 
versioned and tracked appropriately. Nova can't arbitrarily change the 
format of the notifications it sends out because that would break anyone 
that cares about the contents of those notifications.
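
As a rough illustration of what a versioned notification contract buys
consumers - the payload shape and field names below are hypothetical, not
Nova's actual notification format:

```python
def handle_instance_notification(payload):
    """Consume a (hypothetical) versioned notification payload.

    Checking the declared version before touching the body is what a
    versioned contract buys you: an incompatible producer change is
    rejected explicitly instead of failing somewhere deep inside the
    consumer's parsing code.
    """
    major, _, _ = payload.get("version", "0.0").partition(".")
    if int(major) != 1:
        raise ValueError("unsupported notification version: %s"
                         % payload.get("version"))
    body = payload["body"]
    return {"uuid": body["instance_uuid"], "state": body["state"]}
```

Under such a scheme, bumping the minor version stays compatible, while a
major bump is an explicit signal that consumers must be updated.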


Chris



Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent model)

2014-10-03 Thread Devananda van der Veen
On Fri, Oct 3, 2014 at 11:18 AM, Chris Friesen
chris.frie...@windriver.com wrote:
 On 10/03/2014 11:38 AM, Chris Dent wrote:

 On Fri, 3 Oct 2014, Joe Gordon wrote:


 Many of those services expect[1] to be able to send notifications (or
 be polled by) ceilometer[2]. We've got an ongoing thread about the need
 to contractualize notifications. Are those contracts (or the desire
 for them) a form of dependency? Should they be?


  So in the case of notifications, I think that is a Ceilometer CAN-USE
  Nova THROUGH notifications


 Your statement here is part of the reason I asked. I think it is
 possible to argue that the dependency has the opposite order: Nova might
 like to use Ceilometer to keep metrics via notifications or perhaps:
 Nova CAN-USE Ceilometer FOR telemetry THROUGH notifications and polling.

 This is perhaps not the strict technological representation of the
 dependency, but it represents the sort of pseudo-social
 relationships between projects: Nova desires for Ceilometer (or at
 least something doing telemetry) to exist.


 This may be quibbling, but I would suggest that it is the end-user that may
 want something doing telemetry to exist.

 Nova proper doesn't really care about telemetry.  Nova exports telemetry
 because end-users want the information to be available for use by other
 services.  Nova itself doesn't actually make use of it or call out to
 services that make use of it.

Yup.

 Now something like Heat really depends on telemetry.  It wants to know if an
 instance didn't kick the watchdog timer, or if the webserver keeps crashing,
 or other information provided by telemetry.


Now that is a really good point. /me tosses up a pull request for that change.



Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent model)

2014-10-03 Thread Devananda van der Veen
So a bit of background here. This began from thinking about functional
dependencies, and pondering whether a map of the dependency graph of
our projects could inform our gating structure, specifically to
encourage (or dare I say, actually force) all of us (the project
teams) to become more cognizant of the API contracts we are making
with each other, and the pain it causes when we break those contracts.

Let's not extend this exercise to a gigantic
everything-needs-everything-to-do-everything picture, which is where
it's heading now. Sure, telemetry is important for operators, and in
no way am I saying anything else when I say: for Nova to fulfill its
primary goal, telemetry is not necessary. It is optional. Desired, but
optional.

Even saying nova CAN-USE ceilometer is incorrect, though, since Nova
isn't actually using Ceilometer to accomplish any functional task
within its domain. More correct would be to say Ceilometer
CAN-ACCEPT notifications FROM Nova. Incidentally, this is very
similar to the description of Heat and Horizon, except that, instead
of consuming a public API, Ceilometer is consuming something else
(internal notifications).
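
A toy model of the kind of relationship map being discussed - not the schema
of any real file, just an illustration of the idea - and how such a list
could be rendered as a graph:

```python
# Each edge: (consumer, relation, provider). The relation string is
# free-form, e.g. "CAN-USE" through a public API, or the weaker
# "CAN-ACCEPT notifications FROM" described above. All entries here
# are examples, not an authoritative dependency map.
EDGES = [
    ("Nova", "CAN-USE", "Glance"),
    ("Nova", "CAN-USE", "Neutron"),
    ("Ceilometer", "CAN-ACCEPT notifications FROM", "Nova"),
]


def to_dot(edges):
    """Render the relationship list as Graphviz DOT text."""
    lines = ["digraph openstack {"]
    for consumer, relation, provider in edges:
        lines.append('  "%s" -> "%s" [label="%s"];'
                     % (consumer, provider, relation))
    lines.append("}")
    return "\n".join(lines)


print(to_dot(EDGES))
```

Feeding the output to `dot -Tpng` would produce a picture along the lines of
the sample graph linked below in the thread.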

-Deva

On Fri, Oct 3, 2014 at 10:38 AM, Chris Dent chd...@redhat.com wrote:
 On Fri, 3 Oct 2014, Joe Gordon wrote:

 data is coming from here:
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 and the key is here: https://github.com/jogo/graphing-openstack


 Cool, thanks.

 Many of those services expect[1] to be able to send notifications (or
 be polled by) ceilometer[2]. We've got an ongoing thread about the need
 to contractualize notifications. Are those contracts (or the desire
 for them) a form of dependency? Should they be?


 So in the case of notifications, I think that is a Ceilometer CAN-USE Nova
 THROUGH notifications


 Your statement here is part of the reason I asked. I think it is
 possible to argue that the dependency has the opposite order: Nova might
 like to use Ceilometer to keep metrics via notifications or perhaps:
 Nova CAN-USE Ceilometer FOR telemetry THROUGH notifications and polling.

 This is perhaps not the strict technological representation of the
 dependency, but it represents the sort of pseudo-social
 relationships between projects: Nova desires for Ceilometer (or at
 least something doing telemetry) to exist.

 Ceilometer itself is^wshould be agnostic about what sort of metrics are
 coming its way. It should accept them, potentially transform them, store
 them, and make them available for later use (including immediately). It
 doesn't^wshouldn't really care if Nova exists or not.

 There are probably lots of other relationships of this form between
 other services, thus the question: Is a use-of-notifications
 something worth tracking? I would say yes.


 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent




Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Devananda van der Veen
On Fri, Oct 3, 2014 at 6:25 AM, Anne Gentle a...@openstack.org wrote:


 On Fri, Oct 3, 2014 at 8:07 AM, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com wrote:


 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen
 devananda@gmail.com wrote:

 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com
 wrote:
  As promised at this week’s TC meeting, I have applied the various blog
  posts and mailing list threads related to changing our governance model 
  to a
  series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own opinions,
  and look at how each proposal needs to be reflected in our current 
  policies
  so we do not drop commitments we want to retain along with the processes 
  we
  are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as a
  series rather than one big patch. Please consider the patches together 
  when
  commenting. There are many related changes, and some incremental steps 
  won’t
  make sense without the changes that come after (hey, just like code!).
 
  Doug
 
  [1]
  https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes

 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)

 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy


 After seeing Jay's idea of making a yaml file modeling things and talking
 to devananda about this I went ahead and tried to graph the relationships
 out.

 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file:
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png

  It turns out it's really hard to figure out what the relationships are
 without digging deep into the code for each project, so I am sure I got a
 few things wrong (along with missing a lot of projects).


 The relationships are very important for setting up an optimal gate
 structure. I’m less convinced they are important for setting up the
 governance structure, and I do not think we want a specific gate
 configuration embedded in the governance structure at all. That’s why I’ve
 tried to describe general relationships (“optional inter-project
  dependencies” vs. “strict co-dependent project groups” [1]) up until the very
 last patch in the series [2], which redefines the integrated release in
 terms of those other relationships and a base set of projects.


 I'm reading and reading and reading and my thoughts keep returning to,
 we're optimizing only for dev. :)

 I need to either get over that or decide what parts need tweaking for docs
 and support optimization. I'll get going on reviews -- thanks a bunch for
 all this compilation and for the good blog writing. Much appreciated.


Actually, while it looks like we're optimizing for dev, I'm not sure
that's actually what we're doing. But also, the patches Doug proposed
didn't capture several things I'm thinking about, so it's possible no
one else knows that yet. I've commented in gerrit, but I'll try to
also clarify it a bit here.

==
(copying some comments from https://review.openstack.org/#/c/125787/2 )

I expressly do NOT want official project/team status to in any way
indicate that the horizontal teams (docs, qa, etc) suddenly need to
start supporting those projects. That will not scale, as we're already
seeing.

I believe we need to let in lots of code and lots of teams of people
who produce such code into the big tent of OpenStack, without
burgeoning the Integrated Release by co-gating all of that, or
requiring the horizontal-scaling-efforts (docs, qa, etc) to take
responsibility for them, and also without indicating (whether
explicitly or implicitly) that those teams are The One True Way within
our community to do the thing they do.

==

How does this relate to the gate structure changes that Monty and I
came up with two days ago?

When a change in one project breaks another project, because of poorly
defined API contracts, non-documented expected behaviors, and so on,
which we see all the time today, it's because we (developers) are
hitting some of the same problems that our users would hit when they
use these services. Right now, we're co-gating everything, so we are
able to gloss over these problems (to a degree, but not entirely). It
will be somewhat painful when we stop doing that. Projects will land
changes that break other projects' gates, which is not OK, and we'll
realize where a lot of unintended / undocumented inter-dependencies
are, and we'll have to make stable APIs around that. I think this is
actually very closely in line with what Jay's been proposing as well
(though I'm not yet sold on the tags part).

-Deva


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Devananda van der Veen
On Fri, Oct 3, 2014 at 6:47 AM, Sean Dague s...@dague.net wrote:
 On 10/03/2014 09:25 AM, Anne Gentle wrote:


 On Fri, Oct 3, 2014 at 8:07 AM, Doug Hellmann d...@doughellmann.com
 mailto:d...@doughellmann.com wrote:


 On Oct 3, 2014, at 12:46 AM, Joe Gordon joe.gord...@gmail.com
 mailto:joe.gord...@gmail.com wrote:


 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen
 devananda@gmail.com mailto:devananda@gmail.com wrote:

 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann
 d...@doughellmann.com mailto:d...@doughellmann.com wrote:
  As promised at this week’s TC meeting, I have applied the various 
 blog posts and mailing list threads related to changing our governance 
 model to a series of patches against the openstack/governance repository 
 [1].
 
  I have tried to include all of the inputs, as well as my own 
 opinions, and look at how each proposal needs to be reflected in our 
 current policies so we do not drop commitments we want to retain along with 
 the processes we are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes 
 as a series rather than one big patch. Please consider the patches together 
 when commenting. There are many related changes, and some incremental steps 
 won’t make sense without the changes that come after (hey, just like code!).
 
  Doug
 
  [1] 
 https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes

 I've summed up a lot of my current thinking on this etherpad
 as well
 (I should really blog, but hey ...)

 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy


 After seeing Jay's idea of making a yaml file modeling things and
 talking to devananda about this I went ahead and tried to graph
 the relationships out.

 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML
 file: 
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png

  It turns out it's really hard to figure out what the relationships
 are without digging deep into the code for each project, so I am
 sure I got a few things wrong (along with missing a lot of projects).

 The relationships are very important for setting up an optimal gate
 structure. I’m less convinced they are important for setting up the
 governance structure, and I do not think we want a specific gate
 configuration embedded in the governance structure at all. That’s
 why I’ve tried to describe general relationships (“optional
  inter-project dependencies” vs. “strict co-dependent project groups”
 [1]) up until the very last patch in the series [2], which redefines
 the integrated release in terms of those other relationships and a
 base set of projects.


 I'm reading and reading and reading and my thoughts keep returning to,
 we're optimizing only for dev. :)

 I need to either get over that or decide what parts need tweaking for
 docs and support optimization. I'll get going on reviews -- thanks a
 bunch for all this compilation and for the good blog writing. Much
 appreciated.

 The relationships are also quite important for deployment units, because
 we're talking about what minimal set of things we're going to say work
 together. And we're going to be dictating the minimum lock step upgrade
 unit.


Yes

Integration in this sense (being part of that co-dependent group) is
actually a burden both on projects and on developers. Want to upgrade
Nova? Crap. I have to upgrade these other things too. Want to upgrade
Swift? Sure, that's easy just upgrade that one thing.


  Any project that fully stands on its own (like Swift or Ironic, given
  that keystone is optional) can be stood up on its own. Ok, they go in
  one bucket and you can tell people, you want this function, just
 install this project, it's a vertical on it's own. Heat works quite well
 against a compute stack you don't run yourself (teams doing this in HP
 all the time). I expect Zaqar to be like Swift, and be a thing you can
 just have.

 That's not the case for the compute stack, for better or worse. And,
 based on the User Surveys, the compute stack is what most people are
 trying to get out of OpenStack right now. So we should unconfuse that,
 create a smaller basic building block (that people can understand), and
 provide guidance on how you could expand your function with our vast
 array of great expansion sets.

++



 OpenStack is enough parts that you can mix and match as much as you
 want, but much like the 600 config options in Nova, we really can't
 document every combination of things.

 -Sean

 --
 Sean Dague
 http://dague.net

 

Re: [openstack-dev] [Glance] Let's deprecate the GridFS store (Fwd: [Openstack] [Glance] GridFS Store: Anyone using it?)

2014-10-03 Thread Nikhil Komawar
Thanks for bringing this up, Flavio! Let's wait another week to get some
feedback. Also, we can discuss the fate of this driver in our weekly meeting,
to be on the safe side.

Thanks,
-Nikhil


From: Flavio Percoco [fla...@redhat.com]
Sent: Friday, October 03, 2014 4:09 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance] Let's deprecate the GridFS store (Fwd: 
[Openstack] [Glance] GridFS Store: Anyone using it?)

Greetings,

I've sent the email below to the OpenStack users mailing list trying to
get feedback from users on what to do with the GridFS driver.

We haven't had bugs filed on this driver, nor have users provided any
feedback. This all leads me to think that it's actually not being used
at all. With that in mind, I propose marking this driver as deprecated
in Kilo.

I wonder if we could simply remove it based on the above _assumption_.
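
For what it's worth, the usual shape of a deprecation cycle like this is a
warning shim that keeps the driver importable for one release while telling
any remaining users it is going away. The class below is a stand-in for
illustration, not Glance's actual driver class:

```python
import warnings


class GridFSStore:
    """Stand-in for the store driver under discussion (not the real class)."""

    def __init__(self):
        # Emitting DeprecationWarning for one cycle before removal gives
        # any silent users a release to migrate or speak up.
        warnings.warn(
            "The GridFS store driver is deprecated and will be removed "
            "in a future release.",
            DeprecationWarning,
            stacklevel=2)
```

If nobody complains during the deprecation cycle, removal in the following
release is uncontroversial.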

Cheers,
Flavio


 Original Message 
Subject: [Openstack] [Glance] GridFS Store: Anyone using it?
Date: Fri, 26 Sep 2014 09:56:52 +0200
From: Flavio Percoco fla...@redhat.com
Reply-To: fla...@redhat.com
Organization: Red Hat
To: openst...@lists.openstack.org

Greetings,

I'm reaching out to see if anyone is using Glance's GridFS store driver
and has any feedback. The driver has been updated with the latest API
changes but no fixes or feedback has been received.

If no one is using it, I'll propose removing it for the Kilo release.
Drivers require maintenance, and hence lots of time.

Cheers,
Flavio

--
@flaper87
Flavio Percoco

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack






[openstack-dev] [Neutron] IPv6 bug fixes that would be nice to have in Juno

2014-10-03 Thread Henry Gessau
There are some fixes for IPv6 bugs that unfortunately missed the RC1 cut.
These bugs are quite important for IPv6 users and therefore I would like to
lobby for getting them into a possible RC2 of Neutron Juno.

These are low-risk fixes that would not jeopardize the stability of Neutron.

1. Network:dhcp port is not assigned EUI64 IPv6 address for SLAAC subnet
Bug: https://bugs.launchpad.net/neutron/+bug/1330826
Fix: https://review.openstack.org/101433

2. Cannot set only one of IPv6 attributes while second is None
Bug: https://bugs.launchpad.net/neutron/+bug/1363064
Fix: https://review.openstack.org/117799

The third one will probably not be allowed since it requires a string freeze
exception, but I will mention it anyway ...

3. IPv6 slaac is broken when subnet is less than /64
Bug: https://bugs.launchpad.net/neutron/+bug/1357084
Fix: https://review.openstack.org/116525



Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Devananda van der Veen
On Fri, Oct 3, 2014 at 9:05 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 10/03/2014 10:06 AM, Chris Dent wrote:

 On Fri, 3 Oct 2014, Anne Gentle wrote:

 I'm reading and reading and reading and my thoughts keep returning to,
 we're optimizing only for dev. :)


 Yes, +many.


 plus infinity.

 In my reading it seems like we are trying to optimize the process for
 developers which is exactly the opposite of what we want to be doing if
 we want to address the perceived quality problems that we see. We should
 be optimizing for the various user groups (which, I admit, have been
 identified pretty well in some of the blog posts).


 Precisely.

 This would, of course, mean enhancing the docs (and other cross
 project) process...


 Yep! This is something I harped on quite a bit in my second blog post.

 At the moment we're trying to create governance structures that
 incrementally improve the existing model for how development is
 being done.

 I think we should consider more radical changes, changes which allow
 us to work on what users want: an OpenStack that works.


 Yay!

This ++

Also, I think what I've proposed does this. Because, as Sean already
pointed out, our user surveys still say that most users are trying to
get compute+identity+images+network when they try to get an OpenStack
that works. Tightening the focus of the horizontally-scaling teams
(qa, docs) to focus on that for a bit will help deliver a better
experience to users.

Expecting those horizontal teams to scale to accommodate every project
under the big-tent umbrella is ludicrous, and I don't think anyone's
expecting that to actually work out. I would much rather make it very
clear to projects which are outside of what I keep wanting to call
ring compute that they need to do their own docs, qa, etc - but they
should be following the same workflow patterns and participating with
the qa and doc teams working on ring compute. That collaboration
makes them part of OpenStack, regardless of whether they co-gate with
Nova or release on the same cadence as Nova.



 To do that I think we need to figure out two things:

 * how to fail faster
 * how to stop thinking of ourselves as being on particular projects

 More yay!




 I got hired to work on telemetry, but I've managed to do most of my
 work in QA related things because what's the point of making new
 stuff if you can't test it reliably? What I'd really like to say my
 job is is making OpenStack the best it possibly can be.

 If we keep focusing on the various services as entangled but separate
 and competing interests rather than on how to make OpenStack good, we're
 missing the point and the boat.

That's not the point in my head, so let me clarify:

I'm essentially saying this:

Let's STOP co-gating the world, and only co-gate on non-optional
functional dependency boundaries between projects, when those projects
choose to co-gate with each other because the developers trust each
other.

Running a project's functional tests only against changes in that
project, and encouraging (but leaving it up to the projects to decide
whether they mutually run) two-service integration tests where there
is an actual functional dependency between two projects, means each
project actually has to unwrap its assumptions about the other
project's behavior. We'll quickly learn where there are unstated
assumptions about behavior as projects break other projects they
aren't co-gating with, and from that, we'll start shoring up those
pain points for our users.
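
One way to picture the proposed gating rule - co-gate only across required
functional dependency boundaries - is the small sketch below. The dependency
list and the required/optional flags are invented for illustration, not a
statement about the actual projects:

```python
# Hypothetical dependency map: (project, depends_on, required). Only a
# required (non-optional) functional dependency produces a co-gating
# job; optional integrations are tested on the consuming side only.
DEPS = [
    ("nova", "glance", True),
    ("nova", "neutron", True),
    ("ceilometer", "nova", False),   # optional: notifications only
]


def cogate_jobs(deps):
    """Return the pairs of projects that should share a two-service gate."""
    jobs = set()
    for project, dependency, required in deps:
        if required:
            jobs.add((project, dependency))
    return sorted(jobs)
```

Everything not in that list would still run its own functional tests on
every change, just without blocking (or being blocked by) the other project.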


 Our job as developers is to make things easier (or at least
 possible) for the people who use the stuff we build. Naturally we
 want to make that as frictionless as possible, but not at the cost
 of the people's ease.

 There are many perverse incentives in OpenStack's culture which
 encourage people to hoard. For example it is useful to keep code in
 one's own team's repository because the BPs, reviews and bugs which
 reflect on that repository reflect on the value of the team.

 Who is that good for?

 So much of the talk is about trying to figure out how to make the
 gate more resilient. No! How about we listen to what the gate is
 telling us: Our code is full of race conditions, a memory pig,
 poorly defined contracts, and just downright tediously slow and
 heavy. And _fix that_.


 I think it's important to discuss both things: the gate structure (and
 various inefficiencies the gate *policies* create within the system) as well
 as the fragility and design of the code bases themselves.

++


 What I think we need to do to improve is enhance the granularity at
 which someone can participate. Smaller repos, smaller teams, cleaner
 boundaries between things. Disconnected (and rolling) release cycles.
 Achieve fail fast by testing in (much) smaller lumps before code
 ever reaches the global CI. You know: make better tests locally that
 confirm good boundaries. Don't run integration tests until the unit,
 pep8 and in-tree functional tests have 

Re: [openstack-dev] [Openstack-docs] Contributing to docs without Docbook -- YES you can!

2014-10-03 Thread Stefano Maffulli
hi Nick,

On 09/29/2014 02:06 PM, Nicholas Chase wrote:
 Because we know that the networking documentation needs particular
 attention, we're starting there.  We have a Networking Guide, from which
 we will ultimately pull information to improve the networking section of
 the admin guide.  

I love experiments and I appreciate your effort to improve the
situation. It's not clear to me what the experiment wants to demonstrate
and I'd appreciate more details.

 The preliminary Table of Contents is here: 
 https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the
 instructions for contributing are as follows:

This is cool and I see there is a blueprint also assigned
https://blueprints.launchpad.net/openstack-manuals/+spec/create-networking-guide

  1. Pick an existing topic or create a new topic. For new topics, we're
 primarily interested in deployment scenarios.
  2. Develop content (text and/or diagrams) in a format that supports at
 least basic markup (e.g., titles, paragraphs, lists, etc.).
  3. Provide a link to the content (e.g., gist on github.com, wiki page,
 blog post, etc.) under the associated topic.

Points 1-3 seem to be oriented at removing Launchpad from the equation.
Is that all there is? I guess it makes sense to remove obstacles,
although editing the wiki (since it requires a launchpad account anyway)
may not be the best way to track progress and see assignments.

  4. Send e-mail to reviewers network...@openstacknow.com.

Why not use the docs mailing list or other facilities on @openstack.org?
Who is responding to that address?

  5. A writer turns the content into an actual patch, with tracking bug,
 and docs reviewers (and the original author, we would hope) make
 sure it gets reviewed and merged.

This is puzzling: initially I thought that you had some experimental
magic software that would monitor edits to the wiki TOC page, go grab
html content from gist, blog post, etc, transform that into docbook or
something similar and magically create a task on LP for a doc writer to
touch up and send for review.

My understanding is that the Docs team has been using bug reports on
Launchpad to receive contributions and a writer would pick them from the
list, taking care of the transformation to docbook and gerrit workflow.

Point 5. makes the experiment look like the process already in place,
only using a wiki page first (instead of a blueprint first) and a
private email address instead of a public bug tracker.

Have I got it wrong? Can you explain a bit more why this experiment is
not using the existing process? What is the experiment trying to
demonstrate?

/stef

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [Cinder][TripleO] Last days to elect your PTL!

2014-10-03 Thread Jason Rist
On 10/01/2014 08:22 PM, Tristan Cacqueray wrote:
 Hello Cinder and TripleO contributors,

 Just a quick reminder that elections are closing soon, if you haven't
 already you should use your right to vote and pick your favourite candidate!

 Thanks for your time!
 Tristan



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi Tristan - I'm not a super heavy contributor, but I think I have a few 
commits.  I didn't ever get an email from the voting provider?

Thanks,
Jason

-- 
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen



[openstack-dev] [Ironic] Juno RC1 available

2014-10-03 Thread Devananda van der Veen
(shamelessly copying Thierry's email template)

Hello everyone,

Ironic just published its first Juno release candidate.
The list of fixed bugs and the RC1 tarball are available at:
https://launchpad.net/ironic/juno/juno-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 2014.2 final
version on October 16. You are therefore strongly encouraged to test and
validate this tarball!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/ironic/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/ironic/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's
attention.

Note that the master branch of Ironic is now open for Kilo
development, and feature freeze restrictions no longer apply there.

Regards,
Devananda



Re: [openstack-dev] [Neutron] IPv6 bug fixes that would be nice to have in Juno

2014-10-03 Thread Veiga, Anthony
On 10/3/14, 14:58 , Henry Gessau ges...@cisco.com wrote:

There are some fixes for IPv6 bugs that unfortunately missed the RC1 cut.
These bugs are quite important for IPv6 users and therefore I would like
to
lobby for getting them into a possible RC2 of Neutron Juno.

These are low-risk fixes that would not jeopardize the stability of
Neutron.

1. Network:dhcp port is not assigned EUI64 IPv6 address for SLAAC subnet
Bug: https://bugs.launchpad.net/neutron/+bug/1330826
Fix: https://review.openstack.org/101433

2. Cannot set only one of IPv6 attributes while second is None
Bug: https://bugs.launchpad.net/neutron/+bug/1363064
Fix: https://review.openstack.org/117799

The third one will probably not be allowed since it requires a string
freeze
exception, but I will mention it anyway ...

3. IPv6 slaac is broken when subnet is less than /64
Bug: https://bugs.launchpad.net/neutron/+bug/1357084
Fix: https://review.openstack.org/116525




I'll second this notion. #2 is particularly important, as several modes
like provider networking are going to be broken without this.
-Anthony




[openstack-dev] [OSSN 0028] Nova leaks compute host SMBIOS serial number to guests

2014-10-03 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Nova leaks compute host SMBIOS serial number to guests
- ---

### Summary ###
When Nova is using the libvirt virtualization driver, the SMBIOS
serial number supplied by libvirt is provided to the guest instances
that are running on a compute node. This serial number may expose
sensitive information about the underlying compute node hardware.

### Affected Services / Software ###
Nova, Icehouse, Havana

### Discussion ###
The 'serial' field in guest SMBIOS tables gets populated based on the
libvirt reported UUID of the host hardware. The rationale is to allow
correlation of guests running on the same host.

Unfortunately some hardware vendors use a subset of the host UUID as a
key for retrieving hardware support contract information without
requiring any authentication. In these cases, exposing the host UUID to
the guest is an information leak for those vendors.

The exposed host UUID could theoretically be leveraged by a cloud user
to get an approximate count of the number of unique hosts available to
them in the cloud by launching many short lived VMs.

### Recommended Actions ###
It is possible to override the use of the compute node's SMBIOS data by
libvirt in /etc/libvirt/libvirtd.conf by setting the 'host_uuid'
parameter. This allows setting an arbitrary UUID for identification
purposes that doesn't leak any information about the real underlying
hardware.  It is advised to make use of this override ability to prevent
potential exposure of information about the underlying compute node
hardware.
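
As a concrete illustration, the override described above might look like
the following (the UUID value here is just an example, not a required
value):

```ini
# /etc/libvirt/libvirtd.conf
# Provide an arbitrary UUID (e.g. generated with uuidgen) instead of the
# hardware-derived SMBIOS UUID; restart libvirtd for it to take effect.
host_uuid = "8e52c7f2-61a3-4a9e-b1f0-4c9d2e8b5a10"
```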

In the Juno release of OpenStack, Nova's libvirt driver allows the
source of the host UUID to be controlled via a new 'sysinfo_serial'
config parameter. This new parameter allows the following values:

  - 'auto' - try /etc/machine-id, fallback to libvirt reported
 host UUID (new default)
  - 'hardware' - always use libvirt host UUID (old default)
  - 'os' - always use /etc/machine-id, error if missing
  - 'none' - do not report any value to the guest
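
A minimal sketch of the corresponding Juno-era nova.conf setting
(assuming the option lives in the [libvirt] group) would be:

```ini
# /etc/nova/nova.conf (Juno and later)
[libvirt]
# Do not report any serial number to guests; the alternatives are
# 'auto', 'hardware' and 'os' as described above.
sysinfo_serial = none
```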

In general, it is preferable to use the /etc/machine-id UUID instead
of the host hardware UUID. The former is a recent standard for Linux
distros introduced by systemd to provide a UUID that is unique per
operating system install. This means that even containers will see a
separate /etc/machine-id value. This /etc/machine-id can be expected to
be widely available in current and future distros. If this file is
missing, it is still possible to fallback to the libvirt reported host
UUID.

Administrators concerned about exposing the ability to identify an
underlying compute node by its serial number may wish to disable
reporting of any sysinfo serial field at all by using the 'none' value.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0028
Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1337349
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJULvb7AAoJEJa+6E7Ri+EVJeUH/01GXn1cV7RHqzh1z9ybsJnY
4Cw5OYzjsSOjmkC1t4Y5llx0aSYCpF3CGdXUaN/fOIpn/yqcbzbq4lXt6rLWW4NI
k9NFgOxbqQKFhKUQ6HQZ8jaIhZm2FLzxk+9eV73DlE5kZ8y8o9T/IkmZbRFeWsx2
uzPTQy9P2BJ95XnpoKcsUJBY/3M+8++f6xRj0sU66KZNSjW7xN7MnalrRtwRxIcD
uugXv3iQ+e2ijXZvERw4NQonzSD+fcxBICxW0lUJrejnDn9ZfcJ4MmOGRYuN9sRC
Fr4lstLvBNLlyJ05JD9apusWFNdtbEp/c6gchwCGFZjmvPMXmkQCRMRrNr+H5hw=
=JjnC
-END PGP SIGNATURE-



Re: [openstack-dev] [Neutron] IPv6 bug fixes that would be nice to have in Juno

2014-10-03 Thread Collins, Sean
On Fri, Oct 03, 2014 at 02:58:36PM EDT, Henry Gessau wrote:
 There are some fixes for IPv6 bugs that unfortunately missed the RC1 cut.
 These bugs are quite important for IPv6 users and therefore I would like to
 lobby for getting them into a possible RC2 of Neutron Juno.

Henry and I spoke about these bugs, and I agree with his assessment. +1!
-- 
Sean M. Collins


Re: [openstack-dev] [Cinder][TripleO] Last days to elect your PTL!

2014-10-03 Thread John Griffith
On Fri, Oct 3, 2014 at 1:14 PM, Jason Rist jr...@redhat.com wrote:

 On 10/01/2014 08:22 PM, Tristan Cacqueray wrote:
  Hello Cinder and TripleO contributors,
 
  Just a quick reminder that elections are closing soon, if you haven't
  already you should use your right to vote and pick your favourite
 candidate!
 
  Thanks for your time!
  Tristan
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Hi Tristan - I'm not a super heavy contributor, but I think I have a few
 commits.  I didn't ever get an email from the voting provider?

 Thanks,
 Jason

 --
 Jason E. Rist
 Senior Software Engineer
 OpenStack Management UI
 Red Hat, Inc.
 openuc: +1.972.707.6408
 mobile: +1.720.256.3933
 Freenode: jrist
 github/identi.ca: knowncitizen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Jason,

I could be wrong, but it looks like you've committed to Tuskar and
Horizon.  Since those were uncontested you would not receive a voting
invite.  You only get to vote on the projects you commit to.

Thanks,
John


Re: [openstack-dev] [Cinder][TripleO] Last days to elect your PTL!

2014-10-03 Thread Dougal Matthews
Are the commits to Tuskar or Tuskar UI? Tuskar is under the tripleo project 
group but Tuskar UI is under Horizon IIRC.

On Fri, Oct 3, 2014 at 8:22 PM, John Griffith john.griff...@solidfire.com
wrote:

 On Fri, Oct 3, 2014 at 1:14 PM, Jason Rist jr...@redhat.com wrote:
 On 10/01/2014 08:22 PM, Tristan Cacqueray wrote:
  Hello Cinder and TripleO contributors,
 
  Just a quick reminder that elections are closing soon, if you haven't
  already you should use your right to vote and pick your favourite
 candidate!
 
  Thanks for your time!
  Tristan
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Hi Tristan - I'm not a super heavy contributor, but I think I have a few
 commits.  I didn't ever get an email from the voting provider?

 Thanks,
 Jason

 --
 Jason E. Rist
 Senior Software Engineer
 OpenStack Management UI
 Red Hat, Inc.
 openuc: +1.972.707.6408
 mobile: +1.720.256.3933
 Freenode: jrist
 github/identi.ca: knowncitizen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Hey Jason,
 I could be wrong, but it looks like you've committed to Tuskar and
 Horizon.  Since those were uncontested you would not receive a voting
 invite.  You only get to vote on the projects you commit to.
 Thanks,
 John


Re: [openstack-dev] [Cinder][TripleO] Last days to elect your PTL!

2014-10-03 Thread John Griffith
On Fri, Oct 3, 2014 at 1:24 PM, Dougal Matthews dou...@dougalmatthews.com
wrote:

 Are the commits to Tuskar or Tuskar UI? Tuskar is under the tripleo
 project group but Tuskar UI is under Horizon IIRC.


 On Fri, Oct 3, 2014 at 8:22 PM, John Griffith john.griff...@solidfire.com
  wrote:



 On Fri, Oct 3, 2014 at 1:14 PM, Jason Rist jr...@redhat.com wrote:

 On 10/01/2014 08:22 PM, Tristan Cacqueray wrote:
  Hello Cinder and TripleO contributors,
 
  Just a quick reminder that elections are closing soon, if you haven't
  already you should use your right to vote and pick your favourite
 candidate!
 
  Thanks for your time!
  Tristan
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Hi Tristan - I'm not a super heavy contributor, but I think I have a few
 commits.  I didn't ever get an email from the voting provider?

 Thanks,
 Jason

 --
 Jason E. Rist
 Senior Software Engineer
 OpenStack Management UI
 Red Hat, Inc.
 openuc: +1.972.707.6408
 mobile: +1.720.256.3933
 Freenode: jrist
 github/identi.ca: knowncitizen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  Hey Jason,

 I could be wrong, but it looks like you've committed to Tuskar and
 Horizon.  Since those were uncontested you would not receive a voting
 invite.  You only get to vote on the projects you commit to.

 Thanks,
 John



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Tuskar-UI:
https://review.openstack.org/#/q/owner:+%22Jason+E.+Rist%22,n,z


Re: [openstack-dev] [Neutron] IPv6 bug fixes that would be nice to have in Juno

2014-10-03 Thread Armando M.
I have all of these bugs on my radar, and I want to fast track them
for merging in the next few days.

Please tag the bug reports with 'juno-rc-potential'.

For each of them we can discuss the loss of functionality they cause.
If no workaround can be found, we should definitely cut an RC2.

Armando

On 3 October 2014 12:21, Collins, Sean sean_colli...@cable.comcast.com wrote:
 On Fri, Oct 03, 2014 at 02:58:36PM EDT, Henry Gessau wrote:
 There are some fixes for IPv6 bugs that unfortunately missed the RC1 cut.
 These bugs are quite important for IPv6 users and therefore I would like to
 lobby for getting them into a possible RC2 of Neutron Juno.

 Henry and I spoke about these bugs, and I agree with his assessment. +1!
 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Cinder][TripleO] Last days to elect your PTL!

2014-10-03 Thread Anita Kuno
On 10/03/2014 03:36 PM, John Griffith wrote:
 On Fri, Oct 3, 2014 at 1:24 PM, Dougal Matthews dou...@dougalmatthews.com
 wrote:
 
 Are the commits to Tuskar or Tuskar UI? Tuskar is under the tripleo
 project group but Tuskar UI is under Horizon IIRC.


 On Fri, Oct 3, 2014 at 8:22 PM, John Griffith john.griff...@solidfire.com
 wrote:



 On Fri, Oct 3, 2014 at 1:14 PM, Jason Rist jr...@redhat.com wrote:

 On 10/01/2014 08:22 PM, Tristan Cacqueray wrote:
 Hello Cinder and TripleO contributors,

 Just a quick reminder that elections are closing soon, if you haven't
 already you should use your right to vote and pick your favourite
 candidate!

 Thanks for your time!
 Tristan



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Hi Tristan - I'm not a super heavy contributor, but I think I have a few
 commits.  I didn't ever get an email from the voting provider?

 Thanks,
 Jason

 --
 Jason E. Rist
 Senior Software Engineer
 OpenStack Management UI
 Red Hat, Inc.
 openuc: +1.972.707.6408
 mobile: +1.720.256.3933
 Freenode: jrist
 github/identi.ca: knowncitizen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  Hey Jason,

 I could be wrong, but it looks like you've committed to Tuskar and
 Horizon.  Since those were uncontested you would not receive a voting
 invite.  You only get to vote on the projects you commit to.

 Thanks,
 John



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Tuskar-UI:
 https://review.openstack.org/#/q/owner:+%22Jason+E.+Rist%22,n,z
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Hi all, the point is moot.

The polls closed roughly 6 hours prior to Jason's email.

We do occasionally have people with lost ballots, which is why we
encourage folks to email the election officials in private (instructions
are included in the email to the list) so that we can investigate and
ensure all those eligible to vote in an election do vote.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/047286.html

What to do if you don't see the email and have a commit in at least one
of the programs having an election:
  * check the trash of your gerrit Preferred Email address, in case it
went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project
repos[0] and email me and Anita[2] at the below email addresses. If
we can confirm that you are entitled to vote, we will add you to
the voters list for the appropriate election.


Thanks John, and you are correct. Only Cinder and TripleO had PTL
elections this round; other program leads were chosen by acclamation.

Tuskar-UI is under Horizon:
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=sept-2014-elections#n57

Thanks and when the TC election begins a week from today if you don't
see an email with a link enabling you to vote, do email Tristan and
myself in private so that we can get you your ballot if you are eligible.

Thanks,
Anita.




Re: [openstack-dev] [Openstack-docs] Contributing to docs without Docbook -- YES you can!

2014-10-03 Thread Nick Chase
On Fri, Oct 3, 2014 at 3:07 PM, Stefano Maffulli stef...@openstack.org
wrote:

 hi Nick,

 On 09/29/2014 02:06 PM, Nicholas Chase wrote:
  Because we know that the networking documentation needs particular
  attention, we're starting there.  We have a Networking Guide, from which
  we will ultimately pull information to improve the networking section of
  the admin guide.

 I love experiments and I appreciate your effort to improve the
 situation. It's not clear to me what the experiment wants to demonstrate
 and I'd appreciate more details.


Absolutely.



  The preliminary Table of Contents is here:
  https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the
  instructions for contributing are as follows:

 This is cool and I see there is a blueprint also assigned

 https://blueprints.launchpad.net/openstack-manuals/+spec/create-networking-guide


Correct.




   1. Pick an existing topic or create a new topic. For new topics, we're
  primarily interested in deployment scenarios.
   2. Develop content (text and/or diagrams) in a format that supports at
  least basic markup (e.g., titles, paragraphs, lists, etc.).
   3. Provide a link to the content (e.g., gist on github.com, wiki page,
  blog post, etc.) under the associated topic.

 Points 1-3 seem to be oriented at removing Launchpad from the equation.
 Is that all there is? I guess it makes sense to remove obstacles,
 although editing the wiki (since it requires a launchpad account anyway)
 may not be the best way to track progress and see assignments.


No, really, the main change is in step 5.  Launchpad isn't the problem, as
far as we can tell; Docbook is.



   4. Send e-mail to reviewers network...@openstacknow.com.

 Why not use the docs mailing list or other facilities on @openstack.org?
 Who is responding to that address?


If someone wants to provide us a list on @openstack.org, that'd be awesome.
I set up this address because I control the forwarding and could do it
immediately without having to ask for anyone's approval. :)

People on the alias are myself, Edgar Magana, Matt Kasawara, Phil Hopkins,
Anne Gentle, and Elke Vorheis.



   5. A writer turns the content into an actual patch, with tracking bug,
  and docs reviewers (and the original author, we would hope) make
  sure it gets reviewed and merged.

 This is puzzling: initially I thought that you had some experimental
 magic software that would monitor edits to the wiki TOC page, go grab
 html content from gist, blog post, etc, transform that into docbook or
 something similar and magically create a task on LP for a doc writer to
 touch up and send for review.


Wouldn't THAT be fantastic.  No, unfortunately not.  This is a process
experiment, rather than a technology experiment.


 My understanding is that the Docs team has been using bug reports on
 Launchpad to receive contributions and a writer would pick them from the
 list, taking care of the transformation to docbook and gerrit workflow.


Bug reports are great, and we do want to continue getting those -- and the
more information for the writer, the better! -- but that's a process where
the developer says, hey, I think you should write something about X.
This is the opposite.  We're saying, Hey, we want to write about X, does
anybody have any resources?  Or if you think we should write about Y, do
you have something already fleshed out (versus a paragraph you'd add in a
bug report)?


 Point 5. makes the experiment look like the process already in place,
 only using a wiki page first (instead of a blueprint first) and a
 private email address instead of a public bug tracker.


Well, you're half-right.  It's like the process already in place, only
using a wiki page first and having a dedicated writer pick a developer's
brain and actually produce the prose and put it into Docbook, rather than
holding a gun to the developer's head and forcing him or her to write
Docbook in order to contribute to the docs.


 Have I got it wrong? Can you explain a bit more why this experiment is
 not using the existing process? What is the experiment trying to
 demonstrate?


The experiment is trying to determine whether we can increase the level of
developer participation in the docs process by removing the hurdles of:

1)  Deciphering where in the docs repo content goes
2)  Learning XML in general, and Docbook in particular
3)  Figuring out how to get docs to build
4)  And so on, until the additions are actually merged

Does that clear it up?

Thanks...

  Nick



 /stef

 --
 Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Eoghan Glynn


 So a bit of background here. This began from thinking about functional
 dependencies, and pondering whether a map of the dependency graph of
 our projects could inform our gating structure, specifically to
 encourage (or dare I say, actually force) all of us (the project
 teams) to become more cognizant of the API contracts we are making
 with each other, and the pain it causes when we break those contracts.
 
 Let's not extend this exercise to a gigantic
 everything-needs-everything-to-do-everything picture, which is where
 it's heading now. Sure, telemetry is important for operators, and in
 no way am I saying anything else when I say: for Nova to fulfill its
 primary goal, telemetry is not necessary. It is optional. Desired, but
 optional.

I don't follow the optional-but-not-necessary argument here, or
where you're applying the cut-off for the graph not turning into
a gigantic picture.

There were a bunch of relationships in the original graph that
are not strictly necessary for nova to fulfill it's primary goal,
but are desired and existing functional dependencies in any case.

So, are we trying to capture all dependencies here, or to apply
some value-judgement and selectively capture just the good
dependencies, for some definition of good?

 Even saying nova CAN-USE ceilometer is incorrect, though, since Nova
 isn't actually using Ceilometer to accomplish any functional task
 within it's domain. More correct would be to say Ceilometer
 CAN-ACCEPT notifications FROM Nova. Incidentally, this is very
 similar to the description of Heat and Horizon, except that, instead
 of consuming a public API, Ceilometer is consuming something else
 (internal notifications).

In addition to consuming notifications from nova, ceilometer also
calls out to the public nova, keystone, glance, neutron, and swift
APIs.

Hence: ceilometer CAN-USE [nova, keystone, glance, neutron, swift].

In addition, the ceilometer API is invoked by heat and horizon.

I've submitted a pull request with these relationships, neglecting
the consumes-notifications-from relationship for now.

Cheers,
Eoghan

 -Deva
 
 On Fri, Oct 3, 2014 at 10:38 AM, Chris Dent chd...@redhat.com wrote:
  On Fri, 3 Oct 2014, Joe Gordon wrote:
 
  data is coming from here:
  https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
  and the key is here: https://github.com/jogo/graphing-openstack
 
 
  Cool, thanks.
 
  Many of those services expect[1] to be able to send notifications (or
  be polled by) ceilometer[2]. We've got an ongoing thread about the need
  to contractualize notifications. Are those contracts (or the desire
  for them) a form of dependency? Should they be?
 
 
  So in the case of notifications, I think that is a Ceilometer CAN-USE Nova
  THROUGH notifications
 
 
  Your statement here is part of the reason I asked. I think it is
  possible to argue that the dependency has the opposite order: Nova might
  like to use Ceilometer to keep metrics via notifications or perhaps:
  Nova CAN-USE Ceilometer FOR telemetry THROUGH notifications and polling.
 
  This is perhaps not the strict technological representation of the
  dependency, but it represents the sort of pseudo-social
  relationships between projects: Nova desires for Ceilometer (or at
  least something doing telemetry) to exist.
 
  Ceilometer itself is^wshould be agnostic about what sort of metrics are
  coming its way. It should accept them, potentially transform them, store
  them, and make them available for later use (including immediately). It
  doesn't^wshouldn't really care if Nova exists or not.
 
  There are probably lots of other relationships of this form between
  other services, thus the question: Is a use-of-notifications
  something worth tracking? I would say yes.
 
 
  --
  Chris Dent tw:@anticdent freenode:cdent
  https://tank.peermore.com/tanks/cdent
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Chris Friesen

On 10/03/2014 02:33 PM, Eoghan Glynn wrote:




So a bit of background here. This began from thinking about functional
dependencies, and pondering whether a map of the dependency graph of
our projects could inform our gating structure, specifically to
encourage (or dare I say, actually force) all of us (the project
teams) to become more cognizant of the API contracts we are making
with each other, and the pain it causes when we break those contracts.

Let's not extend this exercise to a gigantic
everything-needs-everything-to-do-everything picture, which is where
it's heading now. Sure, telemetry is important for operators, and in
no way am I saying anything else when I say: for Nova to fulfill its
primary goal, telemetry is not necessary. It is optional. Desired, but
optional.


I don't follow the optional-but-not-necessary argument here, or
where you're applying the cut-off for the graph not turning into
a gigantic picture.

There were a bunch of relationships in the original graph that
are not strictly necessary for nova to fulfill it's primary goal,
but are desired and existing functional dependencies in any case.

So, are we trying to capture all dependencies here, or to apply
some value-judgement and selectively capture just the good
dependencies, for some definition of good?


I would suggest that we look at things from the point of view of what 
the component needs in order to accomplish its goals.


As an example, for nova to do anything useful it needs 
neutron/keystone/glance.


If the end-user wants persistent block storage, then nova needs cinder.

If the end-user wants object storage, then nova needs swift.

For heat to be really useful it needs both ceilometer and nova.


On the other hand, nova doesn't *need* 
ceilometer/heat/trove/ironic/zaqar for anything.



In terms of integration testing, this means that a change within 
ceilometer/heat/trove/ironic/zaqar/etc. that breaks them should not 
block submissions to nova.  On the other hand a change within neutron 
that breaks it probably will block submissions to nova.


On the flip side, it may be beneficial for 
ceilometer/heat/trove/ironic/zaqar/etc. to develop against a stable 
snapshot of nova/neutron/keystone/glance so that they're isolated 
against changes that break them.  (Though they may want to run 
integration tests against the most recent versions to catch API breakage 
early.)


Chris



Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Devananda van der Veen
On Fri, Oct 3, 2014 at 1:33 PM, Eoghan Glynn egl...@redhat.com wrote:


 So a bit of background here. This began from thinking about functional
 dependencies, and pondering whether a map of the dependency graph of
 our projects could inform our gating structure, specifically to
 encourage (or dare I say, actually force) all of us (the project
 teams) to become more cognizant of the API contracts we are making
 with each other, and the pain it causes when we break those contracts.

 Let's not extend this exercise to a gigantic
 everything-needs-everything-to-do-everything picture, which is where
 it's heading now. Sure, telemetry is important for operators, and in
 no way am I saying anything else when I say: for Nova to fulfill its
 primary goal, telemetry is not necessary. It is optional. Desired, but
 optional.

 I don't follow the optional-but-not-necessary argument here, or
 where you're applying the cut-off for the graph not turning into
 a gigantic picture.

 There were a bunch of relationships in the original graph that
 are not strictly necessary for nova to fulfill it's primary goal,
 but are desired and existing functional dependencies in any case.


For nova to do anything useful at all, it very simply needs an
identity service (keystone), an image registry service (glance), and a
network service (neutron (ignoring the fact that nova-network is still
there because we actually want it to go away)). Without these, Nova is
utterly useless.

So, from a minimalist compute-centric perspective, THAT'S IT.

 So, are we trying to capture all dependencies here, or to apply
 some value-judgement and selectively capture just the good
 dependencies, for some definition of good?

Nope. I am not making any value judgement whatsoever. I'm describing
dependencies for minimally satisfying the intended purpose of a given
project. For example, Nova's primary goal is not to emit telemetry, it
is scalable, on demand, self service access to compute resources [1]

There are a lot of other super-awesome additional capabilities for
which Nova depends on other services. And folks want to add more cool
things on top of, next to, and underneath this ring compute. And
make new non-compute-centric groups of projects. That's all wonderful.

I happen to also fall in that camp - I think Ironic is a useful
service, but I'm happy for it to not be in that inner ring of
codependency. The nova.virt.ironic driver is optional from Nova's POV
(Nova works fine without it), and Nova is optional from Ironic's POV
(it's a bit awkward, but Ironic can deploy without Nova, though we're
not testing it like that today).

On the other hand, from a minimalist telemetry-centric perspective,
what other projects do I need to run Ceilometer? Correct me if I'm
wrong, but I think the answer is None. Or rather, whichever ones I
want. If I'm running Nova and not running Swift, Ceilometer works with
Nova. If I'm running Swift but not Nova, Ceilometer works with Swift.
There's no functional dependency on either Nova or Swift here - it's
just consumption of an API. None of which is bad - but this informs
how we gate test these projects, and how we allocate certain resources
(like debugging gate-breaking bugs) across projects.


-Devananda

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n6

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Devananda van der Veen
On Fri, Oct 3, 2014 at 2:13 PM, Chris Friesen
chris.frie...@windriver.com wrote:
 On 10/03/2014 02:33 PM, Eoghan Glynn wrote:



 So a bit of background here. This began from thinking about functional
 dependencies, and pondering whether a map of the dependency graph of
 our projects could inform our gating structure, specifically to
 encourage (or dare I say, actually force) all of us (the project
 teams) to become more cognizant of the API contracts we are making
 with each other, and the pain it causes when we break those contracts.

 Let's not extend this exercise to a gigantic
 everything-needs-everything-to-do-everything picture, which is where
 it's heading now. Sure, telemetry is important for operators, and in
 no way am I saying anything else when I say: for Nova to fulfill its
 primary goal, telemetry is not necessary. It is optional. Desired, but
 optional.


 I don't follow the optional-but-not-necessary argument here, or
 where you're applying the cut-off for the graph not turning into
 a gigantic picture.

 There were a bunch of relationships in the original graph that
  are not strictly necessary for nova to fulfill its primary goal,
 but are desired and existing functional dependencies in any case.

 So, are we trying to capture all dependencies here, or to apply
 some value-judgement and selectively capture just the good
 dependencies, for some definition of good?


 I would suggest that we look at things from the point of view of what the
 component needs in order to accomplish its goals.

 As an example, for nova to do anything useful it needs
 neutron/keystone/glance.

 If the end-user wants persistent block storage, then nova needs cinder.

 If the end-user wants object storage, then nova needs swift.

 For heat to be really useful it needs both ceilometer and nova.


 On the other hand, nova doesn't *need* ceilometer/heat/trove/ironic/zaqar
 for anything.


 In terms of integration testing, this means that a change within
 ceilometer/heat/trove/ironic/zaqar/etc. that breaks them should not block
 submissions to nova.  On the other hand a change within neutron that breaks
 it probably will block submissions to nova.


Yep, exactly.

And conversely, if a change in Nova *did* break
ceilometer/heat/trove/ironic/zaqar/etc, well, *shit*, someone screwed
up. Either we just broke an API contract in Nova and we should
immediately revert that, or someone was depending on an unstable API
that they shouldn't have been, or they were depending on undocumented
behavior which they really shouldn't have been. But in every case, we,
the developers, did something bad, and should fix it.

 On the flip side, it may be beneficial for
 ceilometer/heat/trove/ironic/zaqar/etc. to develop against a stable snapshot
 of nova/neutron/keystone/glance so that they're isolated against changes
 that break them.  (Though they may want to run integration tests against the
 most recent versions to catch API breakage early.)


++

What I'm proposing moves us towards this, but I don't think we're
ready for it today.


Cheers,
Devananda



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-03 Thread Monty Taylor
On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances (as
 different zones), rather than a single monolithic OpenStack instance, for
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST interface);

 At the same time, they also want to integrate these OpenStack instances into
 one cloud. Instead of a proprietary orchestration layer, they want to use the
 standard OpenStack framework for northbound API compatibility with Heat/Horizon
 or other third-party ecosystem apps.

 We call this pattern OpenStack Cascading, with the proposal described in
 [1][2]. A PoC live demo video can be found in [3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
 OpenStack cascading.

 We kindly ask for a cross-program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 We kindly invite those who are interested in OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for the “other projects” track [5], but it would be better to
 have a discussion as a formal cross-program session, because many core
 programs are involved.)


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube): http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

 I am interested in comparing this to Nova's cells concept:
 http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

 Cells basically scales out a single datacenter region by aggregating
 multiple child Nova installations with an API cell.

 Each child cell can be tested in isolation, via its own API, before joining
 it up to an API cell that adds it into the region. Each cell logically has
 its own database and message queue, which helps get more independent failure
 domains. You can use cell-level scheduling to restrict people or types of
 instances to particular subsets of the cloud, if required.

 It doesn't attempt to aggregate between regions; they are kept independent,
 except for the usual assumption that you have a common identity between all
 regions.

 It also keeps a single Cinder, Glance, Neutron deployment per region.

 It would be great to get some help hardening, testing, and building out more
 of the cells vision. I suspect we may form a new Nova subteam to try and
 drive this work forward in kilo, if we can build up enough people wanting to
 work on improving cells.

 
 At CERN, we've deployed cells at scale but are finding a number of 
 architectural issues that need resolution in the short term to attain feature 
 parity. A vision of we all run cells but some of us have only one is not 
 there yet. Typical examples are flavors, security groups and server groups, 
 all of which are not yet implemented to the necessary levels for cell 
 parent/child.
 
 We would be very keen on agreeing the strategy in Paris so that we can ensure 
 the gap is closed, test it in the gate and that future features cannot 
 'wishlist' cell support.

I agree with this. I know that there are folks who don't like cells -
but I think that ship has sailed. It's there - which means we need to
make it first class.

 Resources can be made available if we can agree the direction but current 
 reviews are not progressing (such as 
 https://bugs.launchpad.net/nova/+bug/1211011)
 
 Tim
 
 Thanks,
 John

 




[openstack-dev] [Openstack] [sahara] Sahara 2014.1 Bad network format issue

2014-10-03 Thread Erming Pei

Hi,

 I have already installed the latest Sahara packages. However, when I was 
trying to start a cluster from Horizon, I still encountered the "Bad 
network format" issue, which seems to indicate something wrong with the mapping of 
neutron_management_network to net_id.


2014-10-03 13:13:43.956 704 TRACE sahara.context BadRequest: Bad network 
format: missing 'uuid' (HTTP 400) (Request-ID: 
req-c1eaf072-3a62-43c0-94cc-25d0d30b8ef1)


Any comment to this issue?

Is there a concrete example for me to run from the CLI for the 
cluster-create command (especially the JSON part)?



Thanks,

Erming



On 4/17/14 2:28 PM, Sergey Lukjanov wrote:

Hi everyone,

I'm glad to announce the final release of Sahara 2014.1 Icehouse.
During this cycle we've completed 58 blueprints and fixed 124 bugs.

You can find source tarballs with complete lists of features and bug fixes:

https://launchpad.net/sahara/icehouse/2014.1

Release notes contain an overview of key new features:
https://wiki.openstack.org/wiki/Sahara/ReleaseNotes/Icehouse

Thanks!





--

 Erming Pei,  Senior System Analyst

 Information Services & Technology
 University of Alberta, Canada

 Tel: 7804929914    Fax: 7804921729





[openstack-dev] [Neutron] Barbican Integration for Advanced Services

2014-10-03 Thread Adam Harwell
I've made an attempt at mapping out exactly how Neutron Advanced Services will 
communicate with Barbican to retrieve Certificate/Key info for TLS purposes. 
These diagrams have gone through several revisions, but are still an early 
draft of the interactions: http://imgur.com/a/4u6Oz

Note that these diagrams use Neutron-LBaaS as the example use-case, but the 
flow would be essentially the same for any service (FWaaS, VPNaaS, etc). The 
code that handles this will be in neutron/common/ so that it can be used by any 
extension. There is a WIP CR here (though right now it doesn't look anything 
like the final version, including very badly named and organized functions): 
https://review.openstack.org/#/c/123492/

Hopefully this is not a new concept, as I believe we agreed during the Atlanta 
summit that using Barbican to store TLS cert/key data was the appropriate path 
forward for Neutron (and other OpenStack projects).

I assume there may be other teams investigating very similar integration 
schemes as well, so if anyone has comments or suggestions, I'd love to hear 
them.

Thanks,
--Adam Harwell

https://keybase.io/rm_you



Re: [openstack-dev] [oslo] hold off on making releases

2014-10-03 Thread Sean Dague
On 10/02/2014 06:08 PM, Doug Hellmann wrote:
 Sean Dague is working on adjusting the gate tests related to Oslo libraries 
 and the integrated projects. I don’t think we have any releases planned, but 
 just in case:
 
 Please wait to tag any new releases of any Oslo libraries until this work is 
 complete to ensure that the new jobs are functioning properly and that we are 
 actually running the tests that Jenkins reports as passing.
 
 Either Sean or I will follow up to this email when the coast is clear again.
 
 Thanks!
 Doug

So... it's not quite all clear yet, but as soon as we get project config
approval on https://review.openstack.org/#/c/126082/ I think we're ready
to go.

For people that weren't keeping up on the released libraries thread,
here's what we've done.

OpenStack components are now tested with released versions of oslo
libraries by default. So the gate jobs are no longer using the git (master)
versions; they are using released versions. If you add new features that
a project wants to use, you'll need to cut a release.

To ensure backwards compatibility, you'll notice a bunch of new jobs
called -tempest-dsvm-(.*)src-LIBRARYNAME

This builds a devstack environment where oslo libraries are all at the
released version *except* for LIBRARYNAME. So that gives you a test of
that library's proposed commits.

I've added a nova-network and a neutron version of those jobs for all
libs. And a largeops version for oslo.db, oslo.messaging, and
oslo.rootwrap (which used to gate on largeops jobs).

The impact after https://review.openstack.org/#/c/125720/ lands (which
removes the old jobs which don't do what you think any more) is that
oslo libraries will no longer cogate with the integrated release. They
will instead each end up with their own gate where they are testing
their proposed commits vs. the rest of the environment.

That decoupling should be good for everyone. It will massively shorten
oslo lib gate pipelines. It will take a bunch of stuff out of the
integrated gate. It will ensure that integrated projects don't depend on
unreleased library features (big win for ops). And it will provide every
developer with a cookie and a unicorn.*

Updated docs on oslo lib creation will come early next week.

-Sean

* no actual unicorns, we can probably do cookies at summit.

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Openstack] [sahara] Sahara 2014.1 Bad network format issue

2014-10-03 Thread Erming Pei

Just figured it out:


# cat cluster_create.json
{
   "name": "cluster-1",
   "plugin_name": "vanilla",
   "hadoop_version": "2.3.0",
   "cluster_template_id": "8300d4a7-c1aa-4984-8528-9f250a6d175f",
   "user_keypair_id": "realkey1",
   "default_image_id": "43b4aa35-6579-4fae-b507-be17aab8fd36",
   "neutron_management_network": "1431de46-fb58-4dfd-a866-e5195550b54d"
}

# sahara cluster-create --json=cluster_create.json

Now I can see the neutron_management_network ID and it's creating.



Cheers,
Erming


On 10/3/14 3:57 PM, Erming Pei wrote:

Hi,

 I have already installed the latest Sahara packages. However, when I was 
trying to start a cluster from Horizon, I still encountered the "Bad 
network format" issue, which seems to indicate something wrong with the mapping 
of neutron_management_network to net_id.


2014-10-03 13:13:43.956 704 TRACE sahara.context BadRequest: Bad 
network format: missing 'uuid' (HTTP 400) (Request-ID: 
req-c1eaf072-3a62-43c0-94cc-25d0d30b8ef1)


Any comment to this issue?

Is there a concrete example for me to run from the CLI for the 
cluster-create command (especially the JSON part)?



Thanks,

Erming



On 4/17/14 2:28 PM, Sergey Lukjanov wrote:

Hi everyone,

I'm glad to announce the final release of Sahara 2014.1 Icehouse.
During this cycle we've completed 58 blueprints and fixed 124 bugs.

You can find source tarballs with complete lists of features and bug 
fixes:


https://launchpad.net/sahara/icehouse/2014.1

Release notes contain an overview of key new features:
https://wiki.openstack.org/wiki/Sahara/ReleaseNotes/Icehouse

Thanks!








--

 Erming Pei,  Senior System Analyst

 Information Services & Technology
 University of Alberta, Canada

 Tel: 7804929914    Fax: 7804921729





Re: [openstack-dev] [Ceilometer] Adding pylint checking of new ceilometer patches

2014-10-03 Thread Angus Lees
You can turn off lots of the refactor recommendation checks.  I've been
running pylint across neutron and it's uncovered half a dozen legitimate
bugs so far - and that's with many tests still disabled.

I agree that the defaults are too noisy, but it's about the only tool that
does linting across files - pep8 for example only looks at the current file
(and not even the parse tree).
On 4 Oct 2014 03:22, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 3, 2014, at 1:09 PM, Neal, Phil phil.n...@hp.com wrote:

  From: Dina Belova [mailto:dbel...@mirantis.com]
  On Friday, October 03, 2014 2:53 AM
 
  Igor,
 
  Personally this idea looks really nice to me, as this will help to avoid
  strange code being merged and not found via reviewing process.
 
  Cheers,
  Dina
 
  On Fri, Oct 3, 2014 at 12:40 PM, Igor Degtiarov
  idegtia...@mirantis.com wrote:
  Hi folks!
 
  I'd like to ask: do we need to check new Ceilometer patches for
  critical errors with pylint?
 
  As far as I know, Nova and Sahara and others have such a check. It is not
  a check of the whole project but a comparison of the number of errors
  without the new patch and with it; if the diff is greater than 0, the
  patch is not accepted.
 
  Looking a bit deeper it seems that Nova struggled with false positives
 and resorted to https://review.openstack.org/#/c/28754/ , which layers
 some historical checking of git on top of pylint's tendency to check only
 the latest commit. I can't say I'm too deeply versed in the code,  but it's
 enough to make me wonder if we want to go that direction and avoid the
 issues altogether?

 I haven’t looked at it in a while, but I’ve never been particularly
 excited by pylint. It’s extremely picky, encourages enforcing some
 questionable rules (arbitrary limits on variable name length?), and reports
 a lot of false positives. That combination tends to result in making
 writing software annoying without helping with quality in any real way.

 Doug

 
 
  I have taken Sahara's solution as a pattern and proposed a patch for
  ceilometer:
  https://review.openstack.org/#/c/125906/
 
  Cheers,
  Igor Degtiarov
 
 
 
 
 
  --
  Best regards,
  Dina Belova
  Software Engineer
  Mirantis Inc.



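
The count-and-compare scheme discussed in this thread is simple enough to sketch in a few lines of Python. The helper names below are illustrative only, not Sahara's actual lintstack API:

```python
def count_errors(pylint_message_ids):
    """Count error-class messages (IDs starting with 'E') from a pylint run."""
    return sum(1 for msg_id in pylint_message_ids if msg_id.startswith("E"))


def gate(base_message_ids, patched_message_ids):
    """Accept a patch only if it introduces no new pylint errors.

    Run pylint on the tree without the patch and again with it applied,
    then compare the error counts: warnings and refactor suggestions are
    ignored, and only a growing error count fails the check.
    """
    delta = count_errors(patched_message_ids) - count_errors(base_message_ids)
    return "PASS" if delta <= 0 else "FAIL (%d new errors)" % delta
```

This keeps pre-existing errors from blocking unrelated patches while still catching regressions, which is the trade-off the thread weighs against pylint's false positives.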


Re: [openstack-dev] [oslo] hold off on making releases

2014-10-03 Thread Jeremy Stanley
On 2014-10-03 18:13:58 -0400 (-0400), Sean Dague wrote:
 So... it's not quite all clear yet, but as soon as we get project
 config approval on https://review.openstack.org/#/c/126082/ I
 think we're ready to go.
[...]

Approved and merged, by the way.

 The impact after https://review.openstack.org/#/c/125720/ lands (which
 removes the old jobs which don't do what you think any more) is that
 oslo libraries will no longer cogate with the integrated release.
[...]

Also approved and merged now.

 * no actual unicorns, we can probably do cookies at summit.

Aww, and I was all set to pack a bottle of my special barbecue sauce
made from crushed fairies.
-- 
Jeremy Stanley



[openstack-dev] [Group-based Policy] Database migration chain

2014-10-03 Thread Ivar Lazzaro
Hi,

Following up the latest GBP team meeting [0][1]:

As we keep going with our Juno stackforge implementation [2], although the
service is effectively a Neutron extension, we should avoid breaking
Neutron's migration chain by adding our model on top of it (and
subsequently changing Neutron's HEAD [3]). If we do that, upgrading from
Juno to Kilo will be painful for those who have used GBP.

There are roughly a couple of possibilities for reaching this goal:

1) Using a separate DB with a separate migration chain;
2) Using the same DB with a separate chain (and a different alembic version
table).
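
For what it's worth, option 2 mostly comes down to a single alembic setting. A sketch of the relevant env.py fragment follows — the module path, metadata wiring, and version-table name are all illustrative, not actual GBP code:

```python
# Sketch of a hypothetical gbp/db/migration/alembic_migrations/env.py.
# Passing version_table= makes the GBP chain record its revisions in its
# own table instead of the default "alembic_version", so both chains can
# share one database without colliding.
from alembic import context
from sqlalchemy import create_engine

config = context.config
target_metadata = None  # would be GBP's model MetaData in practice


def run_migrations_online():
    engine = create_engine(config.get_main_option("sqlalchemy.url"))
    with engine.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            version_table="gbp_alembic_version",
        )
        with context.begin_transaction():
            context.run_migrations()


run_migrations_online()
```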

Personally I prefer option 1, moving to a completely different database
while leveraging cross-database foreign keys.

Please let me know your preference, or alternative solutions! :)

Cheers,
Ivar.

[0]
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-09-25-18.02.log.html
[1]
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-10-02-18.01.log.html
[2] https://github.com/stackforge/group-based-policy
[3] https://review.openstack.org/#/c/123617/


[openstack-dev] [all][docs][tc] How to scale Documentation

2014-10-03 Thread Zane Bitter
Amidst all the discussion about layers and tents there has been this 
lurking issue about Docs and their need to prioritise work that we 
haven't really done a deep dive on yet. I'd like to start that 
discussion by summarising my understanding of the situation, and 
hopefully Anne and others can jump in and tell me what I've gotten 
horribly wrong.


AIUI back in the day, all of the documentation for OpenStack was handled 
in a centralised way by the Docs team. We can all agree that doesn't 
scale, and that was recognised early on during the project's expansion.


The current system is something of a hybrid model - for some subset of 
official projects considered important, the Docs team is directly 
responsible; for the others, the project team has to write the 
documentation. The docs team is available to provide support and tools 
for other official projects.


It's not clear how important is currently defined... one suspects it's 
by date of accession ;)


The prospect of a much larger tent with many more projects in 
OpenStack-proper shines a spotlight on the scalability of the Docs team, 
and in particular how they decide which projects are important to work 
on directly.


There seems to be an implicit argument floating around that there are a 
bunch of projects that depend on each other and we need to create a 
governance structure around blessing this, rather than merely observing 
it as an emergent property of the gate, at least in part because that's 
the only way to tell the Docs team what is important.


There are two problems with this line of thought IMHO. The obvious one 
is of course that the number of bi-directional dependencies of a project 
is not *in any way* related to how important it is to have documentation 
for it. It makes about as much sense as saying that we're only going to 
document projects whose names contain the letter 't'.


The second is that it implies that the Docs team are the only people in 
OpenStack who need the TC to tell them what to work on. Developers 
generally work on what they or their employer feels is important. If you 
see something being neglected you either try to help out or pass the 
word up that more resources are needed in a particular area. Is there 
any reason to think it wouldn't work the same with technical writers? Is 
anyone seriously worried that Nova will go undocumented unless the TC 
creates a project structure that specifically calls it out as important?


What I'm envisioning is that we would move to a completely distributed 
model, where the documentation for each project is owned by that 
project. People who care about the project being documented would either 
work on it themselves and/or recruit developers and/or technical writers 
to do so. The Docs program itself would concentrate on very high-level 
non-project-specific documentation and on standards and tools to help 
deliver a consistent presentation across projects. Of course we would 
expect there to be considerable overlap between the people contributing 
to the Docs program itself and those contributing to documentation of 
the various projects. Docs would also have a formal liaison program to 
ensure that information is disseminated rapidly and evenly.


Is there any part of this that folks don't think would work? How many 
projects could it scale to? Is there a reason we need the TC to bless a 
subset of projects, or is it reasonable to expect that individual docs 
contributors can select what is most important to work on in much the 
same way that most contributors do?


I know there are a lot of subtleties of which I have hitherto remained 
blissfully ignorant. Let's get those out in the open and get to the 
bottom of what impact, if any, the docs scaling issue needs to have on 
our governance structure.


cheers,
Zane.




Re: [openstack-dev] [all][docs][tc] How to scale Documentation

2014-10-03 Thread Anne Gentle
On Fri, Oct 3, 2014 at 8:18 PM, Zane Bitter zbit...@redhat.com wrote:

 Amidst all the discussion about layers and tents there has been this
 lurking issue about Docs and their need to prioritise work that we haven't
 really done a deep dive on yet. I'd like to start that discussion by
 summarising my understanding of the situation, and hopefully Anne and
 others can jump in and tell me what I've gotten horribly wrong.

 AIUI back in the day, all of the documentation for OpenStack was handled
 in a centralised way by the Docs team. We can all agree that doesn't scale,
 and that was recognised early on during the project's expansion.

 The current system is something of a hybrid model - for some subset of
 official projects considered important, the Docs team is directly
 responsible; for the others, the project team has to write the
 documentation. The docs team is available to provide support and tools for
 other official projects.


I think the Docs team isn't directly responsible for any project; we work
with all projects.


 It's not clear how important is currently defined... one suspects it's
 by date of accession ;)


There's more and more docs coverage in the common docs repos the longer a
project has been around, but that's not always due to doc team only work.

This wiki page attempts to describe current integrating/incubating
processing for docs.
https://wiki.openstack.org/wiki/Documentation/IncubateIntegrate

Thing is, it's tough to say that any given project has completed docs. Nova
still probably has the lead with the most doc bugs since Xen is barely
documented and there are a lot of doc bugs for the Compute API versions. So
it's possible to measure with our crude instruments and even with those
measurements, inner chewy center projects could be called under-documented.


 The prospect of a much larger tent with many more projects in
 OpenStack-proper shines a spotlight on the scalability of the Docs team,
 and in particular how they decide which projects are important to work on
 directly.

 There seems to be an implicit argument floating around that there are a
 bunch of projects that depend on each other and we need to create a
 governance structure around blessing this, rather than merely observing it
 as an emergent property of the gate, at least in part because that's the
 only way to tell the Docs team what is important.

 There are two problems with this line of thought IMHO. The obvious one is
 of course that the number of bi-directional dependencies of a project is
 not *in any way* related to how important it is to have documentation for
 it. It makes about as much sense as saying that we're only going to
 document projects whose names contain the letter 't'.

 The second is that it implies that the Docs team are the only people in
 OpenStack who need the TC to tell them what to work on. Developers
 generally work on what they or their employer feels is important. If you
 see something being neglected you either try to help out or pass the word
 up that more resources are needed in a particular area. Is there any reason
 to think it wouldn't work the same with technical writers? Is anyone
 seriously worried that Nova will go undocumented unless the TC creates a
 project structure that specifically calls it out as important?

 What I'm envisioning is that we would move to a completely distributed
 model, where the documentation for each project is owned by that project.
 People who care about the project being documented would either work on it
 themselves and/or recruit developers and/or technical writers to do so. The
 Docs program itself would concentrate on very high-level
 non-project-specific documentation and on standards and tools to help
 deliver a consistent presentation across projects. Of course we would
 expect there to be considerable overlap between the people contributing to
 the Docs program itself and those contributing to documentation of the
 various projects. Docs would also have a formal liaison program to ensure
 that information is disseminated rapidly and evenly.


I still maintain the thinking that common OpenStack docs are very valuable
and the information architecture across projects could only go so far to
truly serve users.

Here's my current thinking and plan of attack on multiple fronts. Oh, that
analogy is so militaristic, I'd revise more peacefully but ... time. :)

1. We need better page-based design and navigation for many of our docs.
I'm working with the Foundation on a redesign. This may include simpler
source files.
2. We still need book-like output. Examples of books include the
Installation Guides, Operations Guide, the Security Guide, and probably the
Architecture Design Guide. Every other bit of content can go into pages if
we have decent design, information architecture, navigation, and
versioning. Except maybe the API reference [0], that's a special beast.
3. We need better maintenance and care of app developer docs like the API
Reference. This also includes simpler source files.

Re: [openstack-dev] [all][docs][tc] How to scale Documentation

2014-10-03 Thread Chris Friesen

On 10/03/2014 07:50 PM, Anne Gentle wrote:


Here's my current thinking and plan of attack on multiple fronts. Oh,
that analogy is so militaristic, I'd revise more peacefully but ...
time. :)

1. We need better page-based design and navigation for many of our docs.
I'm working with the Foundation on a redesign. This may include simpler
source files.
2. We still need book-like output. Examples of books include the
Installation Guides, Operations Guide, the Security Guide, and probably
the Architecture Design Guide. Every other bit of content can go into
pages if we have decent design, information architecture, navigation,
and versioning. Except maybe the API reference [0], that's a special beast.
3. We need better maintenance and care of app developer docs like the
API Reference. This also includes simpler source files.


Just curious, has anyone considered rejigging things so that each
component has a single definition of its API that could then be used to
mechanically generate both the API validation code and the API
reference documentation?


It seems silly to have to do the same work twice.  That's what computers 
are for. :)


Or is the up-front effort too high to make it worthwhile?

Chris
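The single-definition idea can be sketched in a few lines. What follows is a minimal hand-rolled illustration with made-up field names (a real project would more likely attach documentation strings to JSON Schema definitions and render them with a Sphinx extension), showing one schema driving both request validation and the rendered reference:

```python
# One schema, two consumers: a validator and a doc generator.
# Field names are hypothetical, purely for illustration.

SERVER_CREATE_SCHEMA = {
    "description": "Request body for creating a server.",
    "fields": {
        "name": {"type": str, "required": True,
                 "doc": "Human-readable server name."},
        "flavor_id": {"type": str, "required": True,
                      "doc": "ID of the flavor to boot from."},
        "metadata": {"type": dict, "required": False,
                     "doc": "Arbitrary key/value pairs."},
    },
}


def validate(body, schema):
    """Return a list of validation errors for a request body."""
    errors = []
    for field, spec in schema["fields"].items():
        if field not in body:
            if spec["required"]:
                errors.append("missing required field: %s" % field)
            continue
        if not isinstance(body[field], spec["type"]):
            errors.append("bad type for %s" % field)
    for field in body:
        if field not in schema["fields"]:
            errors.append("unknown field: %s" % field)
    return errors


def schema_to_rst(schema):
    """Render the same schema as an API-reference fragment."""
    lines = [schema["description"], ""]
    for field, spec in sorted(schema["fields"].items()):
        req = "required" if spec["required"] else "optional"
        lines.append("``%s`` (%s, %s): %s"
                     % (field, spec["type"].__name__, req, spec["doc"]))
    return "\n".join(lines)
```

Because both outputs derive from one source, the validation code and the reference table cannot drift apart.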

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Group-based Policy] Database migration chain

2014-10-03 Thread Kevin Benton
Does sqlalchemy have good support for cross-database foreign keys? I was
under the impression that they cannot be implemented with the normal syntax
and semantics of an intra-database foreign-key constraint.

On Fri, Oct 3, 2014 at 5:25 PM, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 Hi,

 Following up the latest GBP team meeting [0][1]:

 As we keep going with our Juno stackforge implementation [2]: although the
 service is effectively a Neutron extension, we should avoid breaking
 Neutron's migration chain by adding our model on top of it (and
 subsequently changing Neutron's HEAD [3]). If we did that, upgrading from
 Juno to Kilo would be painful for those who have used GBP.

 There are roughly a couple of possibilities for reaching this goal:

 1) Using a separate DB with a separate migration chain;
 2) Using the same DB with a separate chain (and a different alembic
 version table).

 Personally I prefer option 1: moving to a completely different
 database while leveraging cross-database foreign keys.

 Please let me know your preference, or alternative solutions! :)

 Cheers,
 Ivar.

 [0]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-09-25-18.02.log.html
 [1]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-10-02-18.01.log.html
 [2] https://github.com/stackforge/group-based-policy
 [3] https://review.openstack.org/#/c/123617/





-- 
Kevin Benton