Re: [openstack-dev] [FUEL]Re-thinking Fuel Client

2014-11-20 Thread Vladimir Kozhukalov
Roman,

I am absolutely +1 for re-designing fuel client and bringing it out of
fuel-web repo.

If you ask me, it is also important to make new design following kind of
standard just to avoid re-re-designing it in the foreseeable future. Some
points here are:
0) Rename fuelclient into python-fuelclient like any other OpenStack
clients when moving it to a separate repo.
1) Use cliff as a cli library. AFAIU it is a kind of unofficial standard
for OpenStack clients for future. At least python-openstackclient uses
cliff. Correct me if I am wrong.
2) Follow common OpenStack practice for naming files and directories in a
project (shell.py, api, object, etc). I am not sure whether such a common
practice exists, but we again can follow python-openstackclient naming
model.
3) Use oslo for auth stuff (Fuel uses keystone at the moment) and wherever
it is suitable.





Vladimir Kozhukalov

On Mon, Nov 17, 2014 at 8:08 PM, Roman Prykhodchenko 
rprikhodche...@mirantis.com wrote:

 Hi folks!

 I’ve made several internal discussions with Łukasz Oleś and Igor Kalnitsky
 and decided that the existing Fuel Client has to be redesigned.
 The implementation of the client we have at the moment does not seem to be
 compliant with most of the use cases people have in production and cannot
 be used as a library-wrapper for FUEL’s API.

 We’ve came of with a draft of our plan about redesigning Fuel Client which
 you can see here: https://etherpad.openstack.org/p/fuelclient-redesign
 Everyone is welcome to add their notes, suggestions basing on their needs
 and use cases.

 The next step is to create a detailed spec and put it to everyone’s review.



 - romcheg


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Tracking Kilo priorities

2014-11-20 Thread Michael Still
Hi,

as discussed at the summit, we want to do a better job of tracking the
progress of work on our priorities for Kilo. To that end, we have
agreed to discuss the current state of these at each nova meeting.

I have created this etherpad:

https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

If you are the owner of a priority, please ensure it lists the reviews
you currently require before the meeting tomorrow. If you could limit
your entry to less than five reviews, that would be good.

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-20 Thread Sylvain Bauza


Le 20/11/2014 10:17, Michael Still a écrit :

Hi,

as discussed at the summit, we want to do a better job of tracking the
progress of work on our priorities for Kilo. To that end, we have
agreed to discuss the current state of these at each nova meeting.

I have created this etherpad:

 https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

If you are the owner of a priority, please ensure it lists the reviews
you currently require before the meeting tomorrow. If you could limit
your entry to less than five reviews, that would be good.

Thanks,
Michael



I'm anticipating a little bit, but as next week it will be Thanksgiving 
for our US peers, do you envisage any other exceptional timeslot for 
reviewing priorities ?


As the meetings are alternating, that means that people in EU timezones 
aren't able to attend weekly meetings until Dec 11th, which is one week 
before Kilo-1 milestone.


Thanks,
-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-20 Thread Matsuda, Kenichiro
Hi,

I would like to know about a way of checking replicate completion on swift 
cluster.
(e.g. after rebalanced Ring)

I found the way of using swift-dispersion-report from Administrator's Guide.
But, this way is not enough, because swift-dispersion-report can't checking 
replicate completion for other data that made by not swift-dispersion-populate.

And also, I found the way of using replicator's logs from QA.
But, I would like to more easy way, because check of below logs is very heavy.

  (account/container/object)-replicator * All storage node on swift cluster

Could you please advise me for it?

Findings:
  Administrator's Guide  Cluster Health
http://docs.openstack.org/developer/swift/admin_guide.html#cluster-health
  how to check replicator work complete

https://ask.openstack.org/en/question/18654/how-to-check-replicator-work-complete/

Best Regards,
Kenichiro Matsuda.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-20 Thread Flavio Percoco

On 20/11/14 20:17 +1100, Michael Still wrote:

Hi,

as discussed at the summit, we want to do a better job of tracking the
progress of work on our priorities for Kilo. To that end, we have
agreed to discuss the current state of these at each nova meeting.

I have created this etherpad:

   https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

If you are the owner of a priority, please ensure it lists the reviews
you currently require before the meeting tomorrow. If you could limit
your entry to less than five reviews, that would be good.


Should the glanceclient and glance_store adoption be listed there?

I've a spec[0] for glanceclient and I'll work on the glance_store one
now.

[0] https://review.openstack.org/#/c/133485/

Flavio

--
@flaper87
Flavio Percoco


pgpmPWgBJiwpY.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Salvatore Orlando
On 20 November 2014 02:19, Sukhdev Kapur sukhdevka...@gmail.com wrote:

 Folks,

 Like Ian, I am jumping in this very late as well - as I decided to travel
 Europe after the summit, just returned back and  catching up :-):-)

 I have noticed that this thread has gotten fairly convoluted and painful
 to read.

 I think Armando summed it up well in the beginning of the thread. There
 are basically three written proposals (listed in Armando's email - I pasted
 them again here).

 [1] https://review.openstack.org/#/c/134179/
 [2] https://review.openstack.org/#/c/100278/
 [3] https://review.openstack.org/#/c/93613/


In this thread I have seen other specs being mentioned as related.
Namely:
1) https://review.openstack.org/#/c/93329/ (BGP VPN)
2) https://review.openstack.org/#/c/101043/ (MPLS vpn)
3) https://review.openstack.org/#/c/87825/ (external device integration)
Note that I am not saying they should be put as well in the mix. I'm only
listing them here as a recap.
There are probably other ideas not yet put in the form of a concrete
specification. In order to avoid further confusion, I would just blindly
ignore proposals which do not exist in the form a specification.



 On this thread I see that the authors of first two proposals have already
 agreed to consolidate and work together. This leaves with two proposals.
 Both Ian and I were involved with the third proposal [3] and have
 reasonable idea about it. IMO, the use cases addressed by the third
 proposal are very similar to use cases addressed by proposal [1] and [2]. I
 can volunteer to  follow up with Racha and Stephen from Ericsson to see if
 their use case will be covered with the new combined proposal. If yes, we
 have one converged proposal. If no, then we modify the proposal to
 accommodate their use case as well. Regardless, I will ask them to review
 and post their comments on [1].


One thing that I've noticed in the past is that contributors are led to
think that the owner of the specification will also be the lead for the
subsequent work. There nothing farther from truth. Sometimes I write specs
with the exact intent of having somebody else lead the implementation. So
don't feel bad to abandon a spec if you realize your use cases can be
completely included in another specification.



 Having said that, this covers what we discussed during the morning session
 on Friday in Paris. Now, comes the second part which Ian brought up in the
 afternoon session on Friday.
 My initial reaction was, when heard his use case, that this new
 proposal/API should cover that use case as well (I am being bit optimistic
 here :-)). If not, rather than going into the nitty gritty details of the
 use case, let's see what modification is required to the proposed API to
 accommodate Ian's use case and adjust it accordingly.


Unfortunately I did not attend that discussion. Possibly 90% of the people
reading this thread did not attend it. It would be nice if Ian or somebody
else posted a write-up, adding more details to what has already been shared
in this thread. If you've already done so please share a link as my
google-fu is not that good these days.



 Now, the last point (already brought up by Salvatore as well as Armando) -
 the abstraction of the API, so that it meets the Neutron API criteria. I
 think this is the critical piece. I also believe the API proposed by [1] is
 very close. We should clean it up and take out references to ToR's or
 physical vs virtual devices. The API should work at an abstract level so
 that it can deal with both physical as well virtual devices. If we can
 agree to that, I believe we can have a solid solution.


 Having said that I would like to request the community to review the
 proposal submitted by Maruti in [1] and post comments on the spec with the
 intent to get a closure on the API. I see lots of good comments already on
 the spec. Lets get this done so that we can have a workable (even if not
 perfect) version of API in Kilo cycle. Something which we can all start to
 play with. We can always iterate over it, and make change as we get more
 and more use cases covered.


Iterate is the key here I believe. As long as we pretend to achieve the
perfect API at the first attempt we'll just keep having this discussion. I
think the first time a L2 GW API was proposed, it was for Grizzly.
For instance, it might relatively easy to define an API which can handle
both physical and virtual devices. The user workflow for a ToR terminated
L2 GW is different from the workflow for a virtual appliance owned by
tenant, and this will obviously reflected in the API. On the other hand, a
BGP VPN might be a completely different use case, and therefore have a
completely different set of APIs.

Beyond APIs there are two more things to mention.
First, we need some sort of open source reference implementation for every
use case. For hardware VTEP obviously this won't be possible, but perhaps
[1] can be used for integration tests.
The 

Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-20 Thread John Garbutt
On 20 November 2014 09:32, Flavio Percoco fla...@redhat.com wrote:
 On 20/11/14 20:17 +1100, Michael Still wrote:

 Hi,

 as discussed at the summit, we want to do a better job of tracking the
 progress of work on our priorities for Kilo. To that end, we have
 agreed to discuss the current state of these at each nova meeting.

 I have created this etherpad:

https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

 If you are the owner of a priority, please ensure it lists the reviews
 you currently require before the meeting tomorrow. If you could limit
 your entry to less than five reviews, that would be good.


 Should the glanceclient and glance_store adoption be listed there?

 I've a spec[0] for glanceclient and I'll work on the glance_store one
 now.

 [0] https://review.openstack.org/#/c/133485/

My memory of the conversation was, its important, but its not on the
top priority list:
https://etherpad.openstack.org/p/kilo-nova-priorities

Cheers,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-20 Thread Duncan Thomas
It is quite possible that the requirement for glance to own images can be
achieved by having a glance tenant in cinder, and using clone and
volume-transfer functionalities in cinder to get copies to the right place.

I know there is some attempts to move away from the single glance tenant
model for swift usage, but doing anything else in cinder will require
significantly more thought/

On 19 November 2014 23:04, Alex Meade mr.alex.me...@gmail.com wrote:

 Hey Henry/Folks,

 I think it could make sense for Glance to store the volume UUID, the idea
 is that no matter where an image is stored it should be *owned* by Glance
 and not deleted out from under it. But that is more of a single tenant vs
 multi tenant cinder store.

 It makes sense for Cinder to at least abstract all of the block storage
 needs. Glance and any other service should reuse Cinders ability to talk to
 certain backends. It would be wasted effort to reimplement Cinder drivers
 as Glance stores. I do agree with Duncan that a great way to solve these
 issues is a third party transfer service, which others and I in the Glance
 community have discussed at numerous summits (since San Diego).

 -Alex



 On Wed, Nov 19, 2014 at 3:40 AM, henry hly henry4...@gmail.com wrote:

 Hi Flavio,

 Thanks for your information about Cinder Store, Yet I have a little
 concern about Cinder backend: Suppose cinder and glance both use Ceph
 as Store, then if cinder  can do instant copy to glance by ceph clone
 (maybe not now but some time later), what information would be stored
 in glance? Obviously volume UUID is not a good choice, because after
 volume is deleted then image can't be referenced. The best choice is
 that cloned ceph object URI also be stored in glance location, letting
 both glance and cinder see the backend store details.

 However, although it really make sense for Ceph like All-in-one Store,
 I'm not sure if iscsi backend can be used the same way.

 On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco fla...@redhat.com
 wrote:
  On 19/11/14 15:21 +0800, henry hly wrote:
 
  In the Previous BP [1], support for iscsi backend is introduced into
  glance. However, it was abandoned because of Cinder backend
  replacement.
 
  The reason is that all storage backend details should be hidden by
  cinder, not exposed to other projects. However, with more and more
  interest in Converged Storage like Ceph, it's necessary to expose
  storage backend to glance as well as cinder.
 
  An example  is that when transferring bits between volume and image,
  we can utilize advanced storage offload capability like linked clone
  to do very fast instant copy. Maybe we need a more general glance
  backend location support not only with iscsi.
 
 
 
  [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store
 
 
  Hey Henry,
 
  This blueprint has been superseeded by one proposing a Cinder store
  for Glance. The Cinder store is, unfortunately, in a sorry state.
  Short story, it's not fully implemented.
 
  I truly think Glance is not the place where you'd have an iscsi store,
  that's Cinder's field and the best way to achieve what you want is by
  having a fully implemented Cinder store that doesn't rely on Cinder's
  API but has access to the volumes.
 
  Unfortunately, this is not possible now and I don't think it'll be
  possible until L (or even M?).
 
  FWIW, I think the use case you've mentioned is useful and it's
  something we have in our TODO list.
 
  Cheers,
  Flavio
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-20 Thread John Garbutt
On 20 November 2014 09:25, Sylvain Bauza sba...@redhat.com wrote:

 Le 20/11/2014 10:17, Michael Still a écrit :

 Hi,

 as discussed at the summit, we want to do a better job of tracking the
 progress of work on our priorities for Kilo. To that end, we have
 agreed to discuss the current state of these at each nova meeting.

 I have created this etherpad:

  https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

 If you are the owner of a priority, please ensure it lists the reviews
 you currently require before the meeting tomorrow. If you could limit
 your entry to less than five reviews, that would be good.

 Thanks,
 Michael


 I'm anticipating a little bit, but as next week it will be Thanksgiving for
 our US peers, do you envisage any other exceptional timeslot for reviewing
 priorities ?

 As the meetings are alternating, that means that people in EU timezones
 aren't able to attend weekly meetings until Dec 11th, which is one week
 before Kilo-1 milestone.

I assume this is in reference to the Kilo-1 deadline for kilo
nova-specs to be merged?

I do hope to track the specs for all the priorities very closely.
Adding your spec reviews into the above etherpad would help me find
them.

Certainly anything on the list gets priority, so I would expect most
spec deadline exceptions to be related to items on that high priority
list.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL]Re-thinking Fuel Client

2014-11-20 Thread Juan Antonio Osorio
Hi,

As a Fuel user I would like to give some input.

0) This would make fuel adhere to the standards for clients, I agree with
this change.
1) cliff is not really necessary, but is actually being adopted by some
other projects (other than OpenStack unified client) I initially proposed
it for barbican and implemented it, so if it helps, I could do the same
work here. If other people are interested in this I could submit a
blueprint.
3) What's the benefit of using oslo auth? I'm not very familiar with it;
I'm actually more familiar with other projects using keystone.

On Thu, Nov 20, 2014 at 11:01 AM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Roman,

 I am absolutely +1 for re-designing fuel client and bringing it out of
 fuel-web repo.

 If you ask me, it is also important to make new design following kind of
 standard just to avoid re-re-designing it in the foreseeable future. Some
 points here are:
 0) Rename fuelclient into python-fuelclient like any other OpenStack
 clients when moving it to a separate repo.
 1) Use cliff as a cli library. AFAIU it is a kind of unofficial standard
 for OpenStack clients for future. At least python-openstackclient uses
 cliff. Correct me if I am wrong.
 2) Follow common OpenStack practice for naming files and directories in a
 project (shell.py, api, object, etc). I am not sure whether such a common
 practice exists, but we again can follow python-openstackclient naming
 model.
 3) Use oslo for auth stuff (Fuel uses keystone at the moment) and wherever
 it is suitable.





 Vladimir Kozhukalov

 On Mon, Nov 17, 2014 at 8:08 PM, Roman Prykhodchenko 
 rprikhodche...@mirantis.com wrote:

 Hi folks!

 I’ve made several internal discussions with Łukasz Oleś and Igor
 Kalnitsky and decided that the existing Fuel Client has to be redesigned.
 The implementation of the client we have at the moment does not seem to
 be compliant with most of the use cases people have in production and
 cannot be used as a library-wrapper for FUEL’s API.

 We’ve came of with a draft of our plan about redesigning Fuel Client
 which you can see here:
 https://etherpad.openstack.org/p/fuelclient-redesign
 Everyone is welcome to add their notes, suggestions basing on their needs
 and use cases.

 The next step is to create a detailed spec and put it to everyone’s
 review.



 - romcheg


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com

All truly great thoughts are conceived by walking.
- F.N.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-20 Thread Sylvain Bauza


Le 20/11/2014 11:15, John Garbutt a écrit :

On 20 November 2014 09:25, Sylvain Bauza sba...@redhat.com wrote:

Le 20/11/2014 10:17, Michael Still a écrit :

Hi,

as discussed at the summit, we want to do a better job of tracking the
progress of work on our priorities for Kilo. To that end, we have
agreed to discuss the current state of these at each nova meeting.

I have created this etherpad:

  https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

If you are the owner of a priority, please ensure it lists the reviews
you currently require before the meeting tomorrow. If you could limit
your entry to less than five reviews, that would be good.

Thanks,
Michael


I'm anticipating a little bit, but as next week it will be Thanksgiving for
our US peers, do you envisage any other exceptional timeslot for reviewing
priorities ?

As the meetings are alternating, that means that people in EU timezones
aren't able to attend weekly meetings until Dec 11th, which is one week
before Kilo-1 milestone.

I assume this is in reference to the Kilo-1 deadline for kilo
nova-specs to be merged?


Agreed, I was unclear.


I do hope to track the specs for all the priorities very closely.
Adding your spec reviews into the above etherpad would help me find
them.


Already had 2 reviews related to specs, but as Michael gently said, I 
don't want to provide all the specs related to the scheduler effort as 
there are 9 in parallel. We're tracking our effort on a separate 
wikipage in https://wiki.openstack.org/wiki/Gantt/kilo#Tasks




Certainly anything on the list gets priority, so I would expect most
spec deadline exceptions to be related to items on that high priority
list.


I truly appreciate this.

-Sylvain

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-20 Thread Sergey Vasilenko

 Nor should it, IMO. Other than the Neutron dhcp-agent, all OpenStack
 services that run on a controller node are completely stateless.
 Therefore, I don't see any reason to use corosync/pacemaker for management
 of these resources.


I see following reasons for managing neutron agents by Pacemaker:

   - *co-location* between resources. For example, L3 and DHCP agents
   should be run only on nodes, that has properly work openvswitch (or another
   L2) agent. Then L2-agent works wrong -- L3 and DHCP agents should be
   stopped immediately. Because Neutron don't control this situation and can
   allocate some resources (router or subnet) to an agent.
   - extended *monitoring*.  Traditional OS init/upstart subsystem allow
   only simple status checking (may be exclude systemd). Now we have
   situation, for example with neutron agents, then some of agent pretends to
   well-working. But in really, do nothing. (unfortunately, openstack
   developed as is) Such agent should be immediately restarted. Our Neutron
   team now works on internal health-checking feature for agents, and I hope,
   that this will implemented in 6.1. For example we can make simple checking
   (pid found, process started) every 10sec, and more deep (RMQ connection,
   internal health-checking) -- mo rare.
   - No different business-logic for different OS. We can use one OCF
   script for Ubuntu, Centos, debian, etc...
   - handle cluster partitioning situation.



 haproxy should just spread the HTTP request load evenly across all API
 services and things should be fine, allowing haproxy's http healthcheck
 monitoring to handle the simple service status checks.


Just HTTP checking not enough. In the future will be better make more deep
checking personal for each openstack service.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] mid-cycle meet-up planning ...

2014-11-20 Thread Thierry Carrez
Jay S. Bryant wrote:
 For those of you that weren't able to make the Kilo meet-up in Paris I
 wanted to send out a note regarding Cinder's Kilo mid-cycle meet-up.
 
 IBM has offered to host it in, warm, sunny, Austin, Texas.  The planned
 dates are January 27, 28 and 29, 2015.
 
 I have put together an etherpad with the current plan and will be
 keeping the etherpad updated as we continue to firm out the details:
 https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup
 
 I need to have a good idea how many people are planning to participate
 sooner, rather than later, so that I can make sure we have a big enough
 room.  So, if you think you are going to be able to make it please add
 your name to the 'Planned Attendees' list.
 
 Again, we will also use Google Hangout to virtually include those who
 cannot be physically present.  I have a space in the etherpad to include
 your name if you wish to join that way.
 
 I look forward to another successful meet-up with all of you!

When your midcycle sprint details are ready, don't forget to add them to:

https://wiki.openstack.org/wiki/Sprints

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-20 Thread Thierry Carrez
Kyle Mestery wrote:
 We're in the process of writing a spec for this now, but we first
 wanted community feedback. Also, it's on the TC agenda for next week I
 believe, so once we get signoff from the TC, we'll propose the spec.

Frankly, I don't think the TC really has to sign-off on what seems to be
a reorganization of code within a single program. We might get involved
if this is raised as a cross-project issue, but otherwise I don't think
you need to wait for a formal TC sign-off.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-20 Thread Russell Bryant
On 11/20/2014 05:43 AM, Thierry Carrez wrote:
 Kyle Mestery wrote:
 We're in the process of writing a spec for this now, but we first
 wanted community feedback. Also, it's on the TC agenda for next week I
 believe, so once we get signoff from the TC, we'll propose the spec.
 
 Frankly, I don't think the TC really has to sign-off on what seems to be
 a reorganization of code within a single program. We might get involved
 if this is raised as a cross-project issue, but otherwise I don't think
 you need to wait for a formal TC sign-off.
 

As proposed, I agree.  I appreciate the opportunity to sanity check, though.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-11-20 Thread Samuel Bercovici
Hi,

Per discussion I had at OpenStack Summit/Paris with Brandon and Doug, I would 
like to remind everyone why we choose to follow a model where pools and 
listeners are shared (many to many relationships).

Use Cases:
1. The same application is being exposed via different LB objects. 
For example: users coming from the internal private organization network, 
have an LB1(private_VIP) -- Listener1(TLS) --Pool1 and user coming from the 
internet, have LB2(public_vip)--Listener1(TLS)--Pool1. 
This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) -- 
Listener1(TLS) --Pool1 and LB_v6(ipv6_VIP) -- Listener1(TLS) --Pool1
The operator would like to be able to manage the pool membership in cases of 
updates and error in a single place.

2. The same group of servers is being used via different listeners optionally 
also connected to different LB objects.
For example: users coming from the internal private organization network, 
have an LB1(private_VIP) -- Listener1(HTTP) --Pool1 and user coming from the 
internet, have LB2(public_vip)--Listener2(TLS)--Pool1. 
The LBs may use different flavors as LB2 needs TLS termination and may prefer a 
different stronger flavor.
The operator would like to be able to manage the pool membership in cases of 
updates and error in a single place.

3. The same group of servers is being used in several different L7_Policies 
connected to a listener. Such listener may be reused as in use case 1.
For example: LB1(VIP1)--Listener_L7(TLS)
|
+--L7_Policy1(rules..)--Pool1
|
+--L7_Policy2(rules..)--Pool2
|
+--L7_Policy3(rules..)--Pool1
|
+--L7_Policy3(rules..)--Reject


I think that the key issue handling correctly the provisioning state and 
the operation state in a many to many model.
This is an issue as we have attached status fields to each and every object in 
the model. 
A side effect of the above is that to understand the provisioning/operation 
status one needs to check many different objects.

To remedy this, I would like to turn all objects besides the LB to be logical 
objects. This means that the only place to manage the status/state will be on 
the LB object.
Such status should be hierarchical so that logical object attached to an LB, 
would have their status consumed out of the LB object itself (in case of an 
error).
We also need to discuss how modifications of a logical object will be 
rendered to the concrete LB objects.
You may want to revisit 
https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r
 the Logical Model + Provisioning Status + Operation Status + Statistics for 
a somewhat more detailed explanation albeit it uses the LBaaS v1 model as a 
reference.

Regards,
-Sam.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Introducing 'wrapt' to taskflow breaks Jenkins builds on stable branches

2014-11-20 Thread Mike Kolesnik
Hi, 

Currently stable branch Jenkins builds are failing due to the error: 
Syncing /opt/stack/new/taskflow/requirements-py3.txt 
'wrapt' is not a global requirement but it should be,something went wrong 

It's my understanding that this is a side effect from your change in taskflow: 
https://review.openstack.org/#/c/129507/ 

This is currently blocking (amongst other things) a backport of a security fix: 
https://review.openstack.org/#/c/135624/ 

Joshua - Would you be so kind as to investigate this? 

Kind Regards, 
Mike 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Abandon Old LBaaS V2 Review

2014-11-20 Thread Evgeny Fedoruk
Thanks for a reminder, Brandon.
Abandoned

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Thursday, November 20, 2014 12:39 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron][lbaas] Abandon Old LBaaS V2 Review

Evgeny,
Since change sets got moved to the feature branch, this review has remained on 
master.  It needs to be abandoned:

https://review.openstack.org/#/c/109849/

Thanks,
Brandon

On Mon, 2014-11-17 at 12:31 -0800, Stephen Balukoff wrote:
 Awesome!
 
 On Mon, Nov 10, 2014 at 9:10 AM, Susanne Balle sleipnir...@gmail.com
 wrote:
 Works for me. Susanne
 
 On Mon, Nov 10, 2014 at 10:57 AM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting
 
 That is updated for lbaas and advanced services with
 the new times.
 
 Thanks,
 Brandon
 
 On Mon, 2014-11-10 at 11:07 +, Doug Wiegley wrote:
  #openstack-meeting-4
 
 
   On Nov 10, 2014, at 10:33 AM, Evgeny Fedoruk
 evge...@radware.com wrote:
  
   Thanks,
   Evg
  
   -Original Message-
   From: Doug Wiegley [mailto:do...@a10networks.com]
   Sent: Friday, November 07, 2014 9:04 PM
   To: OpenStack Development Mailing List
   Subject: [openstack-dev] [neutron][lbaas] meeting
 day/time change
  
   Hi all,
  
   Neutron LBaaS meetings are now going to be
 Tuesdays at 16:00 UTC.
  
   Safe travels.
  
   Thanks,
   Doug
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-20 Thread Jay Pipes

Hi Sergey! Comments inline.

On 11/20/2014 05:25 AM, Sergey Vasilenko wrote:

Nor should it, IMO. Other than the Neutron dhcp-agent, all OpenStack
services that run on a controller node are completely stateless.
Therefore, I don't see any reason to use corosync/pacemaker for
management of these resources.

I see following reasons for managing neutron agents by Pacemaker:


Completely agree with you here for the Neutron agents, since they are 
not the same as the other OpenStack controller services. The Neutron 
agents keep state, and therefore are appropriate for management by 
Pacemaker, IMO.



  * *co-location* between resources. For example, L3 and DHCP agents
should be run only on nodes, that has properly work openvswitch (or
another L2) agent. Then L2-agent works wrong -- L3 and DHCP agents
should be stopped immediately. Because Neutron don't control this
situation and can allocate some resources (router or subnet) to an
agent.
  * extended *monitoring*.  Traditional OS init/upstart subsystem allow
only simple status checking (may be exclude systemd). Now we have
situation, for example with neutron agents, then some of
agent pretends to well-working. But in really, do nothing.
(unfortunately, openstack developed as is) Such agent should be
immediately restarted. Our Neutron team now works on internal
health-checking feature for agents, and I hope, that this will
implemented in 6.1. For example we can make simple checking (pid
found, process started) every 10sec, and more deep (RMQ connection,
internal health-checking) -- mo rare.
  * No different business-logic for different OS. We can use one OCF
script for Ubuntu, Centos, debian, etc...
  * handle cluster partitioning situation.

haproxy should just spread the HTTP request load evenly across all
API services and things should be fine, allowing haproxy's http
healthcheck monitoring to handle the simple service status checks.

Just HTTP checking not enough. In the future will be better make more
deep checking personal for each openstack service.


For endpoints that need more than an HTTP check, absolutely, you are 
correct. But, in real life, I haven't really seen much need for more 
than an HTTP check for the controller services *except for the Neutron 
agents*.


All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Where should Schema files live?

2014-11-20 Thread Sandy Walsh
Hey y'all,

To avoid cross-posting, please inform your -infra / -operations buddies about 
this post. 

We've just started thinking about where notification schema files should live 
and how they should be deployed. Kind of a tricky problem.  We could really use 
your input on this problem ...

The assumptions:
1. Schema files will be text files. They'll live in their own git repo 
(stackforge for now, ideally oslo eventually). 
2. Unit tests will need access to these files for local dev
3. Gating tests will need access to these files for integration tests
4. Many different services are going to want to access these files during 
staging and production. 
5. There are going to be many different versions of these files. There are 
going to be a lot of schema updates. 

Some problems / options:
a. Unlike Python, there is no simple pip install for text files. No version 
control per se. Basically whatever we pull from the repo. The problem with a 
git clone is we need to tweak config files to point to a directory and that's a 
pain for gating tests and CD. Could we assume a symlink to some well-known 
location?
a': I suppose we could make a python installer for them, but that's a pain 
for other language consumers.
b. In production, each openstack service could expose the schema files via 
their REST API, but that doesn't help gating tests or unit tests. Also, this 
means every service will need to support exposing schema files. Big 
coordination problem.
c. In production, We could add an endpoint to the Keystone Service Catalog to 
each schema file. This could come from a separate metadata-like service. Again, 
yet-another-service to deploy and make highly available. 
d. Should we make separate distro packages? Install to a well known location 
all the time? This would work for local dev and integration testing and we 
could fall back on B and C for production distribution. Of course, this will 
likely require people to add a new distro repo. Is that a concern?

Personally, I'm leaning towards option D but I'm not sure what the implications 
are. 

We're early in thinking about these problems, but would like to start the 
conversation now to get your opinions. 

Look forward to your feedback.

Thanks
-Sandy




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Counting resources

2014-11-20 Thread Salvatore Orlando
Aloha guardians of the API!

I haven recently* reviewed a spec for neutron [1] proposing a distinct URI
for returning resource count on list operations.
This proposal is for selected neutron resources, but I believe the topic is
general enough to require a guideline for the API working group. Your
advice is therefore extremely valuable.

In a nutshell the proposal is to retrieve resource count in the following
way:
GET /prefix/resource_name/count

In my limited experience with RESTful APIs, I've never encountered one that
does counting in this way. This obviously does not mean it's a bad idea.
I think it's not great from a usability perspective to require two distinct
URIs to fetch the first page and then the total number of elements. I
reckon the first response page for a list operation might include also the
total count. For example:

{'resources': [{meh}, {meh}, {meh_again}],
 'resource_count': 55
 link_to_next_page}

I am however completely open to consider other alternatives.
What is your opinion on this matter?

Regards,
Salvatore


* it's been 10 days now

[1] https://review.openstack.org/#/c/102199/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Thierry Carrez
Hi everyone,

TL;DR:
I propose we turn the weekly project/release meeting timeslot
(Tuesdays at 21:00 UTC) into a weekly cross-project meeting, to
discuss cross-project topics with all the project leadership, rather
than keep it release-specific.


Long version:

Since the dawn of time (August 28, 2010 !), there has always been a
project meeting on Tuesdays at 21:00 UTC. It used to be a all-hands
meeting, then it turned more into a release management meeting. With the
addition of more integrated projects, all the meeting time was spent in
release status updates and there was no time to discuss project-wide
issues anymore.

During the Juno cycle, we introduced 1:1 sync points[1] for project
release liaisons (usually PTLs) to synchronize their status with the
release management team /outside/ of the meeting time. That freed time
to discuss integrated-release-wide problems and announcements during the
meeting itself.

Looking back to the Juno meetings[2], it's quite obvious that the
problems we discussed were not all release-management-related, though,
and that we had free time left. So I think it would be a good idea in
Kilo to recognize that and clearly declare that meeting the weekly
cross-project meeting. There we would discuss release-related issues if
needed, but also all the others cross-project hot topics of the day on
which a direct discussion can help making progress.

The agenda would be open (updated directly on the wiki and edited/closed
by the chair a few hours before the meeting to make sure everyone knows
what will be discussed). The chair (responsible for vetting/postponing
agenda points and keeping the discussion on schedule) could rotate.

During the Juno cycle we also introduced the concept of Cross-Project
Liaisons[3], as a way to scale the PTL duties to a larger group of
people and let new leaders emerge from our community. Those CPLs would
be encouraged to participate in the weekly cross-project meeting
(especially when a topic in their domain expertise is discussed), and
the meeting would be open to all anyway (as is the current meeting).

This is mostly a cosmetic change: update the messaging around that
meeting to make it more obvious that it's not purely about the
integrated release and that it is appropriate to put other types of
cross-project issues on the agenda. Let me know on this thread if that
sounds like a good idea, and we'll make the final call at next week
meeting :)

[1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
[2] http://eavesdrop.openstack.org/meetings/project/2014/
[3] https://wiki.openstack.org/wiki/CrossProjectLiaisons

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Russell Bryant
On 11/20/2014 08:55 AM, Thierry Carrez wrote:
 This is mostly a cosmetic change: update the messaging around that
 meeting to make it more obvious that it's not purely about the
 integrated release and that it is appropriate to put other types of
 cross-project issues on the agenda. Let me know on this thread if that
 sounds like a good idea, and we'll make the final call at next week
 meeting :)

+1 :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Jay Pipes

On 11/20/2014 08:47 AM, Salvatore Orlando wrote:

Aloha guardians of the API!


LOL :)


I haven recently* reviewed a spec for neutron [1] proposing a distinct
URI for returning resource count on list operations.
This proposal is for selected neutron resources, but I believe the topic
is general enough to require a guideline for the API working group. Your
advice is therefore extremely valuable.

In a nutshell the proposal is to retrieve resource count in the
following way:
GET /prefix/resource_name/count

In my limited experience with RESTful APIs, I've never encountered one
that does counting in this way. This obviously does not mean it's a bad
idea.
I think it's not great from a usability perspective to require two
distinct URIs to fetch the first page and then the total number of
elements. I reckon the first response page for a list operation might
include also the total count. For example:

{'resources': [{meh}, {meh}, {meh_again}],
  'resource_count': 55
  link_to_next_page}


This is (almost) what I would suggest as well. Instead of 
link_to_next_page, I would use a _links object per JSON+HAL (see [2] 
for an ongoing discussion about this exact thing).


The rationale for my opinion is that a URI like GET 
/$collection_name/count seems to indicate that count is a subresource 
of $collection_name. But count is not a subresource, but rather an 
attribute of $collection_name itself. Therefore, it more naturally 
belongs in the returned document of GET /$collection_name.


I could also support something like:

GET /$collection_name?include_count=1

that would use a query parameter to trigger whether to include the 
somewhat expensive operation to return the total count of records that 
would be returned without any limit due to pagination.


Note that this is the same reason why I despise the:

GET /$collection_name/detail

stuff in the Nova API. detail is not a subresource. It's simply a 
trigger to show more detailed information in the response of GET 
/$collection_name, and therefore should be either a query parameter 
(e.g. ?detailed=1) or should not exist at all.


Best,
-jay

[2] 
https://review.openstack.org/#/c/133660/7/guidelines/representation_structure.rst



I am however completely open to consider other alternatives.
What is your opinion on this matter?

Regards,
Salvatore


* it's been 10 days now

[1] https://review.openstack.org/#/c/102199/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Sean Dague
On 11/20/2014 09:12 AM, Jay Pipes wrote:
 On 11/20/2014 08:47 AM, Salvatore Orlando wrote:
 Aloha guardians of the API!
 
 LOL :)
 
 I haven recently* reviewed a spec for neutron [1] proposing a distinct
 URI for returning resource count on list operations.
 This proposal is for selected neutron resources, but I believe the topic
 is general enough to require a guideline for the API working group. Your
 advice is therefore extremely valuable.

 In a nutshell the proposal is to retrieve resource count in the
 following way:
 GET /prefix/resource_name/count

 In my limited experience with RESTful APIs, I've never encountered one
 that does counting in this way. This obviously does not mean it's a bad
 idea.
 I think it's not great from a usability perspective to require two
 distinct URIs to fetch the first page and then the total number of
 elements. I reckon the first response page for a list operation might
 include also the total count. For example:

 {'resources': [{meh}, {meh}, {meh_again}],
   'resource_count': 55
   link_to_next_page}
 
 This is (almost) what I would suggest as well. Instead of
 link_to_next_page, I would use a _links object per JSON+HAL (see [2]
 for an ongoing discussion about this exact thing).
 
 The rationale for my opinion is that a URI like GET
 /$collection_name/count seems to indicate that count is a subresource
 of $collection_name. But count is not a subresource, but rather an
 attribute of $collection_name itself. Therefore, it more naturally
 belongs in the returned document of GET /$collection_name.
 
 I could also support something like:
 
 GET /$collection_name?include_count=1

It feels like the right thing is a standard 'more' definition included
in the next link:

{'resources': [{meh}, {meh}, {meh_again}],
 'next': {link: , total: somebignumber}}

Or the way gerrit does it:

...
{project:openstack/nova,branch:stable/icehouse,topic:bug/1250751,id:I2755c59b4db736151000dae351fd776d3c15ca39,number:124161,subject:Improve
shared storage checks for live migration,owner:{name:Jonathan
Proulx,email:j...@jonproulx.com,username:jproulx},url:https://review.openstack.org/124161,commitMessage:Improve
shared storage checks for live migration\n\nDue to an assumption that
libvirt live migrations work only when both\ninstance path and disk data
is shared between source and destination\nhosts (e.g. libvirt instances
directory is on NFS), instance disks are\nremoved from shared storage
when instance path is not shared (e.g. Ceph\nRBD backend is
enabled).\n\nDistinguish cases that require shared instance drive and
shared libvirt\ninstance directory. Reflect the fact that RBD backed
instances have\nshared instance drive (and no shared libvirt instance
directory) in the\nrelevant conditionals.\n\nUpgradeImpact: Live
migrations from or to a compute host running a\nversion of Nova
pre-dating this commit are disabled in order to\neliminate possibility
of data loss. Upgrade Nova on both the source and\nthe target node
before attempting a live migration.\n\nCloses-bug: 1250751\nCloses-bug:
1314526\nCo-authored-by: Ryan Moe
\u003cr...@mirantis.com\u003e\nCo-authored-by: Yaguang Tang
\u003cyaguang.t...@canonical.com\u003e\nSigned-off-by: Dmitry Borodaenko
\u003cdborodae...@mirantis.com\u003e\nChange-Id:
I2755c59b4db736151000dae351fd776d3c15ca39\n(cherry picked from commit
bc45c56f102cdef58840e02b609a89f5278e8cce)\n,createdOn:1411675113,lastUpdated:1415792620,sortKey:0031135f0001e501,open:true,status:NEW}
{project:openstack/nova,branch:stable/icehouse,id:I707a497bf534a88d55ba387b3f24f5eda7171f3a,number:126212,subject:Translation
update for German,owner:{name:Thomas
Goirand,email:tho...@goirand.fr,username:thomas-goirand},url:https://review.openstack.org/126212,commitMessage:Translation
update for German\n\nThis patch has been sent to the Debian bug tracker,
and I am\nnow forwarding it upstream.\n\nChange-Id:
I707a497bf534a88d55ba387b3f24f5eda7171f3a\n,createdOn:1412565399,lastUpdated:1415792006,sortKey:003113550001ed04,open:true,status:NEW}
{type:stats,rowCount:500,runTimeMilliseconds:758}

Basically a tail record that would tell you some additionally useful
information about the query, that includes rows returned, timing, next
link, and estimated total rows (knowing that's not possible to be accurate)

 that would use a query parameter to trigger whether to include the
 somewhat expensive operation to return the total count of records that
 would be returned without any limit due to pagination.
 
 Note that this is the same reason why I despise the:
 
 GET /$collection_name/detail

Agreed.

 stuff in the Nova API. detail is not a subresource. It's simply a
 trigger to show more detailed information in the response of GET
 /$collection_name, and therefore should be either a query parameter
 (e.g. ?detailed=1) or should not exist at all.
 
 Best,
 -jay
 
 [2]
 https://review.openstack.org/#/c/133660/7/guidelines/representation_structure.rst
 
 
 I am however completely open to consider other 

Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Christopher Yeoh
On Thu, 20 Nov 2014 14:47:16 +0100
Salvatore Orlando sorla...@nicira.com wrote:

 Aloha guardians of the API!
 
 I haven recently* reviewed a spec for neutron [1] proposing a
 distinct URI for returning resource count on list operations.
 This proposal is for selected neutron resources, but I believe the
 topic is general enough to require a guideline for the API working
 group. Your advice is therefore extremely valuable.
 
 In a nutshell the proposal is to retrieve resource count in the
 following way:
 GET /prefix/resource_name/count
 
 In my limited experience with RESTful APIs, I've never encountered
 one that does counting in this way. This obviously does not mean it's
 a bad idea. I think it's not great from a usability perspective to
 require two distinct URIs to fetch the first page and then the total
 number of elements. I reckon the first response page for a list
 operation might include also the total count. For example:
 
 {'resources': [{meh}, {meh}, {meh_again}],
  'resource_count': 55
  link_to_next_page}
 
 I am however completely open to consider other alternatives.
 What is your opinion on this matter?

FWIW there is a nova spec proposed for counting resources as
well (I think it might have been previously approved for Juno). 

https://review.openstack.org/#/c/134279/

I haven't compared the two, but I can't think of a reason we'd
need to be any different between projects here.

Regards,

Chris

 
 Regards,
 Salvatore
 
 
 * it's been 10 days now
 
 [1] https://review.openstack.org/#/c/102199/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-11-20 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow
Friday UTC . 

We encourage cloud operators and those who use the REST API such as
SDK developers and others who and are interested in the future of the
API to participate.

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 10:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL]Re-thinking Fuel Client

2014-11-20 Thread Dean Troyer
[pardon my jumping in like kibo, someone mentioned 'client' ;) ]

On Thu, Nov 20, 2014 at 3:01 AM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 0) Rename fuelclient into python-fuelclient like any other OpenStack
 clients when moving it to a separate repo.
 1) Use cliff as a cli library. AFAIU it is a kind of unofficial standard
 for OpenStack clients for future. At least python-openstackclient uses
 cliff. Correct me if I am wrong.


Neutron client also used cliff, but in a different manner than
OpenStackClient; just using cliff doesn't necessarily make things work
similar to other clients.


 2) Follow common OpenStack practice for naming files and directories in a
 project (shell.py, api, object, etc). I am not sure whether such a common
 practice exists, but we again can follow python-openstackclient naming
 model.


OSC's model is to put the CLI command handlers in
openstackclient.api-name.vmajor-version and, where necessary, the REST
API in openstackclient.api.api-name_vmajor-version.

3) Use oslo for auth stuff (Fuel uses keystone at the moment) and wherever
 it is suitable.


Are you referring to the apiclient in Oslo Incubator?  I believe that has
been tagged to not graduate and left as-is for now.  I would suggest using
the Keystone client Session and authentication plugins if you want a
stand-alone client as you will get SAML auth, and whatever else is being
developed, for free.

Alternatively you could write your client as just a library and add an
OpenStackClient plugin[0] to leverage the existing CLI.  In this case, OSC
handles all of the auth and session overhead, you just worry about the REST
handlers.

dt

[0] https://github.com/dtroyer/python-oscplugin is the original template,
https://git.openstack.org/cgit/stackforge/python-congressclient/ is a
current example of a lib+plugin repo.

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL]Re-thinking Fuel Client

2014-11-20 Thread Roman Prykhodchenko
So as I mentioned in the etherpad I published a spec here 
https://review.openstack.org/#/c/135915/ 
https://review.openstack.org/#/c/135915/
Everyone is welcome to review it.

@Juan: For sure we should use keystone for auth. I might have stated it wrong. 
What I mean is that we should use similar approaches for interacting with 
end-user as most of openstack clients do.


 On 20 Nov 2014, at 11:21, Juan Antonio Osorio jaosor...@gmail.com wrote:
 
 Hi,
 
 As a Fuel user I would like to give some input.
 
 0) This would make fuel adhere to the standards for clients, I agree with 
 this change.
 1) cliff is not really necessary, but is actually being adopted by some other 
 projects (other than OpenStack unified client) I initially proposed it for 
 barbican and implemented it, so if it helps, I could do the same work here. 
 If other people are interested in this I could submit a blueprint.
 3) What's the benefit of using oslo auth? I'm not very familiar with it; I'm 
 actually more familiar with other projects using keystone.
 
 On Thu, Nov 20, 2014 at 11:01 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com mailto:vkozhuka...@mirantis.com wrote:
 Roman,
 
 I am absolutely +1 for re-designing fuel client and bringing it out of 
 fuel-web repo. 
 
 If you ask me, it is also important to make new design following kind of 
 standard just to avoid re-re-designing it in the foreseeable future. Some 
 points here are:
 0) Rename fuelclient into python-fuelclient like any other OpenStack clients 
 when moving it to a separate repo.
 1) Use cliff as a cli library. AFAIU it is a kind of unofficial standard for 
 OpenStack clients for future. At least python-openstackclient uses cliff. 
 Correct me if I am wrong.  
 2) Follow common OpenStack practice for naming files and directories in a 
 project (shell.py, api, object, etc). I am not sure whether such a common 
 practice exists, but we again can follow python-openstackclient naming model.
 3) Use oslo for auth stuff (Fuel uses keystone at the moment) and wherever it 
 is suitable.
 
 
 
 
 
 Vladimir Kozhukalov
 
 On Mon, Nov 17, 2014 at 8:08 PM, Roman Prykhodchenko 
 rprikhodche...@mirantis.com mailto:rprikhodche...@mirantis.com wrote:
 Hi folks!
 
 I’ve made several internal discussions with Łukasz Oleś and Igor Kalnitsky 
 and decided that the existing Fuel Client has to be redesigned.
 The implementation of the client we have at the moment does not seem to be 
 compliant with most of the use cases people have in production and cannot be 
 used as a library-wrapper for FUEL’s API.
 
 We’ve came of with a draft of our plan about redesigning Fuel Client which 
 you can see here: https://etherpad.openstack.org/p/fuelclient-redesign 
 https://etherpad.openstack.org/p/fuelclient-redesign
 Everyone is welcome to add their notes, suggestions basing on their needs and 
 use cases.
 
 The next step is to create a detailed spec and put it to everyone’s review.
 
 
 
 - romcheg
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Juan Antonio Osorio R.
 e-mail: jaosor...@gmail.com mailto:jaosor...@gmail.com
 
 All truly great thoughts are conceived by walking.
 - F.N.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Ian Wells
On 19 November 2014 17:19, Sukhdev Kapur sukhdevka...@gmail.com wrote:

 Folks,

 Like Ian, I am jumping in this very late as well - as I decided to travel
 Europe after the summit, just returned back and  catching up :-):-)

 I have noticed that this thread has gotten fairly convoluted and painful
 to read.

 I think Armando summed it up well in the beginning of the thread. There
 are basically three written proposals (listed in Armando's email - I pasted
 them again here).

 [1] https://review.openstack.org/#/c/134179/
 [2] https://review.openstack.org/#/c/100278/
 [3] https://review.openstack.org/#/c/93613/

 On this thread I see that the authors of first two proposals have already
 agreed to consolidate and work together. This leaves with two proposals.
 Both Ian and I were involved with the third proposal [3] and have
 reasonable idea about it. IMO, the use cases addressed by the third
 proposal are very similar to use cases addressed by proposal [1] and [2]. I
 can volunteer to  follow up with Racha and Stephen from Ericsson to see if
 their use case will be covered with the new combined proposal. If yes, we
 have one converged proposal. If no, then we modify the proposal to
 accommodate their use case as well. Regardless, I will ask them to review
 and post their comments on [1].

 Having said that, this covers what we discussed during the morning session
 on Friday in Paris. Now, comes the second part which Ian brought up in the
 afternoon session on Friday.
 My initial reaction was, when heard his use case, that this new
 proposal/API should cover that use case as well (I am being bit optimistic
 here :-)). If not, rather than going into the nitty gritty details of the
 use case, let's see what modification is required to the proposed API to
 accommodate Ian's use case and adjust it accordingly.


As far as I can see, the question of whether you mark a network as 'edge'
and therefore bridged to something you don't know about (my proposal) or
whether you attach a block to it that, behinds the scenes, bridges to
something you don't know about (Maruti's, if you take out all of the
details of *what* is being attached to from the API) are basically as good
as each other.

My API parallels the way that provider networks are used, because that's
what I had in mind at the time; Maruti's uses a block rather than marking
the network, and the only real difference that makes is that (a) you can
attach many networks to one block (which doesn't really seem to bring
anything special) and (b) uses a port to connect to the network (which is
not massively helpful because there's nothing sensible you can put on the
port; there may be many things behind the gateway).  At this point it
becomes a completely religious argument about which is better.  I still
prefer mine, from gut feel, but they are almost exactly equivalent at this
point.

Taking your statement above of 'let's take out the switch port stuff' then
Maruti's use case would need to explain where that data goes. The point I
made is that it becomes a Sisyphean task (endless and not useful) to
introduce a data model and API to introduce this into Neutron via an API
and that's what I didn't want to do.  Can we address that question?

-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Duplicate bugs vs reopening unrelated bugs

2014-11-20 Thread Anastasia Urlapova
Dmitry,
thank you that you raised this important discussion and added section @Test
and report bug.
From my side I want notice, that we have many unclear bugs and sometimes is
very hard to verify such bugs.
The awesome example: https://bugs.launchpad.net/fuel/+bug/1357298
there are many questions about verification:
- configuration
- env/iso version
- why this option @haven't be used here
- where is it should be
and so on.

Discussion about issue template you can find in community @[openstack-dev] [
QA] Proposal: A launchpad bug description template
will be great if we will use such template in our launchpad at least as
good practice while it are integrating in community.

Thank you,
 Nastya.

On Wed, Nov 19, 2014 at 4:30 AM, Dmitry Borodaenko dborodae...@mirantis.com
 wrote:

 Fuelers,

 I've added the following paragraph into Fuel's How to contribute Wiki page:
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Test_and_report_bugs

 Before creating a new bug, it's always a good idea to check if a bug
 like that already exists in the project, and update an existing bug
 instead of creating something that will later have to be discarded as
 a duplicate. However, it is important to make sure that both bugs are
 really related, dealing with a duplicate is much easier than
 untangling several unrelated lines of investigation mixed into one
 bug. Do not ever reopen bugs with generic catch-all titles like
 Horizon crashed or HA doesn't work that can cover whole ranges of
 root causes. Do not create new bugs with such titles, either: be as
 specific about the nature of the problem as you can.

 Overly generic bugs and the related problem of reopening or hijacking
 unrelated bugs has been an anti-pattern in our Launchpad bugs lately,
 please take above into consideration when reporting bugs.

 Thank you,

 --
 Dmitry Borodaenko

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Anita Kuno
On 11/20/2014 09:04 AM, Russell Bryant wrote:
 On 11/20/2014 08:55 AM, Thierry Carrez wrote:
 This is mostly a cosmetic change: update the messaging around that
 meeting to make it more obvious that it's not purely about the
 integrated release and that it is appropriate to put other types of
 cross-project issues on the agenda. Let me know on this thread if that
 sounds like a good idea, and we'll make the final call at next week
 meeting :)
 
 +1 :-)
 
I too think this is a good idea.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL]Re-thinking Fuel Client

2014-11-20 Thread Roman Prykhodchenko
Dean, thank you for your input.

 On 20 Nov 2014, at 15:43, Dean Troyer dtro...@gmail.com wrote:
 
 [pardon my jumping in like kibo, someone mentioned 'client' ;) ]

Now I know for sure how to summon you :)

 
 On Thu, Nov 20, 2014 at 3:01 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com mailto:vkozhuka...@mirantis.com wrote:
 0) Rename fuelclient into python-fuelclient like any other OpenStack clients 
 when moving it to a separate repo.
 1) Use cliff as a cli library. AFAIU it is a kind of unofficial standard for 
 OpenStack clients for future. At least python-openstackclient uses cliff. 
 Correct me if I am wrong.  
 
 Neutron client also used cliff, but in a different manner than 
 OpenStackClient; just using cliff doesn't necessarily make things work 
 similar to other clients.

To be honest I didn’t look at internals of OpenStackClient but have some 
experience with Ironic client instead. Anyway, this is still a topic which need 
more discussion and investigation.

  
 2) Follow common OpenStack practice for naming files and directories in a 
 project (shell.py, api, object, etc). I am not sure whether such a common 
 practice exists, but we again can follow python-openstackclient naming model.
 
 OSC's model is to put the CLI command handlers in 
 openstackclient.api-name.vmajor-version and, where necessary, the REST 
 API in openstackclient.api.api-name_vmajor-version.

For the versioning we’ll have to use slightly different approach due to a 
different versioning in Fuel, I’m afraid. However, I agree that we should 
follow naming conventions and practices from other OpenStack projects in order 
to make it easier to other folks to dive in.

 
 3) Use oslo for auth stuff (Fuel uses keystone at the moment) and wherever it 
 is suitable.
 
 Are you referring to the apiclient in Oslo Incubator?  I believe that has 
 been tagged to not graduate and left as-is for now.  I would suggest using 
 the Keystone client Session and authentication plugins if you want a 
 stand-alone client as you will get SAML auth, and whatever else is being 
 developed, for free. 

That sounds like a good idea.

 
 Alternatively you could write your client as just a library and add an 
 OpenStackClient plugin[0] to leverage the existing CLI.  In this case, OSC 
 handles all of the auth and session overhead, you just worry about the REST 
 handlers.

I think this is not very reasonable because Fuel does not provide any service 
in OpenStack but instead allows to set up and manage OpenStack clusters.

 
 dt
 
 [0] https://github.com/dtroyer/python-oscplugin 
 https://github.com/dtroyer/python-oscplugin is the original template, 
 https://git.openstack.org/cgit/stackforge/python-congressclient/ 
 https://git.openstack.org/cgit/stackforge/python-congressclient/ is a 
 current example of a lib+plugin repo.
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com mailto:dtro...@gmail.com
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Online midcycle meetup

2014-11-20 Thread Brad Topol
Angus,

This may sound crazy but  what if in addition to having the online meetup 
you denoted two different locations as an optional physical meetup?   That 
way you would get some of the benefits of having folks meet together in 
person while not forcing everyone to have to travel across the globe. So 
for example, if you had one location in Raleigh and one wherever else 
folks are co-located  you could still get the benefits of having some 
group of folks collaborating face to face.

Just a thought.

--Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Angus Salkeld asalk...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   11/19/2014 06:56 PM
Subject:[openstack-dev] [Heat] Online midcycle meetup



Hi all

As agreed from our weekly meeting we are going to try an online meetup.

Why?

We did a poll (https://doodle.com/b9m4bf8hvm3mna97#table) and it is
split quite evenly by location. The story I am getting from the community 
is:

We want a midcycle meetup if it is nearby, but are having trouble getting 
finance
to travel far.

Given that the Heat community is evenly spread across the globe this 
becomes
impossible to hold without excluding a significant group.

So let's try and figure out how to do an online meetup!
(but let's not spend 99% of the time arguing about the software to use 
please)

I think more interesting is:

1) How do we minimize the time zone pain?
2) Can we make each session really focused so we are productive.
3) If we do this right it does not have to be midcycle but when ever we 
want.

I'd be interested in feedback from others that have tried this too.

-Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Morgan Fainberg
On Thursday, November 20, 2014, Anita Kuno ante...@anteaya.info wrote:

 On 11/20/2014 09:04 AM, Russell Bryant wrote:
  On 11/20/2014 08:55 AM, Thierry Carrez wrote:
  This is mostly a cosmetic change: update the messaging around that
  meeting to make it more obvious that it's not purely about the
  integrated release and that it is appropriate to put other types of
  cross-project issues on the agenda. Let me know on this thread if that
  sounds like a good idea, and we'll make the final call at next week
  meeting :)
 
  +1 :-)
 
 I too think this is a good idea.


+1 on this plan, especially involving the CPLs in the meeting since it
isn't strictly a release management meeting.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Ruby Loo
Hi, we had an interesting discussion on IRC about whether or not we should
be maintaining backwards compatibility within a release cycle. In this
particular case, we introduced a new decorator in this kilo cycle, and were
discussing the renaming of it, and whether it needed to be backwards
compatible to not break any out-of-tree driver using master.

Some of us (ok, me or I) think it doesn't make sense to make sure that
everything we do is backwards compatible. Others disagree and think we
should, or at least strive for 'must be' backwards compatible with the
caveat that there will be cases where this isn't
feasible/possible/whatever. (I hope I captured that correctly.)

Although I can see the merit (well, sort of) of trying our best, trying
doesn't mean 'must', and if it is 'must', who decides what can be exempted
from this, and how will we communicate what is exempted, etc?

Thoughts?

--ruby
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [storyboard] Goodbye Infra on Launchpad, Hello Infra on StoryBoard

2014-11-20 Thread Brad Topol
This looks very cool!!!  Is it ready for the rest of the other projects to 
start using as well?

Thanks,

Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Michael Krotscheck krotsch...@gmail.com
To: openstack-in...@lists.openstack.org, OpenStack Development 
Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, openst...@lists.openstack.org, 
Date:   11/19/2014 07:23 PM
Subject:[openstack-dev] [infra] [storyboard] Goodbye Infra on 
Launchpad,  Hello Infra on StoryBoard



The OpenStack Infrastructure team has successfully migrated all of the 
openstack-infra project bugs from LaunchPad to StoryBoard. With the 
exception of openstack-ci bugs tracked by elastic recheck, all bugs, 
tickets, and work tracked for OpenStack Infrastructure projects must now 
be submitted and accessed at https://storyboard.openstack.org. If you file 
a ticket on LaunchPad, the Infrastructure team no longer guarantees that 
it will be addressed. Note that only the infrastructure projects have 
moved, no other OpenStack projects have been migrated.

This is part of a long-term plan to migrate OpenStack from Launchpad to 
StoryBoard.  At this point we feel that StoryBoard meets the needs of the 
OpenStack infrastructure team and plan to use this migration to further 
exercise the project while we continue its development.

As you may notice, Development on StoryBoard is ongoing, and we have not 
yet reached feature parity with those parts of LaunchPad which are needed 
for the rest of OpenStack. Contributions are always welcome, and the team 
may be contacted in the #storyboard or #openstack-infra channels on 
freenode, via the openstack-dev list using the [storyboard] subject, or 
via StoryBoard itself by creating a story. Feel free to report any bugs, 
ask any questions, or make any improvement suggestions that you come up 
with at: https://storyboard.openstack.org/#!/project/456

We are always looking for more contributors! If you have skill in 
AngularJS or Pecan, or would like to fill in some of our documentation for 
us, we are happy to accept patches. If your project is interested in 
moving to StoryBoard, please contact us directly. While we are hesitant to 
move new projects to storyboard at this point, we would love working with 
you to determine which features are needed to support you.

Relevant links:
• Storyboard: https://storyboard.openstack.org
• Team Wiki: https://wiki.openstack.org/wiki/StoryBoard
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] [storyboard] Goodbye Infra on Launchpad, Hello Infra on StoryBoard

2014-11-20 Thread Michael Krotscheck
Hey there, Brad!

The answer to your question depends on who you ask. If you ask me, I will say: 
Yes! Feel free to use it all you want! If you ask Thierry or Jim Blair, their 
response will be “Oh please no”.

Our roadmap right now is as follows: 
- 1.1.1: Clean up straggling features from our infra release (most notably 
email and tags).
- 1.2: Support “Feature development”, the ability to track a larger body of 
work and group it into releases.
- 1.3: Support “Bug reporting”, the ability to submit a bug and have it follow 
a triage workflow.

I encourage you to keep tabs on our wiki where we’re doing feature tracking 
(until we can do it in storyboard proper), and judge for yourself when the 
implemented features are sufficient to support your project :). Here’s a link: 
https://wiki.openstack.org/wiki/StoryBoard 
https://wiki.openstack.org/wiki/StoryBoard

Michael

 On Nov 20, 2014, at 7:45 AM, Brad Topol bto...@us.ibm.com wrote:
 
 This looks very cool!!!  Is it ready for the rest of the other projects to 
 start using as well? 
 
 Thanks, 
 
 Brad 
 
 
 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet:  bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift Browser 0.2.0 released

2014-11-20 Thread Martin Geisler
Hi all,

Swift Browser 0.2.0 has been released:

  https://github.com/zerovm/swift-browser/releases/tag/0.2.0

This release adds support for copying objects and deleting containers.
Furthermore, the JavaScript and CSS files are now concatenated and
minified, resulting in a faster page load.

You can find a demo here:

  http://www.zerovm.org/swift-browser/

(Please let me know if these announcements become too frequent. I expect
to release minor versions in 2-4 week intervals in the future.)

-- 
Martin Geisler

http://google.com/+MartinGeisler


pgpuSTTtDrLis.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-20 Thread Kyle Mestery
On Thu, Nov 20, 2014 at 6:42 AM, Russell Bryant rbry...@redhat.com wrote:
 On 11/20/2014 05:43 AM, Thierry Carrez wrote:
 Kyle Mestery wrote:
 We're in the process of writing a spec for this now, but we first
 wanted community feedback. Also, it's on the TC agenda for next week I
 believe, so once we get signoff from the TC, we'll propose the spec.

 Frankly, I don't think the TC really has to sign-off on what seems to be
 a reorganization of code within a single program. We might get involved
 if this is raised as a cross-project issue, but otherwise I don't think
 you need to wait for a formal TC sign-off.


 As proposed, I agree.  I appreciate the opportunity to sanity check, though.

Cool, thanks for the sanity check here. We'll proceed with the spec
and move forward with the proposal.

Kyle

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Salvatore Orlando
The Nova proposal appears to be identical to neutron's, at least from a
consumer perspective.

If I were to pick a winner, I'd follow Sean's advice regarding the 'more'
attribute in responses, and put the total number of resources there; I
would also take Jay's advice of including the total only if requested with
a query param. In this way a user can retrieve the total number of items
regardless of the current pagination index (in my first post I suggested
the total number should be returned only on the first page of results).

Therefore one could ask for a total number of resources with something like
the following:

GET /some_resources?include_total=1

and obtain a response like the following:

{'resources': [{meh}, {meh}, {meh_again}],
  'something': {
   '_links': {'prev': ..., 'next': ...},
   'total': agazillion}
 }

 where the exact structure and naming of 'something' depends on the outcome
of the discussion at [1]

Salvatore

[1]
https://review.openstack.org/#/c/133660/7/guidelines/representation_structure.rst

On 20 November 2014 15:24, Christopher Yeoh cbky...@gmail.com wrote:

 On Thu, 20 Nov 2014 14:47:16 +0100
 Salvatore Orlando sorla...@nicira.com wrote:

  Aloha guardians of the API!
 
  I haven recently* reviewed a spec for neutron [1] proposing a
  distinct URI for returning resource count on list operations.
  This proposal is for selected neutron resources, but I believe the
  topic is general enough to require a guideline for the API working
  group. Your advice is therefore extremely valuable.
 
  In a nutshell the proposal is to retrieve resource count in the
  following way:
  GET /prefix/resource_name/count
 
  In my limited experience with RESTful APIs, I've never encountered
  one that does counting in this way. This obviously does not mean it's
  a bad idea. I think it's not great from a usability perspective to
  require two distinct URIs to fetch the first page and then the total
  number of elements. I reckon the first response page for a list
  operation might include also the total count. For example:
 
  {'resources': [{meh}, {meh}, {meh_again}],
   'resource_count': 55
   link_to_next_page}
 
  I am however completely open to consider other alternatives.
  What is your opinion on this matter?

 FWIW there is a nova spec proposed for counting resources as
 well (I think it might have been previously approved for Juno).

 https://review.openstack.org/#/c/134279/

 I haven't compared the two, but I can't think of a reason we'd
 need to be any different between projects here.

 Regards,

 Chris

 
  Regards,
  Salvatore
 
 
  * it's been 10 days now
 
  [1] https://review.openstack.org/#/c/102199/


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Lucas Alvares Gomes
Hi Ruby,

Thank you for putting this up.

I'm one of the ones think we should try hard (even really hard) to
maintain the compatibility on every commit. I understand that it may
sound naive because I'm sure that sometimes we will break things, but
that doesn't means we shouldn't try.

There may be people running Ironic in a continuous deployment
environment, those are the users of the project and therefor the most
important part of Ironic. Doesn't matter how well written Ironic code
may be if nobody is using it. If we break that user workflow and he's
unhappy that's the ultimate failure.

I also understand that in the project POV we want to have fast
interactions and shiny new features as quick as possible and trying to
be backward compatibility all the time - on every commit - might slow
that down. But in the user POV I believe that he doesn't care much
about all the new features, he would mostly care about the things that
used to work to continue to work for him.

Also the backwards approach between releases and not commits might
work fine in the non-opensource world where the code is kept indoors
until the software is release, but in the opensource world where the
code is out to people to use it all the time it doesn't seem to work
that well.

That's my 2 cents.

Lucas

On Thu, Nov 20, 2014 at 3:38 PM, Ruby Loo rlooya...@gmail.com wrote:
 Hi, we had an interesting discussion on IRC about whether or not we should
 be maintaining backwards compatibility within a release cycle. In this
 particular case, we introduced a new decorator in this kilo cycle, and were
 discussing the renaming of it, and whether it needed to be backwards
 compatible to not break any out-of-tree driver using master.

 Some of us (ok, me or I) think it doesn't make sense to make sure that
 everything we do is backwards compatible. Others disagree and think we
 should, or at least strive for 'must be' backwards compatible with the
 caveat that there will be cases where this isn't feasible/possible/whatever.
 (I hope I captured that correctly.)

 Although I can see the merit (well, sort of) of trying our best, trying
 doesn't mean 'must', and if it is 'must', who decides what can be exempted
 from this, and how will we communicate what is exempted, etc?

 Thoughts?

 --ruby

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Morgan Fainberg
The only thing I want to caution against is making a SQL-specific choice. In 
the case of some other backends, it may not be possible (for an extremely large 
dataset) to get a full count, where SQL does this fairly elegantly. For 
example, LDAP (in some cases) may have an administrative limit that will say 
that no more than 10,000 entries would be returned; likely you’re going to have 
an issue, since you need to issue the query and see how many things match, if 
you hit the overall limit you’ll get the same count every time (but possibly a 
different dataset).

I want to be very careful that we’re not recommending functionality as a 
baseline that should be used as a pattern across all similar APIs, especially 
since we have some backends/storage systems that can’t elegantly always support 
it.

Personally, I like Gerrit’s model (as Sean described) - with the above caveat 
that not all backends support this type of count.

Cheers,
Morgan

 On Nov 20, 2014, at 8:04 AM, Salvatore Orlando sorla...@nicira.com wrote:
 
 The Nova proposal appears to be identical to neutron's, at least from a 
 consumer perspective.
 
 If I were to pick a winner, I'd follow Sean's advice regarding the 'more' 
 attribute in responses, and put the total number of resources there; I would 
 also take Jay's advice of including the total only if requested with a query 
 param. In this way a user can retrieve the total number of items regardless 
 of the current pagination index (in my first post I suggested the total 
 number should be returned only on the first page of results).
 
 Therefore one could ask for a total number of resources with something like 
 the following:
 
 GET /some_resources?include_total=1
 
 and obtain a response like the following:
 
 {'resources': [{meh}, {meh}, {meh_again}],
   'something': {
'_links': {'prev': ..., 'next': ...},
'total': agazillion}
  }
 
  where the exact structure and naming of 'something' depends on the outcome 
 of the discussion at [1]
 
 Salvatore
 
 [1] 
 https://review.openstack.org/#/c/133660/7/guidelines/representation_structure.rst
  
 https://review.openstack.org/#/c/133660/7/guidelines/representation_structure.rst
 
 On 20 November 2014 15:24, Christopher Yeoh cbky...@gmail.com 
 mailto:cbky...@gmail.com wrote:
 On Thu, 20 Nov 2014 14:47:16 +0100
 Salvatore Orlando sorla...@nicira.com mailto:sorla...@nicira.com wrote:
 
  Aloha guardians of the API!
 
  I haven recently* reviewed a spec for neutron [1] proposing a
  distinct URI for returning resource count on list operations.
  This proposal is for selected neutron resources, but I believe the
  topic is general enough to require a guideline for the API working
  group. Your advice is therefore extremely valuable.
 
  In a nutshell the proposal is to retrieve resource count in the
  following way:
  GET /prefix/resource_name/count
 
  In my limited experience with RESTful APIs, I've never encountered
  one that does counting in this way. This obviously does not mean it's
  a bad idea. I think it's not great from a usability perspective to
  require two distinct URIs to fetch the first page and then the total
  number of elements. I reckon the first response page for a list
  operation might include also the total count. For example:
 
  {'resources': [{meh}, {meh}, {meh_again}],
   'resource_count': 55
   link_to_next_page}
 
  I am however completely open to consider other alternatives.
  What is your opinion on this matter?
 
 FWIW there is a nova spec proposed for counting resources as
 well (I think it might have been previously approved for Juno).
 
 https://review.openstack.org/#/c/134279/ 
 https://review.openstack.org/#/c/134279/
 
 I haven't compared the two, but I can't think of a reason we'd
 need to be any different between projects here.
 
 Regards,
 
 Chris
 
 
  Regards,
  Salvatore
 
 
  * it's been 10 days now
 
  [1] https://review.openstack.org/#/c/102199/ 
  https://review.openstack.org/#/c/102199/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Kevin L. Mitchell
On Thu, 2014-11-20 at 08:28 -0800, Morgan Fainberg wrote:
 The only thing I want to caution against is making a SQL-specific
 choice. In the case of some other backends, it may not be possible
 (for an extremely large dataset) to get a full count, where SQL does
 this fairly elegantly. For example, LDAP (in some cases) may have an
 administrative limit that will say that no more than 10,000 entries
 would be returned; likely you’re going to have an issue, since you
 need to issue the query and see how many things match, if you hit the
 overall limit you’ll get the same count every time (but possibly a
 different dataset).

Hmmm…interesting limitation.

 I want to be very careful that we’re not recommending functionality as
 a baseline that should be used as a pattern across all similar APIs,
 especially since we have some backends/storage systems that can’t
 elegantly always support it.
 
 
 Personally, I like Gerrit’s model (as Sean described) - with the above
 caveat that not all backends support this type of count.

How about if we include some sentinel that can be used to indicate that
count is unsupported, to cover cases such as an LDAP backend?  That
could be as simple as allowing a null value.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Sean Dague
I'm looking at the Nova spec, and it seems very taylored to a specific
GUI. I'm also not sure that 17128 errors is more useful than 500+ errors
when presenting to the user (the following in my twitter stream made me
think about that this morning -
https://twitter.com/NINK/status/535299029383380992)

500+ also better describes the significant figures we're talking about here.

-Sean

On 11/20/2014 11:28 AM, Morgan Fainberg wrote:
 The only thing I want to caution against is making a SQL-specific
 choice. In the case of some other backends, it may not be possible (for
 an extremely large dataset) to get a full count, where SQL does this
 fairly elegantly. For example, LDAP (in some cases) may have an
 administrative limit that will say that no more than 10,000 entries
 would be returned; likely you’re going to have an issue, since you need
 to issue the query and see how many things match, if you hit the overall
 limit you’ll get the same count every time (but possibly a different
 dataset).
 
 I want to be very careful that we’re not recommending functionality as a
 baseline that should be used as a pattern across all similar APIs,
 especially since we have some backends/storage systems that can’t
 elegantly always support it.
 
 Personally, I like Gerrit’s model (as Sean described) - with the above
 caveat that not all backends support this type of count.
 
 Cheers,
 Morgan
 
 On Nov 20, 2014, at 8:04 AM, Salvatore Orlando sorla...@nicira.com
 mailto:sorla...@nicira.com wrote:

 The Nova proposal appears to be identical to neutron's, at least from
 a consumer perspective.

 If I were to pick a winner, I'd follow Sean's advice regarding the
 'more' attribute in responses, and put the total number of resources
 there; I would also take Jay's advice of including the total only if
 requested with a query param. In this way a user can retrieve the
 total number of items regardless of the current pagination index (in
 my first post I suggested the total number should be returned only on
 the first page of results).

 Therefore one could ask for a total number of resources with something
 like the following:

 GET /some_resources?include_total=1

 and obtain a response like the following:

 {'resources': [{meh}, {meh}, {meh_again}],
   'something': {
'_links': {'prev': ..., 'next': ...},
'total': agazillion}
  }

  where the exact structure and naming of 'something' depends on the
 outcome of the discussion at [1]

 Salvatore

 [1] 
 https://review.openstack.org/#/c/133660/7/guidelines/representation_structure.rst

 On 20 November 2014 15:24, Christopher Yeoh cbky...@gmail.com
 mailto:cbky...@gmail.com wrote:

 On Thu, 20 Nov 2014 14:47:16 +0100
 Salvatore Orlando sorla...@nicira.com
 mailto:sorla...@nicira.com wrote:

  Aloha guardians of the API!
 
  I haven recently* reviewed a spec for neutron [1] proposing a
  distinct URI for returning resource count on list operations.
  This proposal is for selected neutron resources, but I believe the
  topic is general enough to require a guideline for the API working
  group. Your advice is therefore extremely valuable.
 
  In a nutshell the proposal is to retrieve resource count in the
  following way:
  GET /prefix/resource_name/count
 
  In my limited experience with RESTful APIs, I've never encountered
  one that does counting in this way. This obviously does not mean
 it's
  a bad idea. I think it's not great from a usability perspective to
  require two distinct URIs to fetch the first page and then the total
  number of elements. I reckon the first response page for a list
  operation might include also the total count. For example:
 
  {'resources': [{meh}, {meh}, {meh_again}],
   'resource_count': 55
   link_to_next_page}
 
  I am however completely open to consider other alternatives.
  What is your opinion on this matter?

 FWIW there is a nova spec proposed for counting resources as
 well (I think it might have been previously approved for Juno).

 https://review.openstack.org/#/c/134279/

 I haven't compared the two, but I can't think of a reason we'd
 need to be any different between projects here.

 Regards,

 Chris

 
  Regards,
  Salvatore
 
 
  * it's been 10 days now
 
  [1] https://review.openstack.org/#/c/102199/


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list

[openstack-dev] [Congress] Reactive enforcement specs

2014-11-20 Thread Tim Hinrichs
Hi all,

Recently there’s been quite a bit of interest in adding reactive enforcement to 
Congress: the ability to write policies that tell Congress to execute actions 
to correct policy violations.  We’re planning to add this feature in the next 
release.  I wrote a few specs that split this work into several bite-sized 
pieces (one was accidentally merged prematurely—it’s still up for discussion).

Let’s discuss these over Gerrit (the usual spec process).  We’re trying to 
finalize these specs by the middle of next week (a little behind the usual 
OpenStack schedule).  For those of you who haven’t left comments via Gerrit, 
you need to ...

1) log in to Gerrit using your Launchpad ID,
2) leave comments on specific lines in individual files by double-clicking the 
line you’d like to comment on,
3) click the Review button on the initial page
4) click the Publish Comments button.

Add triggers to policy engine (a programmatic interface useful for implementing 
reactive enforcement)
https://review.openstack.org/#/c/130010/

Add modal operators to policy language (how we might express reactive 
enforcement policies within Datalog)
https://review.openstack.org/#/c/134376/

Action-execution interface (how we might modify data-source drivers so they can 
execute actions)
https://review.openstack.org/#/c/134417/

Explicit reactive enforcement (pulling all the pieces together)
https://review.openstack.org/#/c/134418/


There are a number of additional specs generated since the summit.  Feel free 
to chime in on those too.
https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-20 Thread Thomas Goirand
On 11/17/2014 06:54 PM, Radomir Dopieralski wrote:
 - A tool, probably a script, that would help packaging the Bower
 packages into DEB/RPM packages. I suspect the Debian/Fedora packagers
 already have a semi-automatic solution for that.

Nop. Bower isn't even packaged in Debian. Though I may try to do it
(when I'm done with other Mirantis stuff like packaging Fuel for Debian...).

On 11/18/2014 07:59 AM, Richard Jones wrote:
 I was envisaging us creating a tool which generates xstatic packages
 from bower packages. I'm not the first to think along these
 lines
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031042.html

I think that's a very good idea!

 I wrote the tool today, and you can find it here:

 https://github.com/r1chardj0n3s/flaming-shame

AWESOME ! :)
Then now, everyone is happy. Thank you.

On 11/18/2014 04:22 PM, Radomir Dopieralski wrote:
 If we use Bower, we don't need to use Xstatic. It would be pure
 overhead. Bower already takes care of tracking releases and versions,
 and of bundling the files. All we need is a simple line in the
 settings.py telling Django where it puts all the files -- we don't
 really need Xstatic just for that. The packagers can then take those
 Bower packages and turn them into system packages, and just add/change
 the paths in settings.py to where they put the files. All in one
 place.

The issue is that there's often not just a single path, but a full
directory structure to address. That is easily managed with a Debian
xstatic package, not sure how it would be with Bower.

On 11/18/2014 06:36 PM, Richard Jones wrote:
 I guess I got the message that turning bower packages into system
 packages was something that the Linux packagers were not keen on.

What I'm not a fan of, is that we'll have external dependencies being
bumped all the time, with unexpected consequences. At least, with
xstatic packages, we control what's going on (though I understand the
overhead work problem).

By the way, I went into bower.io as I wanted to have a look. How do I
download a binary package for let's say jasmin? When searching, it just
links to github...

On 11/19/2014 12:14 AM, Radomir Dopieralski wrote:
 We would replace that with:

 STATICFILES_DIRS = [
 ('horizon/lib/angular',
os.path.join(BASE_DIR, 'bower_modules/angular'),
 ...
 ]

This would only work if upstream package directory structure is the same
as the one in the distribution. For historical reason, that's
unfortunately often not the case (sometimes because we want to keep
backward compatibility in the distro because of reverse dependencies),
and just changing the path wont make it work.

On 11/19/2014 03:43 AM, Richard Jones wrote:
 +1 to all that, except I'd recommend using django-bower to handle the
 static collection stuff. It's not documented but django-bower has a
 setting BOWER_COMPONENTS_ROOT which would make the above transition much
 simpler. You leave it alone for local dev, and packagers setup
 BOWER_COMPONENTS_ROOT to '/usr/lib/javascript/' or wherever.

s/lib/share/

However, I'm almost sure that wont be enough to make it work. For
example, in Debian, we have /usr/share/javascript/angular.js, not just
/usr/share/javascript/angular. So django-bower would be searching on the
wrong path.

On 11/19/2014 12:25 PM, Richard Jones wrote:
 In their view, bower components don't need to be in global-requirements:

 - there are no other projects that use bower components, so we don't
 need to ensure cross-project compatibility
 - we can vet new versions of bower components as part of standard
 Horizon change review

Maybe that's right for the OpenStack project, but that is a problem at
least for me, as I wont be constantly looking at Horizon dependencies,
just at the global-requirements.txt. So I'm afraid I may miss some new
stuff, and miss the deadline for the next release if I don't pay
attention to it. :(
Anyway, that's not so bad, I can try to adapt, but I just wanted to
raise my concern so that everyone knows about it.

Last thing: I'm currently a bit afraid of what will happen, as I don't
know the tools (bower and such). I wish I had a bit more time to test
them out, but I don't... :( So, I'm just raising my concerns even if
sometimes they may have no root, in the hope that we all find the best
solution that fits everyone. I hope the way I did in this thread is ok.

Cheers,

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Chris K
Thank you for this Ruby,

I agree with what Lucas stated in his reply, Though I thought I would toss
my two cents in to the pool as well.

I also feel that we should (and have been) strive (ing) to maintain
compatibility. Though I feel this is more important on a release to release
basis, more so then on an inter cycle basis. I feel this way because with
in a cycle a new feature may be introduced that has an unforeseen impact on
the current code that needs to be addressed with in that cycle. As a
Operator I would refer to this a leading edge vs bleeding edge. This is
also one of the reasons we cut stable releases, so that folks who what to
have stability in the code they run in production are not having to deal
the day to day upkeep of what landed trunk (master branch) last night that
broke my production environment. Trunk is almost by default not a stable
playground, If we add a new shinny and then find out that it has an
unforeseen impact, changing the way the new feature is implemented is not
such a bad thing, as long as it is still with in the cycle it was
introduced and an official release has not been cut, I am excluding RC
releases as they are Release Candidates and finding such impacts is there
job.

As this is only my option, actual cash value is less then .02 cents.


Chris Krelle
NobodyCam

On Thu, Nov 20, 2014 at 8:28 AM, Lucas Alvares Gomes lucasago...@gmail.com
wrote:

 Hi Ruby,

 Thank you for putting this up.

 I'm one of the ones think we should try hard (even really hard) to
 maintain the compatibility on every commit. I understand that it may
 sound naive because I'm sure that sometimes we will break things, but
 that doesn't means we shouldn't try.

 There may be people running Ironic in a continuous deployment
 environment, those are the users of the project and therefor the most
 important part of Ironic. Doesn't matter how well written Ironic code
 may be if nobody is using it. If we break that user workflow and he's
 unhappy that's the ultimate failure.

 I also understand that in the project POV we want to have fast
 interactions and shiny new features as quick as possible and trying to
 be backward compatibility all the time - on every commit - might slow
 that down. But in the user POV I believe that he doesn't care much
 about all the new features, he would mostly care about the things that
 used to work to continue to work for him.

 Also the backwards approach between releases and not commits might
 work fine in the non-opensource world where the code is kept indoors
 until the software is release, but in the opensource world where the
 code is out to people to use it all the time it doesn't seem to work
 that well.

 That's my 2 cents.

 Lucas

 On Thu, Nov 20, 2014 at 3:38 PM, Ruby Loo rlooya...@gmail.com wrote:
  Hi, we had an interesting discussion on IRC about whether or not we
 should
  be maintaining backwards compatibility within a release cycle. In this
  particular case, we introduced a new decorator in this kilo cycle, and
 were
  discussing the renaming of it, and whether it needed to be backwards
  compatible to not break any out-of-tree driver using master.
 
  Some of us (ok, me or I) think it doesn't make sense to make sure that
  everything we do is backwards compatible. Others disagree and think we
  should, or at least strive for 'must be' backwards compatible with the
  caveat that there will be cases where this isn't
 feasible/possible/whatever.
  (I hope I captured that correctly.)
 
  Although I can see the merit (well, sort of) of trying our best, trying
  doesn't mean 'must', and if it is 'must', who decides what can be
 exempted
  from this, and how will we communicate what is exempted, etc?
 
  Thoughts?
 
  --ruby
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL]Re-thinking Fuel Client

2014-11-20 Thread Dean Troyer
On Thu, Nov 20, 2014 at 9:30 AM, Roman Prykhodchenko 
rprikhodche...@mirantis.com wrote:

 I think this is not very reasonable because Fuel does not provide any
 service in OpenStack but instead allows to set up and manage OpenStack
 clusters.


Ah, well that is different.  FWIW, I have done this with 'other' cloud
manager APIs, some that used Keystone auth and some that didn't.  Either
way, if there are useful patterns, feel free to steal them...

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] [congress] Protocol for Congress -- Enactor

2014-11-20 Thread Tim Hinrichs
Thanks for the summary Greg—that was great!  Here’s my take.

It would be great if all the services Congress interacts with implemented the 
same protocol and used the same policy/data language.  It is worth our time to 
figure out what that protocol and language should be.

But we should not forget that there will always be legacy services that people 
are unwilling or unable to change that don’t speak that protocol/language. And 
right now no services speak that protocol/language (since it doesn’t exist).  
So it’s useful today and in the future to have an adapter/wrapper framework 
that enables Congress to  interact with other protocols and languages.

That means we need to push on 2 fronts: (i) designing the ideal 
protocol/language and (ii) designing the adapter framework.  I’ve been focused 
on (ii) since it’s absolutely necessary today, but if anyone would like to 
spearhead (i) I’d be happy to help.

Tim


On Nov 1, 2014, at 11:13 AM, Gregory Lebovitz 
gregory.i...@gmail.commailto:gregory.i...@gmail.com wrote:


Summary from IRC chat 10/14/2014 on weekly meeting [1] [2]

Topic:  Declarative Language for Congress — Enactor/Enforcer

Question: Shall we specify a declarative language for communicating policy 
configured in Congress to enactors / enforcement systems

Hypothesis (derived at conclusion of discussion):
 - Specify declarative protocol and framework for describing policy with 
extensible attributes/value fields described in a base ontology, with 
additional affinity ontologies, is what is needed earlier than later, to be 
able to achieve it as an end-state, before too many Enactors dive into one-offs.
 - We could achieve that specification once we know the right structure


Discussion:

  *   Given the following framework:
 *   Elements:
*   Congress - The policy description point, a place where:
   *   (a) policy inputs are collected
   *   (b) collected policy inputs are integrated
   *   (c) policy is defined
   *   (d) declares policy intent to enforcing / enacting systems
   *   (e) observes state of environment, noting policy violations
*   Feeders - provides policy inputs to Congress
*   Enactors / Enforcers - receives policy declarations from Congress 
and enacts / enforces the policy according to its capabilities
   *   E.g. Nova for VM placement, Neutron for interface connectivity, 
FWaaS for access control, etc.

What will the protocol be for the Congress — Enactors / Enforcers?


thinrichs:  we’ve we've been assuming that Congress will leverage whatever the 
Enactors (policy engines) and Feeders (and more generally datacenter services) 
that exist are using. For basic datacenter services, we had planned on teaching 
Congress what their API is and what it does. So there's no new protocol 
there—we'd just use HTTP or whatever the service expects. For Enactors, there 
are 2 pieces: (1) what policy does Congress push and (2) what protocol does it 
use to do that? We don't know the answer to (1) yet.  (2) is less important, I 
think. For (2) we could use opflex, for example, or create a new one. (1) is 
hard because the Enactors likely have different languages that they understand. 
I’m not aware of anyone thinking about (2). I’m not thinking about (2) b/c I 
don't know the answer to (1). The *really* hard thing to understand IMO is how 
these Enactors should cooperate (in terms of the information they exchange and 
the functionality they provide).  The bits they use to wrap the messages they 
send while cooperating is a lower-level question.


jasonsb  glebo: feel the need to clarify (2)


glebo: if we come out strongly with a framework spec that identifies a protocol 
for (2), and make it clear that Congress participants, including several data 
center Feeders and Enactors, are in consensus, then the other Feeders  
Enactors will line up, in order to be useful in the modern deployments. Either 
that, or they will remain isolated from the new environment, or their customers 
will have to create custom connectors to the new environment. It seems that we 
have 2 options. (a) Congress learns any language spoken by Feeders and 
Enactors, or (b) specifies a single protocol for Congress — Enactors policy 
declarations, including a highly adaptable public registry(ies) for defining 
the meaning of content blobs in those messages. For (a) Congress would get VERY 
bloated with an abstraction layer, modules, semantics and state for each 
different language it needed to speak. And there would be 10s of these 
languages. For (b), there would be one way to structure messages that were 
constructed of blobs in (e.g.) some sort of Type/Length/Value (TLV) method, 
where the Types and Values were specified in some Internet registry.


jasonsb: Could we attack this from the opposite direction? E.g. if Congress 
wanted to provide an operational dashboard to show if things are in compliance, 
it would be better served 

Re: [openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Dmitry Tantsur

On 11/20/2014 04:38 PM, Ruby Loo wrote:

Hi, we had an interesting discussion on IRC about whether or not we
should be maintaining backwards compatibility within a release cycle. In
this particular case, we introduced a new decorator in this kilo cycle,
and were discussing the renaming of it, and whether it needed to be
backwards compatible to not break any out-of-tree driver using master.

Some of us (ok, me or I) think it doesn't make sense to make sure that
everything we do is backwards compatible. Others disagree and think we
should, or at least strive for 'must be' backwards compatible with the
caveat that there will be cases where this isn't
feasible/possible/whatever. (I hope I captured that correctly.)

Although I can see the merit (well, sort of) of trying our best, trying
doesn't mean 'must', and if it is 'must', who decides what can be
exempted from this, and how will we communicate what is exempted, etc?
It make sense to try to preserve compatibility, especially for things 
that landed some time ago. For newly invented things, like the decorator 
it makes no sense to me, however.


People consuming master have to be prepared. That does not mean that we 
should break them every week obviously, but still. That's why we have 
releases: to promise stability to people. By consuming master you agree 
that things might break rarely.


Thoughts?

--ruby


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Devananda van der Veen
Let's get concrete for a moment, because it makes a difference which API
we're talking about.

We have to guarantee a fairly high degree of backwards compatibility within
the REST API. Adding new capabilities, and exposing them in a discoverable
way, is fine; a backwards-incompatible breaking change to the REST API is
definitely not OK without a version bump. We should (and do) make a strong
effort not to land any REST API change without appreciable thought and
testing of its impact. Changes here have an immediate effect on anyone
following trunk.

The RPC API is another area of compatibility, and perhaps the one most
clearly versioned today. We must continue supporting running disparate
versions of the RPC client and server (that is, rpcapi.py and manager.py)
so that operators can upgrade the API and Conductor services
asymmetrically. Changes to the RPC API are done in such a way that each
service can be upgraded independently of other services.

The driver API is the only purely-python API we support -- and we know that
there are downstream consumers of that API. OnMetal is one such; many other
spoke up at the recent summit. While the impact of a breaking change here
is less than in the REST API, it is not to be overlooked. There is a cost
associated with maintaining an out-of-tree driver and we should make a
best-effort to minimize that cost for folks who (for what ever reason) are
in that boat.

-Devananda


On Thu Nov 20 2014 at 8:28:56 AM Lucas Alvares Gomes lucasago...@gmail.com
wrote:

 Hi Ruby,

 Thank you for putting this up.

 I'm one of the ones think we should try hard (even really hard) to
 maintain the compatibility on every commit. I understand that it may
 sound naive because I'm sure that sometimes we will break things, but
 that doesn't means we shouldn't try.

 There may be people running Ironic in a continuous deployment
 environment, those are the users of the project and therefor the most
 important part of Ironic. Doesn't matter how well written Ironic code
 may be if nobody is using it. If we break that user workflow and he's
 unhappy that's the ultimate failure.

 I also understand that in the project POV we want to have fast
 interactions and shiny new features as quick as possible and trying to
 be backward compatibility all the time - on every commit - might slow
 that down. But in the user POV I believe that he doesn't care much
 about all the new features, he would mostly care about the things that
 used to work to continue to work for him.

 Also the backwards approach between releases and not commits might
 work fine in the non-opensource world where the code is kept indoors
 until the software is release, but in the opensource world where the
 code is out to people to use it all the time it doesn't seem to work
 that well.

 That's my 2 cents.

 Lucas

 On Thu, Nov 20, 2014 at 3:38 PM, Ruby Loo rlooya...@gmail.com wrote:
  Hi, we had an interesting discussion on IRC about whether or not we
 should
  be maintaining backwards compatibility within a release cycle. In this
  particular case, we introduced a new decorator in this kilo cycle, and
 were
  discussing the renaming of it, and whether it needed to be backwards
  compatible to not break any out-of-tree driver using master.
 
  Some of us (ok, me or I) think it doesn't make sense to make sure that
  everything we do is backwards compatible. Others disagree and think we
  should, or at least strive for 'must be' backwards compatible with the
  caveat that there will be cases where this isn't
 feasible/possible/whatever.
  (I hope I captured that correctly.)
 
  Although I can see the merit (well, sort of) of trying our best, trying
  doesn't mean 'must', and if it is 'must', who decides what can be
 exempted
  from this, and how will we communicate what is exempted, etc?
 
  Thoughts?
 
  --ruby
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Eoghan Glynn

 Hi everyone,
 
 TL;DR:
 I propose we turn the weekly project/release meeting timeslot
 (Tuesdays at 21:00 UTC) into a weekly cross-project meeting, to
 discuss cross-project topics with all the project leadership, rather
 than keep it release-specific.
 
 
 Long version:
 
 Since the dawn of time (August 28, 2010 !), there has always been a
 project meeting on Tuesdays at 21:00 UTC. It used to be a all-hands
 meeting, then it turned more into a release management meeting. With the
 addition of more integrated projects, all the meeting time was spent in
 release status updates and there was no time to discuss project-wide
 issues anymore.
 
 During the Juno cycle, we introduced 1:1 sync points[1] for project
 release liaisons (usually PTLs) to synchronize their status with the
 release management team /outside/ of the meeting time. That freed time
 to discuss integrated-release-wide problems and announcements during the
 meeting itself.
 
 Looking back to the Juno meetings[2], it's quite obvious that the
 problems we discussed were not all release-management-related, though,
 and that we had free time left. So I think it would be a good idea in
 Kilo to recognize that and clearly declare that meeting the weekly
 cross-project meeting. There we would discuss release-related issues if
 needed, but also all the others cross-project hot topics of the day on
 which a direct discussion can help making progress.
 
 The agenda would be open (updated directly on the wiki and edited/closed
 by the chair a few hours before the meeting to make sure everyone knows
 what will be discussed). The chair (responsible for vetting/postponing
 agenda points and keeping the discussion on schedule) could rotate.
 
 During the Juno cycle we also introduced the concept of Cross-Project
 Liaisons[3], as a way to scale the PTL duties to a larger group of
 people and let new leaders emerge from our community. Those CPLs would
 be encouraged to participate in the weekly cross-project meeting
 (especially when a topic in their domain expertise is discussed), and
 the meeting would be open to all anyway (as is the current meeting).
 
 This is mostly a cosmetic change: update the messaging around that
 meeting to make it more obvious that it's not purely about the
 integrated release and that it is appropriate to put other types of
 cross-project issues on the agenda. Let me know on this thread if that
 sounds like a good idea, and we'll make the final call at next week
 meeting :)

+1 to involving the liaisons more directly

-1 to the meeting size growing too large for productive real-time
   communication on IRC

IME, there's a practical limit on the number of *active* participants
in an IRC meeting. Not sure what that magic threshold is, but I suspect
not much higher than 25.

So given that we're in an era of fretting about the scalability
challenges facing cross-project concerns, I'd hate to paint ourselves
into a corner with another cross-project scalability challenge.

How about the agenda each week includes a specific invitation to a
subset of the liaisons, based on relevance?

(e.g. the week there's a CI brownout, request all the QA liaisons attend;
 whereas the week that the docs team launch a new contribution workflow,
 request that all the docs liaisons are present).

Possibly with a standing invite to the release-mgmt liaison (or PTL)?

Of course, as you say, the meeting is otherwise open-as-open-can-be.

Cheers,
Eoghan
 
 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
 [2] http://eavesdrop.openstack.org/meetings/project/2014/
 [3] https://wiki.openstack.org/wiki/CrossProjectLiaisons
 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Fox, Kevin M
I think it depends totally on if you want trunk to be a distribution mechanism 
or not. If you encourage people to 'just use trunk' for deployment, then you 
better not break out of tree drivers on people. If you have a stable release 
branch, that you tell people to use, there is plenty of forewarning for out of 
tree drivers to update when the new stable is about to be released.

Thanks,
Kevin

From: Lucas Alvares Gomes [lucasago...@gmail.com]
Sent: Thursday, November 20, 2014 8:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] maintaining backwards compatibility 
within a cycle

Hi Ruby,

Thank you for putting this up.

I'm one of the ones think we should try hard (even really hard) to
maintain the compatibility on every commit. I understand that it may
sound naive because I'm sure that sometimes we will break things, but
that doesn't means we shouldn't try.

There may be people running Ironic in a continuous deployment
environment, those are the users of the project and therefor the most
important part of Ironic. Doesn't matter how well written Ironic code
may be if nobody is using it. If we break that user workflow and he's
unhappy that's the ultimate failure.

I also understand that in the project POV we want to have fast
interactions and shiny new features as quick as possible and trying to
be backward compatibility all the time - on every commit - might slow
that down. But in the user POV I believe that he doesn't care much
about all the new features, he would mostly care about the things that
used to work to continue to work for him.

Also the backwards approach between releases and not commits might
work fine in the non-opensource world where the code is kept indoors
until the software is release, but in the opensource world where the
code is out to people to use it all the time it doesn't seem to work
that well.

That's my 2 cents.

Lucas

On Thu, Nov 20, 2014 at 3:38 PM, Ruby Loo rlooya...@gmail.com wrote:
 Hi, we had an interesting discussion on IRC about whether or not we should
 be maintaining backwards compatibility within a release cycle. In this
 particular case, we introduced a new decorator in this kilo cycle, and were
 discussing the renaming of it, and whether it needed to be backwards
 compatible to not break any out-of-tree driver using master.

 Some of us (ok, me or I) think it doesn't make sense to make sure that
 everything we do is backwards compatible. Others disagree and think we
 should, or at least strive for 'must be' backwards compatible with the
 caveat that there will be cases where this isn't feasible/possible/whatever.
 (I hope I captured that correctly.)

 Although I can see the merit (well, sort of) of trying our best, trying
 doesn't mean 'must', and if it is 'must', who decides what can be exempted
 from this, and how will we communicate what is exempted, etc?

 Thoughts?

 --ruby

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-20 Thread Matthew Booth
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 19/11/14 18:39, Dan Smith wrote:
 However, it presents a problem when we consider NovaObjects, and 
 dependencies between them.
 
 I disagree with this assertion, because:
 
 For example, take Instance.save(). An Instance has relationships
 with several other object types, one of which is
 InstanceInfoCache. Consider the following code, which is amongst
 what happens in spawn():
 
 instance = Instance.get_by_uuid(uuid) instance.vm_state =
 vm_states.ACTIVE instance.info_cache.network_info = new_nw_info 
 instance.save()
 
 instance.save() does (simplified): self.info_cache.save() 
 self._db_save()
 
 Both of these saves happen in separate db transactions.
 
 This has always been two DB calls, and for a while recently, it was
 two RPCs, each of which did one call.

Well, let's take the opportunity to fix it :)

I'll also point out that although your head may contain everywhere
that Nova touches both info_cache and instance:

1. Not many other peoples' do.
2. Reasoning about the safety of this robustly is still hard.

As I mentioned, this is also just 1 example among many. Others include:

* Flavor.save() makes an unbounded number of db calls in separate
transactions.

* Instance.save() cascades saves to security groups, each of which is
saved in a separate transaction.

We can push these into the db layer, but it is my understanding that
NovaObjects are supposed to manage their own mapping of internal state
- - db representation. If we push this into the db api, we're violating
the separation of concerns. If we're going to do this, we'd better
understand, and be able to articulate, *why* we don't want to allow
NovaObjects to manage a db transaction when required. The pattern I
outlined below is very simple.

 This has at least 2 undesirable effects:
 
 1. A failure can result in an inconsistent database. i.e.
 info_cache having been persisted, but instance.vm_state not
 having been persisted.
 
 2. Even in the absence of a failure, an external reader can see
 the new info_cache but the old instance.
 
 I think you might want to pick a different example. We update the 
 info_cache all the time asynchronously, due to time has passed
 and other non-user-visible reasons.

Ok, pick one of the others above.

 New features continue to add to the problem, including numa
 topology and pci requests.
 
 NUMA and PCI information are now created atomically with the
 instance (or at least, passed to SQLA in a way I expect does the
 insert as a single transaction). We don't yet do that in save(), I
 think because we didn't actually change this information after
 creation until recently.
 
 Definitely agree that we should not save the PCI part without the
 base instance part.

How are we going to achieve that?

 I don't think we can reasonably remove the cascading save() above
 due to the deliberate design of objects. Objects don't correspond
 directly to their datamodels, so save() does more work than just
 calling out to the DB. We need a way to allow cascading object
 saves to happen within a single DB transaction. This will mean:
 
 1. A change will be persisted either entirely or not at all in
 the event of a failure.
 
 2. A reader will see either the whole change or none of it.
 
 This is definitely what we should strive for in cases where the
 updates are related, but as I said above, for things (like info
 cache) where it doesn't matter, we should be fine.

I'm proposing a pattern which is always safe and is simple to reason
about. I would implement it everywhere. I don't think there are any
downsides.

 Note that there is this recently approved oslo.db spec to make 
 transactions more manageable:
 
 https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm


 
Again, while this will be a significant benefit to the DB api, it will
 not solve the problem of cascading object saves without allowing 
 transaction management at the level of NovaObject.save(): we need
 to allow something to call a db api with an existing session, and
 we need to allow something to pass an existing db transaction to
 NovaObject.save().
 
 I don't agree that we need to be concerned about this at the 
 NovaObject.save() level. I do agree that Instance.save() needs to
 have a relationship to its sub-objects that facilitates atomicity
 (where appropriate), and that such a pattern can be used for other
 such hierarchies.
 
 An obvious precursor to that is removing N309 from hacking,
 which specifically tests for db apis which accept a session
 argument. We then need to consider how NovaObject.save() should
 manage and propagate db transactions.
 
 Right, so I believe that we had more consistent handling of
 transactions in the past. We had a mechanism for passing around the
 session between chained db/api methods to ensure they happened
 atomically. I think Boris led the charge to eliminate that,
 culminating with the hacking rule you mentioned.
 
 Maybe getting 

[openstack-dev] [TripleO] Stepping away

2014-11-20 Thread Alexis Lee
Hiya,

I'm going to step away from TripleO for a while to refocus on Nova. No
reflection on TripleO, this is to align with my local team. It's been a
pleasure working with all of you, I hope I've had some kind of positive
impact and I'm sure we'll still bump into one another now and then.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Kyle Mestery
On Thu, Nov 20, 2014 at 9:25 AM, Anita Kuno ante...@anteaya.info wrote:
 On 11/20/2014 09:04 AM, Russell Bryant wrote:
 On 11/20/2014 08:55 AM, Thierry Carrez wrote:
 This is mostly a cosmetic change: update the messaging around that
 meeting to make it more obvious that it's not purely about the
 integrated release and that it is appropriate to put other types of
 cross-project issues on the agenda. Let me know on this thread if that
 sounds like a good idea, and we'll make the final call at next week
 meeting :)

 +1 :-)

 I too think this is a good idea.

+1 from me as well, this will be a positive change.

Thanks,
Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-20 Thread Clay Gerrard
You might check if the swift-recon tool has the data you're looking for.
It can report the last completed replication pass time across nodes in the
ring.

On Thu, Nov 20, 2014 at 1:28 AM, Matsuda, Kenichiro 
matsuda_keni...@jp.fujitsu.com wrote:

 Hi,

 I would like to know about a way of checking replicate completion on swift
 cluster.
 (e.g. after rebalanced Ring)

 I found the way of using swift-dispersion-report from Administrator's
 Guide.
 But, this way is not enough, because swift-dispersion-report can't checking
 replicate completion for other data that made by not
 swift-dispersion-populate.

 And also, I found the way of using replicator's logs from QA.
 But, I would like to more easy way, because check of below logs is very
 heavy.

   (account/container/object)-replicator * All storage node on swift cluster

 Could you please advise me for it?

 Findings:
   Administrator's Guide  Cluster Health

 http://docs.openstack.org/developer/swift/admin_guide.html#cluster-health
   how to check replicator work complete

 https://ask.openstack.org/en/question/18654/how-to-check-replicator-work-complete/

 Best Regards,
 Kenichiro Matsuda.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-20 Thread Dan Smith
 * Flavor.save() makes an unbounded number of db calls in separate
 transactions.

This is actually part of the design of the original flavors public API.
Since we can add and remove projects/specs individually, we avoid ending
up with just one or the other group of values, for competing requests.

But as I said before, your point is entirely valid for the cases where
it's relevant.

 * Instance.save() cascades saves to security groups, each of which is
 saved in a separate transaction.
 
 We can push these into the db layer, but it is my understanding that
 NovaObjects are supposed to manage their own mapping of internal state
 - db representation. If we push this into the db api, we're violating
 the separation of concerns. If we're going to do this, we'd better
 understand, and be able to articulate, *why* we don't want to allow
 NovaObjects to manage a db transaction when required. The pattern I
 outlined below is very simple.

The DB API isn't a stable interface, and exists purely to do what we
need it to do. So, if pushing the above guarantees into the DB API works
for the moment, we can do that. If we needed to change something about
it in the future, we could. If you look at it right now, it has some
very (very) specific queries and updates that serve very directed
purposes. I don't see pushing atomicity expectations into the DB API as
a problem.

 I'm proposing a pattern which is always safe and is simple to reason
 about. I would implement it everywhere. I don't think there are any
 downsides.

Cool, sounds like a spec. I'd say propose it and we'll see where it goes
in review.

Thanks!

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-20 Thread Doug Hellmann

On Nov 20, 2014, at 8:12 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 Hey y'all,
 
 To avoid cross-posting, please inform your -infra / -operations buddies about 
 this post. 
 
 We've just started thinking about where notification schema files should live 
 and how they should be deployed. Kind of a tricky problem.  We could really 
 use your input on this problem ...
 
 The assumptions:
 1. Schema files will be text files. They'll live in their own git repo 
 (stackforge for now, ideally oslo eventually). 

Why wouldn’t they live in the repo of the application that generates the 
notification, like we do with the database schema and APIs defined by those 
apps?

 2. Unit tests will need access to these files for local dev
 3. Gating tests will need access to these files for integration tests
 4. Many different services are going to want to access these files during 
 staging and production. 
 5. There are going to be many different versions of these files. There are 
 going to be a lot of schema updates. 
 
 Some problems / options:
 a. Unlike Python, there is no simple pip install for text files. No version 
 control per se. Basically whatever we pull from the repo. The problem with a 
 git clone is we need to tweak config files to point to a directory and that's 
 a pain for gating tests and CD. Could we assume a symlink to some well-known 
 location?
a': I suppose we could make a python installer for them, but that's a pain 
 for other language consumers.
 b. In production, each openstack service could expose the schema files via 
 their REST API, but that doesn't help gating tests or unit tests. Also, this 
 means every service will need to support exposing schema files. Big 
 coordination problem.
 c. In production, We could add an endpoint to the Keystone Service Catalog to 
 each schema file. This could come from a separate metadata-like service. 
 Again, yet-another-service to deploy and make highly available. 
 d. Should we make separate distro packages? Install to a well known location 
 all the time? This would work for local dev and integration testing and we 
 could fall back on B and C for production distribution. Of course, this will 
 likely require people to add a new distro repo. Is that a concern?
 
 Personally, I'm leaning towards option D but I'm not sure what the 
 implications are. 
 
 We're early in thinking about these problems, but would like to start the 
 conversation now to get your opinions. 
 
 Look forward to your feedback.
 
 Thanks
 -Sandy
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Order of network interfaces for bootstrap nodes

2014-11-20 Thread Dmitriy Shulyak
Hi folks,

There was interesting research today on random nics ordering for nodes in
bootstrap stage. And in my opinion it requires separate thread...
I will try to describe what the problem is and several ways to solve it.
Maybe i am missing the simple way, if you see it - please participate.
Link to LP bug: https://bugs.launchpad.net/fuel/+bug/1394466

When a node is booted first time it registers its interfaces in nailgun,
see sample of data (only related to discussion parts):
- name: eth0
  ip: 10.0.0.3/24
  mac: 00:00:03
- name: eth1
  ip: None
  mac: 00:00:04
* eth0 is admin network interface which was used for initial pxe boot

We have networks, for simplicity lets assume there is 2:
 - admin
 - public
When the node is added to cluster, in general you will see next schema:
- name: eth0
  ip: 10.0.0.3/24
  mac: 00:00:03
  networks:
- admin
- public
- name: eth1
  ip: None
  mac: 00:00:04

At this stage node is still using default system with bootstrap profile, so
there is no custom system with udev rules. And on next reboot there is no
way to guarantee that network cards will be discovered by kernel in same
order. If network cards is discovered in order that is diffrent from
original and nics configuration is updated, it is possible to end up with:
- name: eth0
  ip: None
  mac: 00:00:04
  networks:
- admin
- public
- name: eth1
  mac: 00:00:03
  ip: 10.0.0.3/24
Here you can see that networks is left connected to eth0 (in db). And
ofcourse this schema doesnt reflect physical infrastructure. I hope it is
clear now what is the problem.
If you want to investigate it yourself, please find db dump in snapshot
attached to the bug, you will be able to find described here case.
What happens next:
1. netcfg/choose_interface for kernel is misconfigured, and in my example
it will be 00:00:04, but should be 00:00:03
2. network configuration for l23network will be simply corrupted

So - possible solutions:
1. Reflect node interfaces ordering, with networks reassignment - Hard and
hackish
2. Do not update any interfaces info if networks assigned to them, then
udev rules will be applied and nics will be reordered into original state -
i would say easy and reliable solution
3. Create cobbler system when node is booted first time, and add udev rules
- it looks to me like proper solution, but requires design

Please share your thoughts/ideas, afaik this issue is not rare on scale
deployments.
Thank you
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Steven Kaufer
The neturon spec appears to be a copy/paste of the nova spec that I wrote.
Based on the conversation below, I agree that this is not the best
approach: GET /prefix/resource_name/count

I'll get started on updating the nova spec to include the total count value
in some new attribute based on the existence of a query parameter (ie,
include_count=1).

The details will have to be in limbo a bit until this gets resolved:
https://review.openstack.org/#/c/133660/

Thanks,
Steven Kaufer


Sean Dague s...@dague.net wrote on 11/20/2014 10:48:05 AM:

 From: Sean Dague s...@dague.net
 To: openstack-dev@lists.openstack.org
 Date: 11/20/2014 10:57 AM
 Subject: Re: [openstack-dev] [api] Counting resources

 I'm looking at the Nova spec, and it seems very taylored to a specific
 GUI. I'm also not sure that 17128 errors is more useful than 500+ errors
 when presenting to the user (the following in my twitter stream made me
 think about that this morning -
 https://twitter.com/NINK/status/535299029383380992)

 500+ also better describes the significant figures we're talking about
here.

-Sean

 On 11/20/2014 11:28 AM, Morgan Fainberg wrote:
  The only thing I want to caution against is making a SQL-specific
  choice. In the case of some other backends, it may not be possible (for
  an extremely large dataset) to get a full count, where SQL does this
  fairly elegantly. For example, LDAP (in some cases) may have an
  administrative limit that will say that no more than 10,000 entries
  would be returned; likely you’re going to have an issue, since you need
  to issue the query and see how many things match, if you hit the
overall
  limit you’ll get the same count every time (but possibly a different
  dataset).
 
  I want to be very careful that we’re not recommending functionality as
a
  baseline that should be used as a pattern across all similar APIs,
  especially since we have some backends/storage systems that can’t
  elegantly always support it.
 
  Personally, I like Gerrit’s model (as Sean described) - with the above
  caveat that not all backends support this type of count.
 
  Cheers,
  Morgan
 
  On Nov 20, 2014, at 8:04 AM, Salvatore Orlando sorla...@nicira.com
  mailto:sorla...@nicira.com wrote:
 
  The Nova proposal appears to be identical to neutron's, at least from
  a consumer perspective.
 
  If I were to pick a winner, I'd follow Sean's advice regarding the
  'more' attribute in responses, and put the total number of resources
  there; I would also take Jay's advice of including the total only if
  requested with a query param. In this way a user can retrieve the
  total number of items regardless of the current pagination index (in
  my first post I suggested the total number should be returned only on
  the first page of results).
 
  Therefore one could ask for a total number of resources with something
  like the following:
 
  GET /some_resources?include_total=1
 
  and obtain a response like the following:
 
  {'resources': [{meh}, {meh}, {meh_again}],
'something': {
 '_links': {'prev': ..., 'next': ...},
 'total': agazillion}
   }
 
   where the exact structure and naming of 'something' depends on the
  outcome of the discussion at [1]
 
  Salvatore
 
  [1] https://review.openstack.org/#/c/133660/7/guidelines/
 representation_structure.rst
 
  On 20 November 2014 15:24, Christopher Yeoh cbky...@gmail.com
  mailto:cbky...@gmail.com wrote:
 
  On Thu, 20 Nov 2014 14:47:16 +0100
  Salvatore Orlando sorla...@nicira.com
  mailto:sorla...@nicira.com wrote:
 
   Aloha guardians of the API!
  
   I haven recently* reviewed a spec for neutron [1] proposing a
   distinct URI for returning resource count on list operations.
   This proposal is for selected neutron resources, but I believe
the
   topic is general enough to require a guideline for the API
working
   group. Your advice is therefore extremely valuable.
  
   In a nutshell the proposal is to retrieve resource count in the
   following way:
   GET /prefix/resource_name/count
  
   In my limited experience with RESTful APIs, I've never
encountered
   one that does counting in this way. This obviously does not mean
  it's
   a bad idea. I think it's not great from a usability perspective
to
   require two distinct URIs to fetch the first page and then the
total
   number of elements. I reckon the first response page for a list
   operation might include also the total count. For example:
  
   {'resources': [{meh}, {meh}, {meh_again}],
'resource_count': 55
link_to_next_page}
  
   I am however completely open to consider other alternatives.
   What is your opinion on this matter?
 
  FWIW there is a nova spec proposed for counting resources as
  well (I think it might have been previously approved for Juno).
 
  https://review.openstack.org/#/c/134279/
 
  I haven't compared the two, but I 

Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Doug Hellmann

On Nov 20, 2014, at 12:19 PM, Eoghan Glynn egl...@redhat.com wrote:

 
 Hi everyone,
 
 TL;DR:
 I propose we turn the weekly project/release meeting timeslot
 (Tuesdays at 21:00 UTC) into a weekly cross-project meeting, to
 discuss cross-project topics with all the project leadership, rather
 than keep it release-specific.
 
 
 Long version:
 
 Since the dawn of time (August 28, 2010 !), there has always been a
 project meeting on Tuesdays at 21:00 UTC. It used to be a all-hands
 meeting, then it turned more into a release management meeting. With the
 addition of more integrated projects, all the meeting time was spent in
 release status updates and there was no time to discuss project-wide
 issues anymore.
 
 During the Juno cycle, we introduced 1:1 sync points[1] for project
 release liaisons (usually PTLs) to synchronize their status with the
 release management team /outside/ of the meeting time. That freed time
 to discuss integrated-release-wide problems and announcements during the
 meeting itself.
 
 Looking back to the Juno meetings[2], it's quite obvious that the
 problems we discussed were not all release-management-related, though,
 and that we had free time left. So I think it would be a good idea in
 Kilo to recognize that and clearly declare that meeting the weekly
 cross-project meeting. There we would discuss release-related issues if
 needed, but also all the others cross-project hot topics of the day on
 which a direct discussion can help making progress.
 
 The agenda would be open (updated directly on the wiki and edited/closed
 by the chair a few hours before the meeting to make sure everyone knows
 what will be discussed). The chair (responsible for vetting/postponing
 agenda points and keeping the discussion on schedule) could rotate.
 
 During the Juno cycle we also introduced the concept of Cross-Project
 Liaisons[3], as a way to scale the PTL duties to a larger group of
 people and let new leaders emerge from our community. Those CPLs would
 be encouraged to participate in the weekly cross-project meeting
 (especially when a topic in their domain expertise is discussed), and
 the meeting would be open to all anyway (as is the current meeting).
 
 This is mostly a cosmetic change: update the messaging around that
 meeting to make it more obvious that it's not purely about the
 integrated release and that it is appropriate to put other types of
 cross-project issues on the agenda. Let me know on this thread if that
 sounds like a good idea, and we'll make the final call at next week
 meeting :)
 
 +1 to involving the liaisons more directly
 
 -1 to the meeting size growing too large for productive real-time
   communication on IRC
 
 IME, there's a practical limit on the number of *active* participants
 in an IRC meeting. Not sure what that magic threshold is, but I suspect
 not much higher than 25.
 
 So given that we're in an era of fretting about the scalability
 challenges facing cross-project concerns, I'd hate to paint ourselves
 into a corner with another cross-project scalability challenge.
 
 How about the agenda each week includes a specific invitation to a
 subset of the liaisons, based on relevance?

The focused teams each already have (or can have) separate meetings. In Oslo 
meetings, for example, we spent time every week coordinating between our 
liaisons and core team. AIUI, the point of Thierry’s proposal is to bring more 
people interested in cross-project issues together for cross-pollination and 
brainstorming for issues that we can afford to wait for a meeting to discuss 
(vs. outages). 

Doug

 
 (e.g. the week there's a CI brownout, request all the QA liaisons attend;
 whereas the week that the docs team launch a new contribution workflow,
 request that all the docs liaisons are present).
 
 Possibly with a standing invite to the release-mgmt liaison (or PTL)?
 
 Of course, as you say, the meeting is otherwise open-as-open-can-be.
 
 Cheers,
 Eoghan
 
 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
 [2] http://eavesdrop.openstack.org/meetings/project/2014/
 [3] https://wiki.openstack.org/wiki/CrossProjectLiaisons
 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-20 Thread Dmitriy Shulyak
Guys, maybe we can use existing software, for example Sensu [1]?
Maybe i am wrong, but i dont like the idea to start writing our small
monitoring applications..
Also something well designed and extendable can be reused for statistic
collector


1. https://github.com/sensu

On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:


 On 06 Nov 2014, at 12:20, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

  I didn't mean a robust monitoring system, just something simpler.
 Notifications is a good idea for FuelWeb.

 I’m all for that, but if we add it, we need to document ways to clean up
 space.
 We could also add some kind of simple job to remove rotated logs, obsolete
 spanshots, etc., but this is out of scope for 6.0 I guess.

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Nov 20 1800 UTC

2014-11-20 Thread Andrew Lazarev
Minutes: 
*http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.html*
Logs: 
*http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.log.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.log.html*

Thanks,
Andrew.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Armando M.
Hi Sukhdev,

Hope you enjoyed Europe ;)

On 19 November 2014 17:19, Sukhdev Kapur sukhdevka...@gmail.com wrote:

 Folks,

 Like Ian, I am jumping in this very late as well - as I decided to travel
 Europe after the summit, just returned back and  catching up :-):-)

 I have noticed that this thread has gotten fairly convoluted and painful
 to read.

 I think Armando summed it up well in the beginning of the thread. There
 are basically three written proposals (listed in Armando's email - I pasted
 them again here).

 [1] https://review.openstack.org/#/c/134179/
 [2] https://review.openstack.org/#/c/100278/
 [3] https://review.openstack.org/#/c/93613/

 On this thread I see that the authors of first two proposals have already
 agreed to consolidate and work together. This leaves with two proposals.
 Both Ian and I were involved with the third proposal [3] and have
 reasonable idea about it. IMO, the use cases addressed by the third
 proposal are very similar to use cases addressed by proposal [1] and [2]. I
 can volunteer to  follow up with Racha and Stephen from Ericsson to see if
 their use case will be covered with the new combined proposal. If yes, we
 have one converged proposal. If no, then we modify the proposal to
 accommodate their use case as well. Regardless, I will ask them to review
 and post their comments on [1].

 Having said that, this covers what we discussed during the morning session
 on Friday in Paris. Now, comes the second part which Ian brought up in the
 afternoon session on Friday.
 My initial reaction was, when heard his use case, that this new
 proposal/API should cover that use case as well (I am being bit optimistic
 here :-)). If not, rather than going into the nitty gritty details of the
 use case, let's see what modification is required to the proposed API to
 accommodate Ian's use case and adjust it accordingly.

 Now, the last point (already brought up by Salvatore as well as Armando) -
 the abstraction of the API, so that it meets the Neutron API criteria. I
 think this is the critical piece. I also believe the API proposed by [1] is
 very close. We should clean it up and take out references to ToR's or
 physical vs virtual devices. The API should work at an abstract level so
 that it can deal with both physical as well virtual devices. If we can
 agree to that, I believe we can have a solid solution.


Yes, I do think that the same API can target both: a 100% software solution
for L2GW as well as one that may want to rely on hardware support, in the
same spirit of any other Neutron API. I made the same point on spec [1].




 Having said that I would like to request the community to review the
 proposal submitted by Maruti in [1] and post comments on the spec with the
 intent to get a closure on the API. I see lots of good comments already on
 the spec. Lets get this done so that we can have a workable (even if not
 perfect) version of API in Kilo cycle. Something which we can all start to
 play with. We can always iterate over it, and make change as we get more
 and more use cases covered.


So far it seems like proposal [1] that has the most momentum. I'd like to
consider [3] as one potential software implementation of the proposed API.
As I mentioned earlier, I'd rather start with a well defined problem, free
of any potential confusion or open to subjective interpretation; a loose
API suffers from both pitfalls, hence my suggestion to go with API proposed
in [1].



 Make sense?

 cheers..
 -Sukhdev


 On Tue, Nov 18, 2014 at 6:44 PM, Armando M. arma...@gmail.com wrote:

 Hi,

 On 18 November 2014 16:22, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Sorry I'm a bit late to this, but that's what you get from being on
 holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
 I swear I'll get to them.)


 Ah! I hope it was good at least :)



 On 17 November 2014 01:13, Mathieu Rohon mathieu.ro...@gmail.com
 wrote:

 Hi

 On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote:
  Last Friday I recall we had two discussions around this topic. One in
 the
  morning, which I think led to Maruti to push [1]. The way I
 understood [1]
  was that it is an attempt at unifying [2] and [3], by choosing the API
  approach of one and the architectural approach of the other.
 
  [1] https://review.openstack.org/#/c/134179/
  [2] https://review.openstack.org/#/c/100278/
  [3] https://review.openstack.org/#/c/93613/
 
  Then there was another discussion in the afternoon, but I am not 100%
 of the
  outcome.

 Me neither, that's why I'd like ian, who led this discussion, to sum
 up the outcome from its point of view.


 So, the gist of what I said is that we have three, independent, use
 cases:

 - connecting two VMs that like to tag packets to each other (VLAN clean
 networks)
 - connecting many networks to a single VM (trunking ports)
 - connecting the outside world to a set of virtual networks

 We're discussing that last use case here.  The point I was 

Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Armando M.
On 20 November 2014 02:08, Salvatore Orlando sorla...@nicira.com wrote:



 On 20 November 2014 02:19, Sukhdev Kapur sukhdevka...@gmail.com wrote:

 Folks,

 Like Ian, I am jumping in this very late as well - as I decided to travel
 Europe after the summit, just returned back and  catching up :-):-)

 I have noticed that this thread has gotten fairly convoluted and painful
 to read.

 I think Armando summed it up well in the beginning of the thread. There
 are basically three written proposals (listed in Armando's email - I pasted
 them again here).

 [1] https://review.openstack.org/#/c/134179/
 [2] https://review.openstack.org/#/c/100278/
 [3] https://review.openstack.org/#/c/93613/


 In this thread I have seen other specs being mentioned as related.
 Namely:
 1) https://review.openstack.org/#/c/93329/ (BGP VPN)
 2) https://review.openstack.org/#/c/101043/ (MPLS vpn)
 3) https://review.openstack.org/#/c/87825/ (external device integration)
 Note that I am not saying they should be put as well in the mix. I'm only
 listing them here as a recap.
 There are probably other ideas not yet put in the form of a concrete
 specification. In order to avoid further confusion, I would just blindly
 ignore proposals which do not exist in the form a specification.


Ah, I know what you're trying to do here...I am not gonna fall into your
trap :)





 On this thread I see that the authors of first two proposals have already
 agreed to consolidate and work together. This leaves with two proposals.
 Both Ian and I were involved with the third proposal [3] and have
 reasonable idea about it. IMO, the use cases addressed by the third
 proposal are very similar to use cases addressed by proposal [1] and [2]. I
 can volunteer to  follow up with Racha and Stephen from Ericsson to see if
 their use case will be covered with the new combined proposal. If yes, we
 have one converged proposal. If no, then we modify the proposal to
 accommodate their use case as well. Regardless, I will ask them to review
 and post their comments on [1].


 One thing that I've noticed in the past is that contributors are led to
 think that the owner of the specification will also be the lead for the
 subsequent work. There nothing farther from truth. Sometimes I write specs
 with the exact intent of having somebody else lead the implementation. So
 don't feel bad to abandon a spec if you realize your use cases can be
 completely included in another specification.


Stating the obvious, but yes point taken!





 Having said that, this covers what we discussed during the morning
 session on Friday in Paris. Now, comes the second part which Ian brought up
 in the afternoon session on Friday.
 My initial reaction was, when heard his use case, that this new
 proposal/API should cover that use case as well (I am being bit optimistic
 here :-)). If not, rather than going into the nitty gritty details of the
 use case, let's see what modification is required to the proposed API to
 accommodate Ian's use case and adjust it accordingly.


 Unfortunately I did not attend that discussion. Possibly 90% of the people
 reading this thread did not attend it. It would be nice if Ian or somebody
 else posted a write-up, adding more details to what has already been shared
 in this thread. If you've already done so please share a link as my
 google-fu is not that good these days.



 Now, the last point (already brought up by Salvatore as well as Armando)
 - the abstraction of the API, so that it meets the Neutron API criteria. I
 think this is the critical piece. I also believe the API proposed by [1] is
 very close. We should clean it up and take out references to ToR's or
 physical vs virtual devices. The API should work at an abstract level so
 that it can deal with both physical as well virtual devices. If we can
 agree to that, I believe we can have a solid solution.


 Having said that I would like to request the community to review the
 proposal submitted by Maruti in [1] and post comments on the spec with the
 intent to get a closure on the API. I see lots of good comments already on
 the spec. Lets get this done so that we can have a workable (even if not
 perfect) version of API in Kilo cycle. Something which we can all start to
 play with. We can always iterate over it, and make change as we get more
 and more use cases covered.


 Iterate is the key here I believe. As long as we pretend to achieve the
 perfect API at the first attempt we'll just keep having this discussion. I
 think the first time a L2 GW API was proposed, it was for Grizzly.
 For instance, it might relatively easy to define an API which can handle
 both physical and virtual devices. The user workflow for a ToR terminated
 L2 GW is different from the workflow for a virtual appliance owned by
 tenant, and this will obviously reflected in the API. On the other hand, a
 BGP VPN might be a completely different use case, and therefore have a
 completely different set of APIs.


+1



 

Re: [openstack-dev] Where should Schema files live?

2014-11-20 Thread Sandy Walsh
From: Doug Hellmann [d...@doughellmann.com] Thursday, November 20, 2014 3:51 PM
   On Nov 20, 2014, at 8:12 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
Hey y'all,
   
To avoid cross-posting, please inform your -infra / -operations buddies 
about this post.
   
We've just started thinking about where notification schema files should 
live and how they should be deployed. Kind of a tricky problem.  We could 
really use your input on this problem ...
   
The assumptions:
1. Schema files will be text files. They'll live in their own git repo 
(stackforge for now, ideally oslo eventually).
   Why wouldn’t they live in the repo of the application that generates the 
notification, like we do with the database schema and APIs defined by those 
apps?

That would mean downstream consumers (potentially in different languages) would 
need to pull all repos and extract just the schema parts. A separate repo would 
make it more accessible. 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-20 Thread Doug Hellmann

On Nov 20, 2014, at 3:40 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 From: Doug Hellmann [d...@doughellmann.com] Thursday, November 20, 2014 3:51 
 PM
 On Nov 20, 2014, at 8:12 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 Hey y'all,
 
 To avoid cross-posting, please inform your -infra / -operations buddies 
 about this post.
 
 We've just started thinking about where notification schema files should 
 live and how they should be deployed. Kind of a tricky problem.  We could 
 really use your input on this problem ...
 
 The assumptions:
 1. Schema files will be text files. They'll live in their own git repo 
 (stackforge for now, ideally oslo eventually).
 Why wouldn’t they live in the repo of the application that generates the 
 notification, like we do with the database schema and APIs defined by those 
 apps?
 
 That would mean downstream consumers (potentially in different languages) 
 would need to pull all repos and extract just the schema parts. A separate 
 repo would make it more accessible. 

OK, fair. Could we address that by publishing the schemas for an app in a tar 
ball using a post merge job?

Doug

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-20 Thread Doug Wiegley

On 11/19/14, 5:02 PM, Kyle Mestery mest...@mestery.com wrote:

On Tue, Nov 18, 2014 at 5:32 PM, Doug Wiegley do...@a10networks.com
wrote:
 Hi,

 so the specs repository would continue to be shared during the Kilo
cycle.

 One of the reasons to split is that these two teams have different
 priorities and velocities.  Wouldn’t that be easier to track/manage as
 separate launchpad projects and specs repos, irrespective of who is
 approving them?

My thinking here is that the specs repo is shared (at least initialy)
because the projects are under one umbrella, and we want them to work
closely together initially. This keeps everyone in the loop. Once
things mature, we can look at reevaluating this. Does that make sense?

Good by me.

Thanks,
Doug



Thanks,
Kyle

 Thanks,
 doug



 On Nov 18, 2014, at 10:31 PM, Mark McClain m...@mcclain.xyz wrote:

 All-

 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well
in the
 early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started
building
 out layers 4 through 7.  Initially, we thought that development would
float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4
through 7.
 In the last few cycles, we’ve also discovered that these concentrations
have
 different velocities and a single core team forces one to match the
other to
 the detriment of the one forced to slow down.

 Going forward we want to divide the Neutron repository into two separate
 repositories lead by a common Networking PTL.  The current mission of
the
 program will remain unchanged [1].  The split would be as follows:

 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer
2 and
 layer 3 services.

 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advance service library provides controllers that can be
configured to
 manage the abstractions for layer 4 through 7 services.

 Mechanics of the split:
 - Both repositories are members of the same program, so the specs
repository
 would continue to be shared during the Kilo cycle.  The PTL and the
drivers
 team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the
Infra
 and Networking teams). The timing is designed to enable the proposed
REST
 changes to land around the time of the December development sprint.
 - The core team for each repository will be determined and proposed by
Kyle
 Mestery for approval by the current core team.
 - The Neutron Server and the Neutron Adv Services Library would be
co-gated
 to ensure that incompatibilities are not introduced.
 - The Advance Service Library would be an optional dependency of
Neutron, so
 integrated cross-project checks would not be required to enable it
during
 testing.
 - The split should not adversely impact operators and the Networking
program
 should maintain standard OpenStack compatibility and deprecation cycles.

 This proposal to divide into two repositories achieved a strong
consensus at
 the recent Paris Design Summit and it does not conflict with the current
 governance model or any proposals circulating as part of the ‘Big Tent’
 discussion.

 Kyle and mark

 [1]
 
https://git.openstack.org/cgit/openstack/governance/plain/reference/progr
ams.yaml
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-20 Thread Eoghan Glynn

Thanks for raising this Sandy,

Some questions/observations inline.

 Hey y'all,
 
 To avoid cross-posting, please inform your -infra / -operations buddies about 
 this post. 
 
 We've just started thinking about where notification schema files should live 
 and how they should be deployed. Kind of a tricky problem.  We could really 
 use your input on this problem ...
 
 The assumptions:
 1. Schema files will be text files. They'll live in their own git repo 
 (stackforge for now, ideally oslo eventually). 
 2. Unit tests will need access to these files for local dev
 3. Gating tests will need access to these files for integration tests
 4. Many different services are going to want to access these files during 
 staging and production. 
 5. There are going to be many different versions of these files. There are 
 going to be a lot of schema updates. 
 
 Some problems / options:
 a. Unlike Python, there is no simple pip install for text files. No version 
 control per se. Basically whatever we pull from the repo. The problem with a 
 git clone is we need to tweak config files to point to a directory and that's 
 a pain for gating tests and CD. Could we assume a symlink to some well-known 
 location?
 a': I suppose we could make a python installer for them, but that's a 
 pain for other language consumers.

Would it be unfair to push that burden onto the writers of clients
in other languages?

i.e. OpenStack, being largely python-centric, would take responsibility
for both:

  1. Maintaining the text versions of the schema in-tree (e.g. as json)

and:

  2. Producing a python-specific installer based on #1

whereas, the first Java-based consumer of these schema would take
#1 and package it up in their native format, i.e. as a jar or
OSGi bundle.

 b. In production, each openstack service could expose the schema files via 
 their REST API, but that doesn't help gating tests or unit tests. Also, this 
 means every service will need to support exposing schema files. Big 
 coordination problem.

I kind of liked this schemaURL endpoint idea when it was first
mooted at summit.

The attraction for me was that it would allow the consumer of the
notifications always have access to the actual version of schema
currently used on the emitter side, independent of the (possibly
out-of-date) version of the schema that the consumer has itself
installed locally via a static dependency.

However IIRC there were also concerns expressed about the churn
during some future rolling upgrades - i.e. if some instances of
the nova-api schemaURL endpoint are still serving out the old
schema, after others in the same deployment have already been
updated to emit the new notification version.

 c. In production, We could add an endpoint to the Keystone Service Catalog to 
 each schema file. This could come from a separate metadata-like service. 
 Again, yet-another-service to deploy and make highly available. 

Also to {puppetize|chef|ansible|...}-ize.

Yeah, agreed, we probably don't want to do down that road.

 d. Should we make separate distro packages? Install to a well known location 
 all the time? This would work for local dev and integration testing and we 
 could fall back on B and C for production distribution. Of course, this will 
 likely require people to add a new distro repo. Is that a concern?

Quick clarification ... when you say distro packages, do you mean 
Linux-distro-specific package formats such as .rpm or .deb?

Cheers,
Eoghan
 
 Personally, I'm leaning towards option D but I'm not sure what the 
 implications are. 
 
 We're early in thinking about these problems, but would like to start the 
 conversation now to get your opinions. 
 
 Look forward to your feedback.
 
 Thanks
 -Sandy
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] Stable Compat jobs on Oslo Libraries

2014-11-20 Thread Matthew Treinish

Hi everyone,

Earlier today https://review.openstack.org/136017 merged which adds stable
compat jobs to most of the oslo libraries. This was done in reaction to 2
separate incidents in the past 2 days where both oslo.vmware and taskflow landed
changes that added new requirements which weren't in stable/icehouse global
requirements. This broke all the stable/icehouse dsvm jobs, which basically
blocked stable backports to icehouse, juno as well as all tempest and
devstack-gate changes. (among other things)

So in the short-term for future changes that add new requirements the
requirements have to be added to stable global requirements before they change
will be able to land on master. This has been the policy for all the libraries
that installed from git on stable branches (the client libs have stable compat
jobs for this reason) but was just not being enforced on oslo libs prior to
136017.


-Matt Treinish


pgpwXxuHVQZ3X.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-20 Thread Igor Kalnitsky
+1 for Dima's suggestion. We need to stop reinventing the wheel. There
are a lot tools around us, so let's pick one of them and will use it.

On Thu, Nov 20, 2014 at 10:13 PM, Dmitriy Shulyak dshul...@mirantis.com wrote:
 Guys, maybe we can use existing software, for example Sensu [1]?
 Maybe i am wrong, but i dont like the idea to start writing our small
 monitoring applications..
 Also something well designed and extendable can be reused for statistic
 collector


 1. https://github.com/sensu

 On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala tnapier...@mirantis.com
 wrote:


 On 06 Nov 2014, at 12:20, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

  I didn't mean a robust monitoring system, just something simpler.
  Notifications is a good idea for FuelWeb.

 I’m all for that, but if we add it, we need to document ways to clean up
 space.
 We could also add some kind of simple job to remove rotated logs, obsolete
 spanshots, etc., but this is out of scope for 6.0 I guess.

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-20 Thread Joe Gordon
On Thu, Nov 20, 2014 at 9:49 AM, Sahid Orentino Ferdjaoui 
sahid.ferdja...@redhat.com wrote:

 This is something we can call nitpiking or low priority.


This all seems like nitpicking for very little value. I think there are
better things we can be focusing on instead of thinking of new ways to nit
pick. So I am -1 on all of these.



 I would like we introduce 3 new hacking rules to enforce the cohesion
 and consistency in the base code.


 Using boolean assertions
 

 Some tests are written with equality assertions to validate boolean
 conditions which is something not clean:

   assertFalse([]) asserts an empty list
   assertEqual(False, []) asserts an empty list is equal to the boolean
   value False which is something not correct.

 Some changes has been started here but still needs to be appreciated
 by community:

  * https://review.openstack.org/#/c/133441/
  * https://review.openstack.org/#/c/119366/


 Using same order of arguments in equality assertions
 

 Most of the code is written with assertEqual(Expected, Observed) but
 some part are still using the opposite. Even if they provide any real
 optimisation using the same convention help reviewing and keep a
 better consistency in the code.

   assertEqual(Expected, Observed) OK
   assertEqual(Observed, Expected) KO

 A change has been started here but still needs to be appreciated by
 community:

  * https://review.openstack.org/#/c/119366/


 Using LOG.warn instead of LOG.warning
 -

 We can see many time reviewers -1ed a patch to ask developer to use
 'warn' instead of 'warning'. This will provide no optimisation
 but let's finally have something clear about what we have to use.

   LOG.warning: 74
   LOG.warn:319

 We probably want to use 'warn'

 Nothing has been started from what I know.


 Thanks,
 s.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Eoghan Glynn

 Aloha guardians of the API!
 
 I haven recently* reviewed a spec for neutron [1] proposing a distinct URI
 for returning resource count on list operations.
 This proposal is for selected neutron resources, but I believe the topic is
 general enough to require a guideline for the API working group. Your
 advice is therefore extremely valuable.
 
 In a nutshell the proposal is to retrieve resource count in the following
 way:
 GET /prefix/resource_name/count
 
 In my limited experience with RESTful APIs, I've never encountered one that
 does counting in this way. This obviously does not mean it's a bad idea.
 I think it's not great from a usability perspective to require two distinct
 URIs to fetch the first page and then the total number of elements. I
 reckon the first response page for a list operation might include also the
 total count. For example:
 
 {'resources': [{meh}, {meh}, {meh_again}],
  'resource_count': 55
  link_to_next_page}

How about allowing the caller to specify what level of detail
they require via the Accept header?

▶ GET /prefix/resource_name
  Accept: application/json; detail=concise

◀ HTTP/1.1 200 OK
  Content-Type: application/json
  {'resource_count': 55,
   other_collection-level_properties}

▶ GET /prefix/resource_name
  Accept: application/json

◀ HTTP/1.1 200 OK
  Content-Type: application/json
  {'resource_count': 55,
   other_collection-level_properties,
   'resources': [{meh}, {meh}, {meh_again}],
   link_to_next_page}

Same API, but the caller can choose not to receive the embedded
resource representations if they're only interested in the
collection-level properties such as the count (if the collection
is indeed countable).

Cheers,
Eoghan
 
 I am however completely open to consider other alternatives.
 What is your opinion on this matter?
 
 Regards,
 Salvatore
 
 
 * it's been 10 days now
 
 [1] https://review.openstack.org/#/c/102199/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-20 Thread Lukasz Oles
+1 also. There are so many monitoring tools, but maybe not something
written in ruby ;)

On Thu, Nov 20, 2014 at 10:40 PM, Igor Kalnitsky
ikalnit...@mirantis.com wrote:
 +1 for Dima's suggestion. We need to stop reinventing the wheel. There
 are a lot tools around us, so let's pick one of them and will use it.

 On Thu, Nov 20, 2014 at 10:13 PM, Dmitriy Shulyak dshul...@mirantis.com 
 wrote:
 Guys, maybe we can use existing software, for example Sensu [1]?
 Maybe i am wrong, but i dont like the idea to start writing our small
 monitoring applications..
 Also something well designed and extendable can be reused for statistic
 collector


 1. https://github.com/sensu

 On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala tnapier...@mirantis.com
 wrote:


 On 06 Nov 2014, at 12:20, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

  I didn't mean a robust monitoring system, just something simpler.
  Notifications is a good idea for FuelWeb.

 I’m all for that, but if we add it, we need to document ways to clean up
 space.
 We could also add some kind of simple job to remove rotated logs, obsolete
 spanshots, etc., but this is out of scope for 6.0 I guess.

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Łukasz Oleś

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-20 Thread Sergii Golovatiuk
I would rather compare features than ruby vs python. 



Best Regards,
Sergii Golovatiuk

 On 20 Nov 2014, at 23:20, Lukasz Oles lo...@mirantis.com wrote:
 
 +1 also. There are so many monitoring tools, but maybe not something
 written in ruby ;)
 
 On Thu, Nov 20, 2014 at 10:40 PM, Igor Kalnitsky
 ikalnit...@mirantis.com wrote:
 +1 for Dima's suggestion. We need to stop reinventing the wheel. There
 are a lot tools around us, so let's pick one of them and will use it.
 
 On Thu, Nov 20, 2014 at 10:13 PM, Dmitriy Shulyak dshul...@mirantis.com 
 wrote:
 Guys, maybe we can use existing software, for example Sensu [1]?
 Maybe i am wrong, but i dont like the idea to start writing our small
 monitoring applications..
 Also something well designed and extendable can be reused for statistic
 collector
 
 
 1. https://github.com/sensu
 
 On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala tnapier...@mirantis.com
 wrote:
 
 
 On 06 Nov 2014, at 12:20, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:
 
 I didn't mean a robust monitoring system, just something simpler.
 Notifications is a good idea for FuelWeb.
 
 I’m all for that, but if we add it, we need to document ways to clean up
 space.
 We could also add some kind of simple job to remove rotated logs, obsolete
 spanshots, etc., but this is out of scope for 6.0 I guess.
 
 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com
 
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Łukasz Oleś
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Stable Compat jobs on Oslo Libraries

2014-11-20 Thread Joshua Harlow
Thanks, hopefully this helps in the short-term before we have the whole 
pinning situation worked out and implemented and such (which I believe 
is underway). These jobs will help make sure myself (and others) are 
aware of the things happening on stable icehouse and such; prior it 
wasn't so easily seen.


Matthew Treinish wrote:

Hi everyone,

Earlier today https://review.openstack.org/136017 merged which adds stable
compat jobs to most of the oslo libraries. This was done in reaction to 2
separate incidents in the past 2 days where both oslo.vmware and taskflow landed
changes that added new requirements which weren't in stable/icehouse global
requirements. This broke all the stable/icehouse dsvm jobs, which basically
blocked stable backports to icehouse, juno as well as all tempest and
devstack-gate changes. (among other things)

So in the short-term for future changes that add new requirements the
requirements have to be added to stable global requirements before they change
will be able to land on master. This has been the policy for all the libraries
that installed from git on stable branches (the client libs have stable compat
jobs for this reason) but was just not being enforced on oslo libs prior to
136017.


-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally] Question on periodic task

2014-11-20 Thread Ajay Kalambur (akalambu)
Hi
I have a question on
/rally/openstack/common/periodic_task.py

It looks like if I have a method decorated with @periodic_task my method would 
get scheduled in separate process every N seconds
Now let us say we have a scenario and this periodic_task how does it work when 
concurrency=2 for instance

Is the periodic task also scheduled in 2 separate process. I actually want only 
one periodic task process irrespective of concurrency count in scenario
Also as a scenario developer how can I pass arguments into the periodic task


Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-20 Thread Lukasz Oles
On Thu, Nov 20, 2014 at 11:28 PM, Sergii Golovatiuk
sgolovat...@mirantis.com wrote:
 I would rather compare features than ruby vs python.

It make sense only as long as you don't need to debug it




 Best Regards,
 Sergii Golovatiuk

 On 20 Nov 2014, at 23:20, Lukasz Oles lo...@mirantis.com wrote:

 +1 also. There are so many monitoring tools, but maybe not something
 written in ruby ;)

 On Thu, Nov 20, 2014 at 10:40 PM, Igor Kalnitsky
 ikalnit...@mirantis.com wrote:
 +1 for Dima's suggestion. We need to stop reinventing the wheel. There
 are a lot tools around us, so let's pick one of them and will use it.

 On Thu, Nov 20, 2014 at 10:13 PM, Dmitriy Shulyak dshul...@mirantis.com 
 wrote:
 Guys, maybe we can use existing software, for example Sensu [1]?
 Maybe i am wrong, but i dont like the idea to start writing our small
 monitoring applications..
 Also something well designed and extendable can be reused for statistic
 collector


 1. https://github.com/sensu

 On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala 
 tnapier...@mirantis.com
 wrote:


 On 06 Nov 2014, at 12:20, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

 I didn't mean a robust monitoring system, just something simpler.
 Notifications is a good idea for FuelWeb.

 I’m all for that, but if we add it, we need to document ways to clean up
 space.
 We could also add some kind of simple job to remove rotated logs, obsolete
 spanshots, etc., but this is out of scope for 6.0 I guess.

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Łukasz Oleś

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Łukasz Oleś

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone]Flushing trusts

2014-11-20 Thread Kodera, Yasuo
Hi,

We can use keystone-manage token_flush to DELETE db-records of expired tokens.

Similarly, expired or deleted trust should be flushed to avoid wasting db.
But I don't know the way to do so.
Is there any tools or patches?

If there are some reasons that these records must not be deleted easily, tell 
me please.


Regards,

Yasuo Kodera


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] interests

2014-11-20 Thread Tracy Jones
If you have not updated the interest sheet (or if your interest has changed) 
please do so TODAY.  Let your top 3 interests on the interest tab of the 
spreadsheet.


https://docs.google.com/a/vmware.com/spreadsheets/d/1w01Z5J_XvTjbWBvPJ3SgzaQ9nXzOxSVVTGDjQtxAMZA/edit#gid=0


Tracy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Order of network interfaces for bootstrap nodes

2014-11-20 Thread Andrew Woodward
In order for this to occur, this means that the node has to be
bootstrapped and discover to nailgun, added to a cluster, and then
bootstrap again (reboot) and have the agent update with a different
nic order?

i think the issue will only occur when networks are mapped to the
interfaces, in this case the root cause is that the ethX name is used
as the key attribute for updates, but really the mac should be the
real key. If we change this behavior, then we should be able to have
it update properly regardless of the current interface name.

On Thu, Nov 20, 2014 at 12:01 PM, Dmitriy Shulyak dshul...@mirantis.com wrote:
 Hi folks,

 There was interesting research today on random nics ordering for nodes in
 bootstrap stage. And in my opinion it requires separate thread...
 I will try to describe what the problem is and several ways to solve it.
 Maybe i am missing the simple way, if you see it - please participate.
 Link to LP bug: https://bugs.launchpad.net/fuel/+bug/1394466

 When a node is booted first time it registers its interfaces in nailgun, see
 sample of data (only related to discussion parts):
 - name: eth0
   ip: 10.0.0.3/24
   mac: 00:00:03
 - name: eth1
   ip: None
   mac: 00:00:04
 * eth0 is admin network interface which was used for initial pxe boot

 We have networks, for simplicity lets assume there is 2:
  - admin
  - public
 When the node is added to cluster, in general you will see next schema:
 - name: eth0
   ip: 10.0.0.3/24
   mac: 00:00:03
   networks:
 - admin
 - public
 - name: eth1
   ip: None
   mac: 00:00:04

 At this stage node is still using default system with bootstrap profile, so
 there is no custom system with udev rules. And on next reboot there is no
 way to guarantee that network cards will be discovered by kernel in same
 order. If network cards is discovered in order that is diffrent from
 original and nics configuration is updated, it is possible to end up with:
 - name: eth0
   ip: None
   mac: 00:00:04
   networks:
 - admin
 - public
 - name: eth1
   mac: 00:00:03
   ip: 10.0.0.3/24
 Here you can see that networks is left connected to eth0 (in db). And
 ofcourse this schema doesnt reflect physical infrastructure. I hope it is
 clear now what is the problem.
 If you want to investigate it yourself, please find db dump in snapshot
 attached to the bug, you will be able to find described here case.
 What happens next:
 1. netcfg/choose_interface for kernel is misconfigured, and in my example it
 will be 00:00:04, but should be 00:00:03
 2. network configuration for l23network will be simply corrupted

 So - possible solutions:
 1. Reflect node interfaces ordering, with networks reassignment - Hard and
 hackish
 2. Do not update any interfaces info if networks assigned to them, then udev
 rules will be applied and nics will be reordered into original state - i
 would say easy and reliable solution
 3. Create cobbler system when node is booted first time, and add udev rules
 - it looks to me like proper solution, but requires design

 Please share your thoughts/ideas, afaik this issue is not rare on scale
 deployments.
 Thank you

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrew
Mirantis
Ceph community

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] interests

2014-11-20 Thread Weijin Wang
Wrong mailing list?

Weijin

From: Tracy Jones tjo...@vmware.commailto:tjo...@vmware.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, November 20, 2014 at 4:26 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] interests

If you have not updated the interest sheet (or if your interest has changed) 
please do so TODAY.  Let your top 3 interests on the interest tab of the 
spreadsheet.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] interests

2014-11-20 Thread Robert Collins
This seems to be a private doc... if its for public consumption
perhaps change the ACLS ?

-Rob

On 21 November 2014 13:26, Tracy Jones tjo...@vmware.com wrote:
 If you have not updated the interest sheet (or if your interest has changed)
 please do so TODAY.  Let your top 3 interests on the interest tab of the
 spreadsheet.


 https://docs.google.com/a/vmware.com/spreadsheets/d/1w01Z5J_XvTjbWBvPJ3SgzaQ9nXzOxSVVTGDjQtxAMZA/edit#gid=0


 Tracy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Morgan Fainberg
My primary concern (and use of the numbers) was to make sure it isn’t expected 
on all “list” operations, as (albeit specifically for Keystone) some of the 
projects + backends can’t really support it. If we really are tied to SQL-isms 
(which is a fine approach) we can make these consistent where it can be 
supported.

So, tl;dr, this is fine as long as we don’t start expecting a “count” 
capability for all list-like-operation APIs.

—Morgan

 On Nov 20, 2014, at 8:48 AM, Sean Dague s...@dague.net wrote:
 
 I'm looking at the Nova spec, and it seems very taylored to a specific
 GUI. I'm also not sure that 17128 errors is more useful than 500+ errors
 when presenting to the user (the following in my twitter stream made me
 think about that this morning -
 https://twitter.com/NINK/status/535299029383380992)
 
 500+ also better describes the significant figures we're talking about here.
 
   -Sean
 
 On 11/20/2014 11:28 AM, Morgan Fainberg wrote:
 The only thing I want to caution against is making a SQL-specific
 choice. In the case of some other backends, it may not be possible (for
 an extremely large dataset) to get a full count, where SQL does this
 fairly elegantly. For example, LDAP (in some cases) may have an
 administrative limit that will say that no more than 10,000 entries
 would be returned; likely you’re going to have an issue, since you need
 to issue the query and see how many things match, if you hit the overall
 limit you’ll get the same count every time (but possibly a different
 dataset).
 
 I want to be very careful that we’re not recommending functionality as a
 baseline that should be used as a pattern across all similar APIs,
 especially since we have some backends/storage systems that can’t
 elegantly always support it.
 
 Personally, I like Gerrit’s model (as Sean described) - with the above
 caveat that not all backends support this type of count.
 
 Cheers,
 Morgan
 
 On Nov 20, 2014, at 8:04 AM, Salvatore Orlando sorla...@nicira.com
 mailto:sorla...@nicira.com wrote:
 
 The Nova proposal appears to be identical to neutron's, at least from
 a consumer perspective.
 
 If I were to pick a winner, I'd follow Sean's advice regarding the
 'more' attribute in responses, and put the total number of resources
 there; I would also take Jay's advice of including the total only if
 requested with a query param. In this way a user can retrieve the
 total number of items regardless of the current pagination index (in
 my first post I suggested the total number should be returned only on
 the first page of results).
 
 Therefore one could ask for a total number of resources with something
 like the following:
 
 GET /some_resources?include_total=1
 
 and obtain a response like the following:
 
 {'resources': [{meh}, {meh}, {meh_again}],
  'something': {
   '_links': {'prev': ..., 'next': ...},
   'total': agazillion}
 }
 
 where the exact structure and naming of 'something' depends on the
 outcome of the discussion at [1]
 
 Salvatore
 
 [1] 
 https://review.openstack.org/#/c/133660/7/guidelines/representation_structure.rst
 
 On 20 November 2014 15:24, Christopher Yeoh cbky...@gmail.com
 mailto:cbky...@gmail.com wrote:
 
On Thu, 20 Nov 2014 14:47:16 +0100
Salvatore Orlando sorla...@nicira.com
mailto:sorla...@nicira.com wrote:
 
 Aloha guardians of the API!
 
 I haven recently* reviewed a spec for neutron [1] proposing a
 distinct URI for returning resource count on list operations.
 This proposal is for selected neutron resources, but I believe the
 topic is general enough to require a guideline for the API working
 group. Your advice is therefore extremely valuable.
 
 In a nutshell the proposal is to retrieve resource count in the
 following way:
 GET /prefix/resource_name/count
 
 In my limited experience with RESTful APIs, I've never encountered
 one that does counting in this way. This obviously does not mean
it's
 a bad idea. I think it's not great from a usability perspective to
 require two distinct URIs to fetch the first page and then the total
 number of elements. I reckon the first response page for a list
 operation might include also the total count. For example:
 
 {'resources': [{meh}, {meh}, {meh_again}],
 'resource_count': 55
 link_to_next_page}
 
 I am however completely open to consider other alternatives.
 What is your opinion on this matter?
 
FWIW there is a nova spec proposed for counting resources as
well (I think it might have been previously approved for Juno).
 
https://review.openstack.org/#/c/134279/
 
I haven't compared the two, but I can't think of a reason we'd
need to be any different between projects here.
 
Regards,
 
Chris
 
 
 Regards,
 Salvatore
 
 
 * it's been 10 days now
 
 [1] https://review.openstack.org/#/c/102199/
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Fuel] Order of network interfaces for bootstrap nodes

2014-11-20 Thread Ryan Moe
Could this be caused by a case mismatch between the MAC address as it
exists in the database and the MAC that comes from the agent?

When the interfaces are updated with data from the agent we attempt to
match the MAC to an existing interface (
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/network/manager.py#L682-L690).
If that doesn't work we attempt to match by name. Looking at the data that
comes from the agent the MAC is always capitalized while in the database
it's lower-case. It seems like checking the MAC will fail and we'll fall
through to matching by name.

If the interfaces haven't been reordered then it doesn't matter whether or
not we match on name or MAC. However, if the order has changed we'll have
an issue. When the interfaces are matched by name they'll be updated with
the agent info. Because we matched by name that will stay the same and
we'll update the MAC instead, which isn't what we want.

e.g.
First boot:
1 | eth0 | 00:aa
2 | eth1  |00:bb

If the interface order is changed we'll have (as sent by the agent):
eth0 (00:BB)
eth1 (00:AA)

Because the MAC case doesn't match we'll end up matching by name. This
means we update the wrong database record. We have:

1 | eth0 | 00:bb
2 | eth1 | 00:aa

Instead of

1 | eth1 | 00:aa
2 | eth0 | 00:bb


On Thu, Nov 20, 2014 at 4:29 PM, Andrew Woodward xar...@gmail.com wrote:

 In order for this to occur, this means that the node has to be
 bootstrapped and discover to nailgun, added to a cluster, and then
 bootstrap again (reboot) and have the agent update with a different
 nic order?

 i think the issue will only occur when networks are mapped to the
 interfaces, in this case the root cause is that the ethX name is used
 as the key attribute for updates, but really the mac should be the
 real key. If we change this behavior, then we should be able to have
 it update properly regardless of the current interface name.

 On Thu, Nov 20, 2014 at 12:01 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:
  Hi folks,
 
  There was interesting research today on random nics ordering for nodes in
  bootstrap stage. And in my opinion it requires separate thread...
  I will try to describe what the problem is and several ways to solve it.
  Maybe i am missing the simple way, if you see it - please participate.
  Link to LP bug: https://bugs.launchpad.net/fuel/+bug/1394466
 
  When a node is booted first time it registers its interfaces in nailgun,
 see
  sample of data (only related to discussion parts):
  - name: eth0
ip: 10.0.0.3/24
mac: 00:00:03
  - name: eth1
ip: None
mac: 00:00:04
  * eth0 is admin network interface which was used for initial pxe boot
 
  We have networks, for simplicity lets assume there is 2:
   - admin
   - public
  When the node is added to cluster, in general you will see next schema:
  - name: eth0
ip: 10.0.0.3/24
mac: 00:00:03
networks:
  - admin
  - public
  - name: eth1
ip: None
mac: 00:00:04
 
  At this stage node is still using default system with bootstrap profile,
 so
  there is no custom system with udev rules. And on next reboot there is no
  way to guarantee that network cards will be discovered by kernel in same
  order. If network cards is discovered in order that is diffrent from
  original and nics configuration is updated, it is possible to end up
 with:
  - name: eth0
ip: None
mac: 00:00:04
networks:
  - admin
  - public
  - name: eth1
mac: 00:00:03
ip: 10.0.0.3/24
  Here you can see that networks is left connected to eth0 (in db). And
  ofcourse this schema doesnt reflect physical infrastructure. I hope it is
  clear now what is the problem.
  If you want to investigate it yourself, please find db dump in snapshot
  attached to the bug, you will be able to find described here case.
  What happens next:
  1. netcfg/choose_interface for kernel is misconfigured, and in my
 example it
  will be 00:00:04, but should be 00:00:03
  2. network configuration for l23network will be simply corrupted
 
  So - possible solutions:
  1. Reflect node interfaces ordering, with networks reassignment - Hard
 and
  hackish
  2. Do not update any interfaces info if networks assigned to them, then
 udev
  rules will be applied and nics will be reordered into original state - i
  would say easy and reliable solution
  3. Create cobbler system when node is booted first time, and add udev
 rules
  - it looks to me like proper solution, but requires design
 
  Please share your thoughts/ideas, afaik this issue is not rare on scale
  deployments.
  Thank you
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-20 Thread Richard Jones
On Fri Nov 21 2014 at 4:06:51 AM Thomas Goirand z...@debian.org wrote:

 On 11/17/2014 06:54 PM, Radomir Dopieralski wrote:
  - A tool, probably a script, that would help packaging the Bower
  packages into DEB/RPM packages. I suspect the Debian/Fedora packagers
  already have a semi-automatic solution for that.

 Nop. Bower isn't even packaged in Debian. Though I may try to do it
 (when I'm done with other Mirantis stuff like packaging Fuel for
 Debian...).


Just to be clear, it's not bower itself (the command-line tool) that needs
packaging, just the components that bower itself packages.



 On 11/18/2014 07:59 AM, Richard Jones wrote:
  I was envisaging us creating a tool which generates xstatic packages
  from bower packages. I'm not the first to think along these
  lines
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031042.html

 I think that's a very good idea!

  I wrote the tool today, and you can find it here:
 
  https://github.com/r1chardj0n3s/flaming-shame

 AWESOME ! :)
 Then now, everyone is happy. Thank you.


Well, no, but at least it exists ;)



 On 11/18/2014 04:22 PM, Radomir Dopieralski wrote:
  If we use Bower, we don't need to use Xstatic. It would be pure
  overhead. Bower already takes care of tracking releases and versions,
  and of bundling the files. All we need is a simple line in the
  settings.py telling Django where it puts all the files -- we don't
  really need Xstatic just for that. The packagers can then take those
  Bower packages and turn them into system packages, and just add/change
  the paths in settings.py to where they put the files. All in one
  place.

 The issue is that there's often not just a single path, but a full
 directory structure to address. That is easily managed with a Debian
 xstatic package, not sure how it would be with Bower.


I'm not sure what the difference is (unless it's just related to the
Debian-specific historical issue you raise below). xstatic and bower are
remarkably similar a directory to be packaged and some meta-data describing
it.



 On 11/18/2014 06:36 PM, Richard Jones wrote:
  I guess I got the message that turning bower packages into system
  packages was something that the Linux packagers were not keen on.

 What I'm not a fan of, is that we'll have external dependencies being
 bumped all the time, with unexpected consequences. At least, with
 xstatic packages, we control what's going on (though I understand the
 overhead work problem).


The dependencies won't be bumped any faster than reviewers allow, though I
realise that's not necessarily going to make you sleep any easier. Hmm.



 By the way, I went into bower.io as I wanted to have a look. How do I
 download a binary package for let's say jasmin? When searching, it
 just links to github...


Again; bower is not npm! Jasmine is a command-line program which is
packaged by npm. Bower packages bundles of stuff that are included in web
applications. bower itself, a command-line tool, is packaged by npm, but
itself manages other packages which are not command-line things, but just
bundles of css, javascript, images, fonts, etc. that are resources for web
applications to use.



 On 11/19/2014 12:14 AM, Radomir Dopieralski wrote:
  We would replace that with:
 
  STATICFILES_DIRS = [
  ('horizon/lib/angular',
 os.path.join(BASE_DIR, 'bower_modules/angular'),
  ...
  ]

 This would only work if upstream package directory structure is the same
 as the one in the distribution. For historical reason, that's
 unfortunately often not the case (sometimes because we want to keep
 backward compatibility in the distro because of reverse dependencies),
 and just changing the path wont make it work.

 On 11/19/2014 03:43 AM, Richard Jones wrote:
  +1 to all that, except I'd recommend using django-bower to handle the
  static collection stuff. It's not documented but django-bower has a
  setting BOWER_COMPONENTS_ROOT which would make the above transition much
  simpler. You leave it alone for local dev, and packagers setup
  BOWER_COMPONENTS_ROOT to '/usr/lib/javascript/' or wherever.

 s/lib/share/

 However, I'm almost sure that wont be enough to make it work. For
 example, in Debian, we have /usr/share/javascript/angular.js, not just
 /usr/share/javascript/angular. So django-bower would be searching on the
 wrong path.


That is unfortunate. It may be that Debian therefore has to special-case
angular to handle that case.

I think the general idea of following the component pattern set by bower
(separate directories with no risk of conflicts, and using the bower.json
meta-data to allow automatic configuration of the component) is too good to
dismiss though. Far less work for everyone, including packagers.

Perhaps the new packages should have bower in their name?



 On 11/19/2014 12:25 PM, Richard Jones wrote:
  In their view, bower components don't need to be in global-requirements:
 
  - there are no other projects that use bower components, so we 

Re: [openstack-dev] [Rally] Question on periodic task

2014-11-20 Thread Boris Pavlovic
Hi Ajay,


I am not sure why you are looking that part at all.
everything in openstack/common/* is oslo-incubator code.
Actually that method is not used in Rally yet, except Rally as a Service
part that doesn't work yet.

As a scenario developer I think you should be able to find everything here:
https://github.com/stackforge/rally/tree/master/rally/benchmark

So I really don't the case when you need to pass something to periodic
task.. It's not that task


Best regards,
Boris Pavlovic






On Fri, Nov 21, 2014 at 3:36 AM, Ajay Kalambur (akalambu) 
akala...@cisco.com wrote:

  Hi
 I have a question on
 /rally/openstack/common/periodic_task.py

  It looks like if I have a method decorated with @periodic_task my method
 would get scheduled in separate process every N seconds
 Now let us say we have a scenario and this periodic_task how does it work
 when concurrency=2 for instance

  Is the periodic task also scheduled in 2 separate process. I actually
 want only one periodic task process irrespective of concurrency count in
 scenario
 Also as a scenario developer how can I pass arguments into the periodic
 task


  Ajay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] wsme is missing dep on ipaddr

2014-11-20 Thread Matthew Thode
https://github.com/stackforge/wsme/blob/0.6.2/setup.py (also in master)

  File /usr/lib64/python2.7/site-packages/wsme/types.py, line 15, in
module
import ipaddr as ipaddress

https://github.com/stackforge/wsme/blob/master/wsme/types.py#L12-L15

-- 
-- Matthew Thode

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Question on periodic task

2014-11-20 Thread Ajay Kalambur (akalambu)
Ok the action I wanted to perform was for HA I.e execute a scenario like VM 
boot and in parallel in a separate process , ssh and restart controller node 
for instance
I thought periodic task would be useful for that. I guess I need to look at 
some other way of performing this
Ajay


From: Boris Pavlovic bpavlo...@mirantis.commailto:bpavlo...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, November 20, 2014 at 7:03 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Rally] Question on periodic task

Hi Ajay,


I am not sure why you are looking that part at all.
everything in openstack/common/* is oslo-incubator code.
Actually that method is not used in Rally yet, except Rally as a Service part 
that doesn't work yet.

As a scenario developer I think you should be able to find everything here:
https://github.com/stackforge/rally/tree/master/rally/benchmark

So I really don't the case when you need to pass something to periodic task.. 
It's not that task


Best regards,
Boris Pavlovic






On Fri, Nov 21, 2014 at 3:36 AM, Ajay Kalambur (akalambu) 
akala...@cisco.commailto:akala...@cisco.com wrote:
Hi
I have a question on
/rally/openstack/common/periodic_task.py

It looks like if I have a method decorated with @periodic_task my method would 
get scheduled in separate process every N seconds
Now let us say we have a scenario and this periodic_task how does it work when 
concurrency=2 for instance

Is the periodic task also scheduled in 2 separate process. I actually want only 
one periodic task process irrespective of concurrency count in scenario
Also as a scenario developer how can I pass arguments into the periodic task


Ajay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-20 Thread Thomas Goirand
On 11/21/2014 10:52 AM, Richard Jones wrote:
 On 11/18/2014 04:22 PM, Radomir Dopieralski wrote:
  If we use Bower, we don't need to use Xstatic. It would be pure
  overhead. Bower already takes care of tracking releases and versions,
  and of bundling the files. All we need is a simple line in the
  settings.py telling Django where it puts all the files -- we don't
  really need Xstatic just for that. The packagers can then take those
  Bower packages and turn them into system packages, and just add/change
  the paths in settings.py to where they put the files. All in one
  place.
 
 The issue is that there's often not just a single path, but a full
 directory structure to address. That is easily managed with a Debian
 xstatic package, not sure how it would be with Bower.
 
 
 I'm not sure what the difference is (unless it's just related to the
 Debian-specific historical issue you raise below). xstatic and bower are
 remarkably similar a directory to be packaged and some meta-data
 describing it.

Let me explain again then.

Let's say there's python-xstatic-foo, and libjs-foo in Debian. If the
directory structure of libjs-foo is very different from xstatic-foo, I
can address that issue with symlinks within the xstatic package. Just
changing the BASE_DIR may not be enough, as libjs-foo may have files
organized in a very different way than in the upstream package for foo.

 By the way, I went into bower.io http://bower.io as I wanted to
 have a look. How do I
 download a binary package for let's say jasmin? When searching, it
 just links to github...
 
 
 Again; bower is not npm! Jasmine is a command-line program which is
 packaged by npm. Bower packages bundles of stuff that are included in
 web applications. bower itself, a command-line tool, is packaged by npm,
 but itself manages other packages which are not command-line things, but
 just bundles of css, javascript, images, fonts, etc. that are resources
 for web applications to use.

Sure. But how do I download a bower package then?

 This would only work if upstream package directory structure is the same
 as the one in the distribution. For historical reason, that's
 unfortunately often not the case (sometimes because we want to keep
 backward compatibility in the distro because of reverse dependencies),
 and just changing the path wont make it work.
 
 On 11/19/2014 03:43 AM, Richard Jones wrote:
  +1 to all that, except I'd recommend using django-bower to handle the
  static collection stuff. It's not documented but django-bower has a
  setting BOWER_COMPONENTS_ROOT which would make the above
 transition much
  simpler. You leave it alone for local dev, and packagers setup
  BOWER_COMPONENTS_ROOT to '/usr/lib/javascript/' or wherever.
 
 s/lib/share/
 
 However, I'm almost sure that wont be enough to make it work. For
 example, in Debian, we have /usr/share/javascript/angular.__js, not just
 /usr/share/javascript/angular. So django-bower would be searching on the
 wrong path.
 
 
 That is unfortunate. It may be that Debian therefore has to special-case
 angular to handle that case.

I wasn't making a point about Angular here. It's a general issue we have
to take care of addressing correctly.

 I think the general idea of following the component pattern set by bower
 (separate directories with no risk of conflicts, and using the
 bower.json meta-data to allow automatic configuration of the component)
 is too good to dismiss though. Far less work for everyone, including
 packagers.
 
 Perhaps the new packages should have bower in their name?

There's already libjs-angularjs and a bunch of python-xstatic-angular
packages in Debian. I'm not sure that it is necessary to *also* have a
bower-angularjs packages. Why would I need to do that? Just for the
.json file? If that's the case, then couldn't I just add the bower.json
file in the existing libjs-* packages? I'm not sure I get the point here...

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-20 Thread Matsuda, Kenichiro
Hi,

Thank you for the info.

I was able to get replication info easily by swift-recon API.
But, I wasn't able to judge whether no failure from recon info of 
object-replicator.

Could you please advise me for a way of get object-replicator's failure info?

[replication info from recon]
* account
--
# curl http://192.168.1.11:6002/recon/replication/account | python -mjson.tool
{
replication_last: 1416354262.7157061,
replication_stats: {
attempted: 20,
diff: 0,
diff_capped: 0,
empty: 0,
failure: 20,
hashmatch: 0,
no_change: 40,
remote_merge: 0,
remove: 0,
rsync: 0,
start: 1416354240.9761429,
success: 40,
ts_repl: 0
},
replication_time: 21.739563226699829
}
--

* container
--
# curl http://192.168.1.11:6002/recon/replication/container | python -mjson.tool
{
replication_last: 1416353436.9448521,
replication_stats: {
attempted: 13346,
diff: 0,
diff_capped: 0,
empty: 0,
failure: 870,
hashmatch: 0,
no_change: 1908,
remote_merge: 0,
remove: 0,
rsync: 0,
start: 1416349377.3627851,
success: 1908,
ts_repl: 0
},
replication_time: 4059.5820670127869
}
--

* object
--
# curl http://192.168.1.11:6002/recon/replication | python -mjson.tool
{
object_replication_last: 1416334368.60865,
object_replication_time: 2316.5563162644703 
} 
# curl http://192.168.1.11:6002/recon/replication/object | python -mjson.tool
{
object_replication_last: 1416334368.60865,
object_replication_time: 2316.5563162644703 
}
--

Best Regards,
Kenichiro Matsuda.


From: Clay Gerrard [mailto:clay.gerr...@gmail.com] 
Sent: Friday, November 21, 2014 4:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [swift] a way of checking replicate completion on 
swift cluster

You might check if the swift-recon tool has the data you're looking for.  It 
can report the last completed replication pass time across nodes in the ring.

On Thu, Nov 20, 2014 at 1:28 AM, Matsuda, Kenichiro 
matsuda_keni...@jp.fujitsu.com wrote:
Hi,

I would like to know about a way of checking replicate completion on swift 
cluster.
(e.g. after rebalanced Ring)

I found the way of using swift-dispersion-report from Administrator's Guide.
But, this way is not enough, because swift-dispersion-report can't checking
replicate completion for other data that made by not swift-dispersion-populate.

And also, I found the way of using replicator's logs from QA.
But, I would like to more easy way, because check of below logs is very heavy.

  (account/container/object)-replicator * All storage node on swift cluster

Could you please advise me for it?

Findings:
  Administrator's Guide  Cluster Health
    http://docs.openstack.org/developer/swift/admin_guide.html#cluster-health
  how to check replicator work complete
    
https://ask.openstack.org/en/question/18654/how-to-check-replicator-work-complete/

Best Regards,
Kenichiro Matsuda.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-20 Thread Richard Jones
On 21 November 2014 16:12, Thomas Goirand z...@debian.org wrote:

 On 11/21/2014 10:52 AM, Richard Jones wrote:
  On 11/18/2014 04:22 PM, Radomir Dopieralski wrote:
   If we use Bower, we don't need to use Xstatic. It would be pure
   overhead. Bower already takes care of tracking releases and
 versions,
   and of bundling the files. All we need is a simple line in the
   settings.py telling Django where it puts all the files -- we don't
   really need Xstatic just for that. The packagers can then take
 those
   Bower packages and turn them into system packages, and just
 add/change
   the paths in settings.py to where they put the files. All in one
   place.
 
  The issue is that there's often not just a single path, but a full
  directory structure to address. That is easily managed with a Debian
  xstatic package, not sure how it would be with Bower.
 
 
  I'm not sure what the difference is (unless it's just related to the
  Debian-specific historical issue you raise below). xstatic and bower are
  remarkably similar a directory to be packaged and some meta-data
  describing it.

 Let me explain again then.

 Let's say there's python-xstatic-foo, and libjs-foo in Debian. If the
 directory structure of libjs-foo is very different from xstatic-foo, I
 can address that issue with symlinks within the xstatic package. Just
 changing the BASE_DIR may not be enough, as libjs-foo may have files
 organized in a very different way than in the upstream package for foo.


OK, so python-xstatic-foo can depend on libjs-foo and just symlink, fair
enough. I'm still not sure what makes bower unique in this respect,
although it'd be nice to avoid creating additional packages just to symlink
things around for bower, which I think is what you're getting at.


 By the way, I went into bower.io http://bower.io as I wanted to
  have a look. How do I
  download a binary package for let's say jasmin? When searching, it
  just links to github...
 
 
  Again; bower is not npm! Jasmine is a command-line program which is
  packaged by npm. Bower packages bundles of stuff that are included in
  web applications. bower itself, a command-line tool, is packaged by npm,
  but itself manages other packages which are not command-line things, but
  just bundles of css, javascript, images, fonts, etc. that are resources
  for web applications to use.

 Sure. But how do I download a bower package then?


I'm not sure I understand the meaning behind this question. bower install
angular downloads a bower package called angular.


 This would only work if upstream package directory structure is the
 same
  as the one in the distribution. For historical reason, that's
  unfortunately often not the case (sometimes because we want to keep
  backward compatibility in the distro because of reverse
 dependencies),
  and just changing the path wont make it work.
 
  On 11/19/2014 03:43 AM, Richard Jones wrote:
   +1 to all that, except I'd recommend using django-bower to handle
 the
   static collection stuff. It's not documented but django-bower has a
   setting BOWER_COMPONENTS_ROOT which would make the above
  transition much
   simpler. You leave it alone for local dev, and packagers setup
   BOWER_COMPONENTS_ROOT to '/usr/lib/javascript/' or wherever.
 
  s/lib/share/
 
  However, I'm almost sure that wont be enough to make it work. For
  example, in Debian, we have /usr/share/javascript/angular.__js, not
 just
  /usr/share/javascript/angular. So django-bower would be searching on
 the
  wrong path.
 
 
  That is unfortunate. It may be that Debian therefore has to special-case
  angular to handle that case.

 I wasn't making a point about Angular here. It's a general issue we have
 to take care of addressing correctly.

  I think the general idea of following the component pattern set by bower
  (separate directories with no risk of conflicts, and using the
  bower.json meta-data to allow automatic configuration of the component)
  is too good to dismiss though. Far less work for everyone, including
  packagers.
 
  Perhaps the new packages should have bower in their name?

 There's already libjs-angularjs and a bunch of python-xstatic-angular
 packages in Debian. I'm not sure that it is necessary to *also* have a
 bower-angularjs packages. Why would I need to do that? Just for the
 .json file? If that's the case, then couldn't I just add the bower.json
 file in the existing libjs-* packages? I'm not sure I get the point here...


The angular component that bower installs looks like this:

$ ls -CAF bower_components/angular
.bower.json angular-csp.css angular.min.js angular.min.js.map package.json
README.md angular.js angular.min.js.gzip bower.json

The bootstrap component looks like this:

$ ls -CAF bower_components/boot/
.bower.json LICENSE bower.json fonts/ js/ package.json
Gruntfile.js README.md dist/ grunt/ 

Re: [openstack-dev] [all] Supported (linux) distributions

2014-11-20 Thread Tony Breeds
On Fri, Nov 14, 2014 at 09:43:45AM +0100, Thierry Carrez wrote:

Thanks Thierry

 If you are after the list of distributions with a well-known packaging
 of OpenStack then yes, the union of those two lists + Gentoo sounds
 accurate to me.

I have to admit I expect this to be a litle more contentious ;P

Okay I'll use that list as the starting point.

Yours Tony.


pgpSDpZLtVvEA.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] - puka dependency

2014-11-20 Thread raghavendra.lad
Hi All,

I am new to Murano trying to implement on Openstack Juno with Ubuntu 14.04 LTS 
version. However It needs puka 0.7 as dependency.
I would appreciate any help with the build guides. I also find Github prompts 
for username and password while I try installing packages.



Warm Regards,
Raghavendra Lad





This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited. Where allowed by local law, electronic 
communications with Accenture and its affiliates, including e-mail and instant 
messaging (including content), may be scanned by our systems for the purposes 
of information security and assessment of internal compliance with Accenture 
policy.
__

www.accenture.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Order of network interfaces for bootstrap nodes

2014-11-20 Thread Dmitriy Shulyak


 When the interfaces are updated with data from the agent we attempt to
 match the MAC to an existing interface (
 https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/network/manager.py#L682-L690).
 If that doesn't work we attempt to match by name. Looking at the data that
 comes from the agent the MAC is always capitalized while in the database
 it's lower-case. It seems like checking the MAC will fail and we'll fall
 through to matching by name.


 Thank you! I think it is correct, and I made the problem more complicated
than it is ))
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] New blueprints created based on Paris Summit results

2014-11-20 Thread Renat Akhmerov
Hi,

Thanks for providing us great feedback at the summit in Paris! It was really 
great to see so many people coming to our sessions and interested in Mistral 
capabilities. I’ve tried to do my best to digest all I heard from you and 
materialise all the wishes in form of blueprints at Launchpad.

So here they are:
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-visualization 
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-visualization
https://blueprints.launchpad.net/mistral/+spec/mistral-install-docs 
https://blueprints.launchpad.net/mistral/+spec/mistral-install-docs
https://blueprints.launchpad.net/mistral/+spec/mistral-dashboard-crud-operations
 
https://blueprints.launchpad.net/mistral/+spec/mistral-dashboard-crud-operations
https://blueprints.launchpad.net/mistral/+spec/mistral-yaml-request-body 
https://blueprints.launchpad.net/mistral/+spec/mistral-yaml-request-body
https://blueprints.launchpad.net/mistral/+spec/mistral-alternative-rpc 
https://blueprints.launchpad.net/mistral/+spec/mistral-alternative-rpc
https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values 
https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values
https://blueprints.launchpad.net/mistral/+spec/mistral-for-each-cardinality 
https://blueprints.launchpad.net/mistral/+spec/mistral-for-each-cardinality
https://blueprints.launchpad.net/mistral/+spec/mistral-javascript-action 
https://blueprints.launchpad.net/mistral/+spec/mistral-javascript-action
https://blueprints.launchpad.net/mistral/+spec/mistral-mysql-configuration-docs 
https://blueprints.launchpad.net/mistral/+spec/mistral-mysql-configuration-docs
https://blueprints.launchpad.net/mistral/+spec/mistral-no-op-task 
https://blueprints.launchpad.net/mistral/+spec/mistral-no-op-task
https://blueprints.launchpad.net/mistral/+spec/mistral-non-distributable-actions
 
https://blueprints.launchpad.net/mistral/+spec/mistral-non-distributable-actions
https://blueprints.launchpad.net/mistral/+spec/mistral-separate-executor-packaging
 
https://blueprints.launchpad.net/mistral/+spec/mistral-separate-executor-packaging
https://blueprints.launchpad.net/mistral/+spec/mistral-zaqar-integration 
https://blueprints.launchpad.net/mistral/+spec/mistral-zaqar-integration
https://blueprints.launchpad.net/mistral/+spec/mistral-workbook-builder 
https://blueprints.launchpad.net/mistral/+spec/mistral-workbook-builder
https://blueprints.launchpad.net/mistral/+spec/mistral-systemd-trigger 
https://blueprints.launchpad.net/mistral/+spec/mistral-systemd-trigger
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment 
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-decider 
https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-decider
https://blueprints.launchpad.net/mistral/+spec/mistral-multicloud 
https://blueprints.launchpad.net/mistral/+spec/mistral-multicloud
https://blueprints.launchpad.net/mistral/+spec/mistral-standalone 
https://blueprints.launchpad.net/mistral/+spec/mistral-standalone

Feel free to join the discussion about whatever related to them. If you find 
yourself interested in helping us implement some of them please don’t hesitate 
reaching us out in IRC (#openstack-mistral) or email.

Thanks!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev