Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-31 Thread Flavio Percoco

On 30/10/14 12:25 -0500, Ben Nemec wrote:

On 10/23/2014 04:18 PM, Doug Hellmann wrote:


On Oct 23, 2014, at 2:56 AM, Flavio Percoco fla...@redhat.com wrote:


On 10/22/2014 08:15 PM, Doug Hellmann wrote:

The application projects are dropping python 2.6 support during Kilo, and I’ve 
had several people ask recently about what this means for Oslo. Because we 
create libraries that will be used by stable versions of projects that still 
need to run on 2.6, we are going to need to maintain support for 2.6 in Oslo 
until Juno is no longer supported, at least for some of our projects. After 
Juno’s support period ends we can look again at dropping 2.6 support in all of 
the projects.


I think these rules cover all of the cases we have:

1. Any Oslo library in use by an API client that is used by a supported stable 
branch (Icehouse and Juno) needs to keep 2.6 support.

2. If a client library needs a library we graduate from this point forward, we 
will need to ensure that library supports 2.6.

3. Any Oslo library used directly by a supported stable branch of an 
application needs to keep 2.6 support.

4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one of 
the previous rules applies.

5. The stable/icehouse and stable/juno branches of the incubator need to retain 
2.6 support for as long as those versions are supported.

6. The master branch of the incubator needs to retain 2.6 support until we 
graduate all of the modules that will go into libraries used by clients.


A few examples:

- oslo.utils was graduated during Juno and is used by some of the client 
libraries, so it needs to maintain python 2.6 support.

- oslo.config was graduated several releases ago and is used directly by the 
stable branches of the server projects, so it needs to maintain python 2.6 
support.

- oslo.log is being graduated in Kilo and is not yet in use by any projects, so 
it does not need python 2.6 support.

- oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo, but 
both are used by client projects, so they need to keep python 2.6 support. At 
that point we can evaluate the code that remains in the incubator and see if 
we’re ready to turn off 2.6 support there.


Let me know if you have questions about any specific cases not listed in the 
examples.


The rules look ok to me but I'm a bit worried that we might miss
something in the process due to all these rules being in place. Would it
be simpler to just say we'll keep py2.6 support in oslo for Kilo and
drop it in Igloo (or L?)?


I think we have to actually wait for M, don’t we? (K and L represent the one 
year where J is supported; M is the first release where J is not supported and 
2.6 can be fully dropped.)

But to your point of keeping it simple and saying we support 2.6 in all of Oslo 
until no stable branches use it, that could work. I think in practice we’re not 
in any hurry to drop the 2.6 tests from existing Oslo libs, and we just won’t 
add them to new ones, which gives us basically the same result.


A bit late to this discussion, but if we don't add py26 jobs to new
libraries, we need to be very careful that they never get pulled in as a
transitive dep to an existing lib that does need py26 support.  Since I
think some of the current libs are still using incubator modules, it's
possible this could happen as we transition to newly released libs.

So if we're going for safe and simple, we should probably also keep py26
jobs for everything until EOL for Juno.
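
(For concreteness, a minimal sketch of the tox.ini these jobs key off,
assuming the usual OpenStack library layout; keeping 2.6 support just
means leaving py26 in the envlist so the gate keeps running that job:

    [tox]
    envlist = py26,py27,py33,pep8

    [testenv]
    deps = -r{toxinidir}/requirements.txt
           -r{toxinidir}/test-requirements.txt
    commands = python setup.py testr --slowest --testr-args='{posargs}'
)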


Fully agree!

The more I think about it, the more I'm convinced we should keep py26
in oslo until EOL Juno. It'll take time, it may be painful but it'll
be simpler to explain and more importantly it'll be simpler to do.

Keeping this simple will also help us with welcoming more reviewers in
our team. It's already complex enough to explain what oslo-inc is and
why there are oslo libraries.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Permissions differences for glance image-create between Icehouse and Juno

2014-10-31 Thread Flavio Percoco

On 31/10/14 04:57 +0000, Nikhil Komawar wrote:

Hi Jay,

Wanted to clarify a few things around this:

1. are you using the --is_public or the --is-public option?
2. are you using the stable/juno branch, or is it an rc(1/2/3) from the Ubuntu packages?

After trying out:

glance image-create --is-public=True --disk-format qcow2 --container-format 
bare --name foobar --file 
/opt/stack/data/glance/images/5be32fc4-e063-4032-b248-516c7ab7116b

the command seems to be working on the latest devstack setup with the branch 
stable/juno used for glance.


Did you test this with an admin user?


The policy file in your paste looks fine too.

As nothing out of the ordinary seems to be wrong, I hope this intuitive 
suggestion helps: the filesystem store config may be mismatched (possibly there 
are 2 options).



I haven't had the chance to test this but my guess is that Jay's issue
may be caused by an upgrade from Icehouse -> Juno.

I'll hopefully be able to give this a try today.
Fla.



Thanks,
-Nikhil


From: Tom Fifield [t...@openstack.org]
Sent: Monday, October 27, 2014 9:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Permissions differences for glance 
image-create between Icehouse and Juno

Sorry, early morning!

I can confirm that in your policy.json there is:

   "publicize_image": "role:admin",

which seems to match what's needed :)

Regards,


Tom

On 28/10/14 10:18, Jay Pipes wrote:

Right, but as you can read below, I'm using an admin to do the operation...

Which is why I'm curious what exactly I'm supposed to do :)

-jay

On 10/27/2014 09:04 PM, Tom Fifield wrote:

This was covered in the release notes for glance, under Upgrade notes:

https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_3

* The ability to upload a public image is now admin-only by default. To
continue to use the previous behaviour, edit the publicize_image flag in
etc/policy.json to remove the role restriction.
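
(As a concrete sketch, assuming the stock Juno file: restoring the
Icehouse behaviour means emptying the rule in etc/policy.json so any
authenticated user may publicize an image:

    {
        "publicize_image": ""
    }
)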

Regards,


Tom

On 28/10/14 01:22, Jay Pipes wrote:

Hello Glancers,

Peter and I are having issues working with a Juno Glance endpoint.
Specifically, a glance image-create ... --is_public=True CLI command
that *was* working in our Icehouse cloud is now failing in our Juno
cloud with a 403 Forbidden.

The specific command in question is:

glance image-create --name cirros-0.3.2-x86_64 --file
/var/tmp/cirros-0.3.2-x86_64-disk.img --disk-format qcow2
--container-format bare --is_public=True

If we take off the is_public=True, everything works just fine. We are
executing the above command as a user called admin having the role admin
in a project called admin.

We have enabled the debug=True conf option in both glance-api.conf and
glance-registry.conf, and unfortunately, there is no log output at all,
other than spitting out the configuration option settings on daemon
startup and a few messages like "Loaded policy rules: ..." which don't
actually provide any useful information about policy *decisions* that
are made... :(

Any help is most appreciated. Our policy.json file is the stock one that
comes in the Ubuntu Cloud Archive glance packages, i.e.:

http://paste.openstack.org/show/125420/

Best,
-jay



--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-10-31 Thread Sam Stoelinga
Are there any resources available or proven examples on using external
tools which call nova evacuate?

For example use a monitoring tool to detect node failure and let the
monitoring tool call evacuate on the instances which were running on the
failed compute node.
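
For illustration, a minimal sketch of what such an external hook could
look like, assuming shared storage, admin credentials, a fenced failed
node and the python-novaclient of the time; names and endpoint are
placeholders, and the failure detection itself stays in the monitoring
tool:

    # Sketch only: evacuate every instance off a compute node that the
    # monitoring tool has declared dead.
    from novaclient.v1_1 import client

    def evacuate_host(failed_host, target_host):
        nova = client.Client('admin', 'secret', 'admin',
                             'http://keystone:5000/v2.0')
        servers = nova.servers.list(
            search_opts={'host': failed_host, 'all_tenants': 1})
        for server in servers:
            # Rebuild each instance on the target host from shared storage.
            nova.servers.evacuate(server, target_host,
                                  on_shared_storage=True)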

On Mon, Mar 3, 2014 at 11:28 PM, Jay Lau jay.lau@gmail.com wrote:


 Yes, it would be great if we can have a simple framework for future run
 time policy plugins. ;-)

 2014-03-03 23:12 GMT+08:00 laserjetyang laserjety...@gmail.com:

 there are a lot of rules for HA or LB, so I think it might be a better
 idea to scope the framework and leave the policy as plugins.


 On Mon, Mar 3, 2014 at 10:30 PM, Andrew Laski andrew.la...@rackspace.com
  wrote:

 On 03/01/14 at 07:24am, Jay Lau wrote:

 Hey,

 Sorry to bring this up again. There are also some discussions here:
 http://markmail.org/message/5zotly4qktaf34ei

 You can also search [Runtime Policy] in your email list.

  Not sure if we can put this into Gantt and enable Gantt to provide both
  initial placement and run time policies like HA, load balance, etc.


 I don't have an opinion at the moment as to whether or not this sort of
 functionality belongs in Gantt, but there's still a long way to go just to
 get the scheduling functionality we want out of Gantt and I would like to
 see the focus stay on that.





 Thanks,

 Jay



 2014-02-21 21:31 GMT+08:00 Russell Bryant rbry...@redhat.com:

  On 02/20/2014 06:04 PM, Sean Dague wrote:
  On 02/20/2014 05:32 PM, Russell Bryant wrote:
  On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
  Hi,
 
  Would like to know if there's any interest on having
  'automatic evacuation' feature when a compute node goes down. I
  found 3 bps related to this topic: [1] Adding a periodic task
  and using ServiceGroup API for compute-node status [2] Using
  ceilometer to trigger the evacuate api. [3] Include some kind
  of H/A plugin  by using a 'resource optimization service'
 
  Most of those BP's have comments like 'this logic should not
  reside in nova', so that's why i am asking what should be the
  best approach to have something like that.
 
  Should this be ignored, and just rely on external monitoring
  tools to trigger the evacuation? There are complex scenarios
  that require lot of logic that won't fit into nova nor any
  other OS component. (For instance: sometimes it will be faster
   to reboot the node or nova-compute than starting the
   evacuation, but if it fails X times then trigger an evacuation,
   etc.)
 
  Any thought/comment// about this?
 
  Regards Leandro
 
  [1]
 
  https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
 
 
 [2]
 
  https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
 
 
 [3]
 
  https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
 
 
 
 My opinion is that I would like to see this logic done outside of Nova.
 
  Right now Nova is the only service that really understands the
   compute topology of hosts, though its understanding of liveness is
  really not sufficient to handle this kind of HA thing anyway.
 
  I think that's the real problem to solve. How to provide
  notifications to somewhere outside of Nova on host death. And the
  question is, should Nova be involved in just that part, keeping
  track of node liveness and signaling up for someone else to deal
  with it? Honestly that part I'm more on the fence about. Because
  putting another service in place to just handle that monitoring
  seems overkill.
 
  I 100% agree that all the policy, reacting, logic for this should
  be outside of Nova. Be it Heat or somewhere else.

 I think we agree.  I'm very interested in continuing to enhance Nova
 to make sure that the thing outside of Nova has all of the APIs it
 needs to get the job done.

 --
 Russell Bryant





 --
 Thanks,

 Jay












 --
 Thanks,

 Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]

2014-10-31 Thread Flavio Percoco

On 28/10/14 22:18 +0000, Jesse Cook wrote:



On 10/27/14, 6:08 PM, Jay Pipes jaypi...@gmail.com wrote:


On 10/27/2014 06:18 PM, Jesse Cook wrote:

In the glance mini-summit there was a request for some documentation on
the architecture ideas I was discussing relating to: 1) removing data
consistency as a concern for glance 2) bootstraping vs baking VMs

Here's a rough draft:
https://gist.github.com/CrashenX/8fc6d42ffc154ae0682b


Hi Jesse!

A few questions for you, since I wasn't at the mini-summit and I think
don't have a lot of the context necessary here...

1) In the High-Level Architecture diagram, I see Glance Middleware
components calling to a Router component. Could you elaborate what
this Router component is, in relation to what components currently exist
in Glance and Nova? For instance, is the Router kind of like the
existing Glance Registry component? Or is it something more like the
nova.image.download modules in Nova? Or something entirely different?


It's a high-level abstraction. It's close to being equivalent to the cloud
icon you find in many architecture diagrams, but not quite that vague. If
I had to associate it with an existing OpenStack component, I'd probably
say nova-scheduler. There is much detail to be fleshed out here. I have
some additional thoughts and documentation that I'm working on that I will
share once it is more fleshed out. Ultimately, I would like to see a fully
documented prescriptive architecture that we can iterate over to address
some of the complexities and pain points within the system as a whole.



2) The Glance Middleware. Do you mean WSGI middleware here? Or are you
referring to something more like the existing nova.image.api module that
serves as a shim over the Glance server communication?


At the risk of having something thrown at me, what I am suggesting is a
move away from Glance as a service to Glance as a purely functional API.
At some point caching would need to be discussed, but I am intentionally
neglecting caching and the existence of any data store as there is a risk
of complecting state. I want to avoid discussions on performance until
more important things can be addressed such as predictability,
reliability, scalability, consistency, maintainability, extensibility,
security, and simplicity (i.e. as defined by Rich Hickey).



Hi Jesse,

I, unfortunately, missed your presentation at the virtual mini summit
so I'm trying to catch up and to understand what you're proposing.

As far as I understand, your proposal is to hide Glance from public
access and just make it consumable by other services like Nova,
Cinder, etc through a, perhaps more robust, glance library. Did I
understand correctly?

Or, are you suggesting to get rid of glance's API entirely and instead
have it in the form of a library and everything would be handled by
the middleware you have in your diagram?

Cheers,
Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Lisa

Dear Sylvain and BLAZAR team,

I'd like to receive your feedback on our blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
start a discussion in Paris next week at the OpenStack Summit. Do 
you have a time slot for a very short meeting on this?

Thanks in advance.
Cheers,
Lisa


On 28/10/2014 12:07, Lisa wrote:

Dear Sylvain,

as you suggested me few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
I'd like to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it would 
be nice to talk with you and the BLAZAR team about my proposal in person.

What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


On 18/09/2014 15:27, Lisa wrote:

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular I am leading a 
team which is addressing the issue concerning the efficiency in the 
resource usage in OpenStack.
Currently OpenStack allows just a static partitioning model where 
the resource allocation to the user teams (i.e. the projects) can be 
done only by considering fixed quotas which cannot be exceeded even 
if there are unused resources (but) assigned to different projects.
We studied the available BLAZAR's documentation and, in agreement 
with Tim Bell (who is responsible the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with 
the same problems) and how they could be managed in BLAZAR, by 
defining a new lease type (i.e. fairShare lease) to be considered as 
extension of the list of the already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go thru your proposal, but could you please post the main 
concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it so we could discuss on the implementation ?
Of course, don't hesitate to ping me or the blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mistral - Real Time integration

2014-10-31 Thread Renat Akhmerov
Hi Raanan,

In addition I would say that what you described after “Secondly” is one of the 
most important reasons why Mistral exists. This is probably its strongest side 
(not sure what else can fit so well).

Just in case, here are the screencasts that can highlight Mistral capabilities 
(although not all of them):
* http://www.youtube.com/watch?v=9kaac_AfNow
* http://www.youtube.com/watch?v=u6ki_DIpsg0

Soon there’ll be more.

And the full specification of the current Mistral DSL:
https://wiki.openstack.org/wiki/Mistral/DSLv2
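
(On the on-demand question from the original mail: an execution can be
started at any time with a plain REST call. A sketch, assuming the v2
API of the time; host, port, token and workflow name are placeholders:

    import json
    import requests

    TOKEN = '...'  # a valid keystone token

    resp = requests.post(
        'http://mistral-host:8989/v2/executions',
        headers={'X-Auth-Token': TOKEN,
                 'Content-Type': 'application/json'},
        data=json.dumps({'workflow_name': 'my_workflow',
                         'input': json.dumps({'vm_name': 'test-vm'})}))
    print(resp.json())
)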

It would be nice to talk to you personally if you’re going to the summit, or 
via any other channel, and discuss the details :)

Renat Akhmerov
@ Mirantis Inc.



 On 31 Oct 2014, at 02:22, Raanan Jossefi (RJ) Benikar rbeni...@gmail.com 
 wrote:
 
 
 
 Hi,
 
 I see that Mistral can perform workflow automation and task execution per 
 scheduling but wanted to confirm if workflows can be executed in real-time 
  on-demand via REST API? Can a REST API call into Mistral execute the 
  workflow upon request? That is what I am confirming, as it seems to be the case. 
 Besides on-demand, is the purpose of Triggers currently being developed to 
 automatically detect changes or notifications and execute in response to 
 these triggers?  If so I would highly recommend triggers based on database 
 change log, on file changes , on message arrival inside an AMQP queue, and 
 on polling a REST API in which you expect a certain result, or status 
 message.
 
 Secondly, if one has a very tailored and mature cloud orchestration and 
 operations process in place and wants to migrate to OpenStack and offer 
 similar automation integrating with external change management systems, 
 performing secondary LDAP searches, performing multiple SOR / DB queries, 
 and thus interacting with other external non-cloud related technologies as 
 part of the process of creating new tenants, adding users, creating new 
 images, and creating and deploying machine instances would mistral be the 
 best mechanism for this, in terms of driving automation, and invoking 3rd 
 party APIs during and as part of the process?   
 
 My current use case looks like this:
 
 1) Tenant requests VM with specific properties (image, etc etc) -- Service 
 Now
 2) Service Now --- API ( based on properties/inputs ) query a DB to 
 automatically generate a server name )
 3) API --- OpenStack API to provision new VM.
 
 
 This is just an example, during this exchange, the API will interact with 
 several external systems besides a DB, and do more than just autogenerate 
 custom and unique machine names. In this case, the API could be Mistral that 
 I am calling via REST.  I would create API wrappers to the other components 
 ( which for the most part, I already have) and Mistral would now call both 
 wrapper APIs and OpenStack APIs to provision a machine via Nova with all of 
 the dependencies met.
 
 Your advice is kindly appreciated,
 Raanan 
 
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Sylvain Bauza


On 31/10/2014 09:46, Lisa wrote:

Dear Sylvain and BLAZAR team,

I'd like to receive your feedback on our blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
start a discussion in Paris the next week at the OpenStack Summit. Do 
you have a time slot for a very short meeting on this?

Thanks in advance.
Cheers,
Lisa



Hi Lisa,

At the moment, I'm quite occupied on Nova to split out the scheduler, so 
I can hardly dedicate time for Blazar. That said, I would appreciate it 
if you could propose some draft implementation attached to the 
blueprint, so I could glance at it and see what you aim to deliver.


Thanks,
-Sylvain


On 28/10/2014 12:07, Lisa wrote:

Dear Sylvain,

as you suggested me few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
I'd like to start a discussion.
I will be in Paris the next week at the OpenStack Summit, so it would 
be nice to talk with you and the BLAZAR team about my proposal in person.

What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


On 18/09/2014 15:27, Lisa wrote:

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular I am leading a 
team which is addressing the issue concerning the efficiency in the 
resource usage in OpenStack.
Currently OpenStack allows just a static partitioning model where 
the resource allocation to the user teams (i.e. the projects) can 
be done only by considering fixed quotas which cannot be exceeded 
even if there are unused resources (but) assigned to different 
projects.
We studied the available BLAZAR's documentation and, in agreement 
with Tim Bell (who is responsible the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with 
the same problems) and how they could be managed in BLAZAR, by 
defining a new lease type (i.e. fairShare lease) to be considered 
as extension of the list of the already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go thru your proposal, but could you please post the main 
concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it so we could discuss on the implementation ?
Of course, don't hesitate to ping me or the blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, queues, consumption.

2014-10-31 Thread Denis Makogon
Hello, Stackers/Trovers.



I’d like to start a discussion about how we use the guestagent API; this will
eventually be evaluated as a spec. Most of you who are familiar with
Trove’s codebase know how Trove acts when provisioning a new instance.

I’d like to point out the following:

   1. When we provision a new instance, we expect that the guest will create
      its topic/queue for RPC messaging needs.

   2. The taskmanager doesn’t validate that the guest is really up before
      sending the ‘prepare’ call.

And here comes the problem: what if the guest wasn’t able to start properly
and consume the ‘prepare’ message due to certain circumstances? In this case
the ‘prepare’ message would never be consumed.


Sergey Gotliv and I were looking for a proper solution for this case, and
we ended up with the following requirements for the provisioning workflow:

   1. We must be sure that the ‘prepare’ message will be consumed by the
      guest.

   2. The taskmanager should handle topic/queue management for the guest.

   3. The guest just needs to consume incoming messages from the already
      existing topic/queue.

As a concrete proposal (or at least a topic for discussion), I’d like to
suggest the following improvement:

We need to add a new guest RPC API that represents a “ping-pong” action. So
before sending any cast- or call-type messages, we need to make sure that the
guest is really running.
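
A minimal sketch of what that check could look like on the taskmanager
side, using oslo.messaging; the topic naming and the ‘ping’ endpoint are
illustrative and would need a matching method in the guestagent:

    import oslo.messaging as messaging

    def guest_is_alive(transport, instance_id, timeout=10):
        # Call a trivial 'ping' method on the guest's topic; a dead or
        # never-started guest makes this time out, so we fail fast
        # instead of leaving an unconsumed 'prepare' message queued.
        target = messaging.Target(topic='guestagent.%s' % instance_id)
        client = messaging.RPCClient(transport, target, timeout=timeout)
        try:
            client.call({}, 'ping')
            return True
        except messaging.MessagingTimeout:
            return False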


Pros/Cons for such a solution:

   1. The guest will do only consuming.

   2. The guest would not manage its topics/queues.

   3. We’ll be 100% sure that no messages would be lost.

   4. Fast-fail during provisioning.

   5. Other minor/major improvements.



Thoughts?


P.S.: I’d like to discuss this topic during the upcoming Paris summit (during
the contribution meetup on Friday).



Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Provider Router topology

2014-10-31 Thread Jaume Devesa
Hi all,

in Midokura we are working on a blueprint to define a new kind of topology
for floating ranges, as an alternative to the external network topology. It is
based on the idea of a Provider Router that links directly with Tenant
Routers using /30 networks. It aims to be more purely floating, helping the
deployment of floating ranges across different physical L2 networks. (We
know there is some interest in this[1].) We also think it might help to add
firewall-specific policies at the edge of the cloud and to define
per-tenant or per-network QoS definitions.

Document[2] is still WIP, with high-level ideas and just some API details.
But we would love to hear the Neutron community's feedback to go further in a
steady way. At the implementation level, we are concerned with being
compatible with the current DVR and with not adding a new SPOF to the Neutron
OVS plugin. So DVR developers' feedback would be highly appreciated.

We are also interested in chatting about it during the summit, maybe during
the Kilo L3 refactor BoF? [3]

Thanks in advance!

PS: Blueprint is here[4]

[1]: https://blueprints.launchpad.net/neutron/+spec/pluggable-ext-net
[2]:
https://docs.google.com/a/midokura.com/document/d/1fUPhpBWpiUvBe_c55lkokDIls--4dKVSFmGVtjxjg0w/edit#
[3]: https://etherpad.openstack.org/p/kilo-l3-refactoring
[4]: https://blueprints.launchpad.net/neutron/+spec/neutron-provider-router


-- 
Jaume Devesa
Software Engineer at Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Autoscaling][HA][Murano] Use cases for Murano actions feature. Autoscaling, HA and operations.

2014-10-31 Thread Georgy Okrokvertskhov
Hi,

In the Juno release the Murano team added a new feature - Actions. This feature
allows declaring actions as specific application methods which should be
executed when an action is triggered. When Murano deploys an application
with actions, new web hooks are created and exposed by the Murano API. These
web hooks can be used by any 3rd-party tool, such as monitoring systems,
schedulers or UI tools, to execute the related action at any time.
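
For illustration, a sketch of how a 3rd-party tool might fire such a web
hook; the URL shape follows the Murano v1 API of the time, and the host,
port, token and IDs are placeholders rather than a recipe:

    import requests

    ENV_ID = '...'     # environment id from Murano
    ACTION_ID = '...'  # action id exposed when the app was deployed
    TOKEN = '...'      # a valid keystone token

    requests.post(
        'http://murano-host:8082/v1/environments/%s/actions/%s'
        % (ENV_ID, ACTION_ID),
        headers={'X-Auth-Token': TOKEN})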

There are several use cases which could require Murano actions:

   1. AutoScaling.

There were a couple of discussions around AutoScaling implementation. The most
well-known solution is to use the Heat AS resource. The Heat autoscaling
resource works perfectly for scaling other Heat resources and even Heat nested
stacks. At the same time there are discussions about AS's place in Heat and
how it should be done. In Murano, autoscaling is just a specific action
which knows how to modify the Heat stack in order to scale the application
properly. It can just add a new instance, or do something more complex by
scaling multiple resources simultaneously based on some calculations. The
application author has full control over the specific steps performed to
autoscale this particular application.


   2. HA


HA is another great example of a use case for the Murano actions feature. As
Murano allows binding different applications based on requirements, it is
possible to select a specific monitoring tool for the application and
configure this monitoring to call the Murano API web hook generated for the
application in case of failure. As all applications are different, they
usually require different steps to be performed for an HA switchover. Murano
actions allow defining different HA tactics, like trying to restart a service
first, then rebooting a VM, then evacuating a VM or recreating a VM.


   3. Application operations


Applications may expose different actions to perform specific operations.
Database applications can expose a “Backup” action to perform DB backups.
Some other application can expose a “Restart” or “Update” action to perform
the related operations.


Here is a link to a short recording:

http://www.youtube.com/watch?v=OvPpJd0EOFw


Here is a code for the Java application with autoscaling actions:
https://github.com/gokrokvertskhov/murano-app-incubator/blob/monitoring-latest/io.murano.apps.java.HelloWorldCluster/Classes/HelloWorldCluster.murano#L92-L99


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-31 Thread A, Keshava
Hi,
Agent upgrade support is a common requirement which we need to address as a
priority.

Regards,
Keshava

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Wednesday, October 29, 2014 8:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? 
why and how avoid?

On Wed, Oct 29, 2014 at 7:25 AM, Hly henry4...@gmail.com wrote:


 Sent from my iPad

 On 2014-10-29, at 8:01 PM, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:

  I find our current design removes all flows then adds flows entry by 
  entry; this will cause every network node to break off all 
  tunnels between the other network nodes and all compute nodes.
 Perhaps a way around this would be to add a flag on agent startup 
 which would have it skip reprogramming flows. This could be used for 
 the upgrade case.

 I hit the same issue last week and filed a bug here:
 https://bugs.launchpad.net/neutron/+bug/1383674

  From an operator's perspective this is VERY annoying since you also cannot 
  push any config changes that require/trigger a restart of the agent.
  e.g. something simple like changing a log setting becomes a hassle.
  I would prefer the default behaviour to be to not clear the flows, or at the 
  least a config option to disable it.


 +1, we also suffered from this even when a very little patch is done

I'd really like to get some input from the tripleo folks, because they were the 
ones who filed the original bug here and were hit by the agent NOT 
reprogramming flows on agent restart. It does seem fairly obvious that adding 
an option around this would be a good way forward, however.
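
(Purely hypothetical at this point, since no such option exists yet; the
flag could look like this in the agent's ini file:

    [agent]
    # Hypothetical: when false, leave existing flows in place on agent
    # restart instead of clearing and reprogramming them, avoiding the
    # tunnel outage described above.
    drop_flows_on_start = false
)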

Thanks,
Kyle


 Cheers,
 Robert van Leeuwen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-31 Thread Sullivan, Jon Paul
 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: 28 October 2014 18:34
 To: openstack-dev
 Subject: Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on
 Devstack
 
 Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
  On 10/28/2014 06:18 AM, Steven Hardy wrote:
   On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
   On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
   On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
   So this should work and I think its generally good.
  
   But - I'm curious, you only need a single image for devtest to
   experiment with tuskar - the seed - which should be about the
   same speed (or faster, if you have hot caches) than devstack, and
   you'll get Ironic and nodes registered so that the panels have
 stuff to show.
  
   TBH it's not so much about speed (although, for me, devstack is
   faster as I've not yet mirrored all-the-things locally, I only
   have a squid cache), it's about establishing a productive
 test/debug/hack/re-test workflow.
  
   mm, squid-cache should still give pretty good results. If its not,
   bug time :). That said..
  
   I've been configuring devstack to create Ironic nodes FWIW, so
   that works OK too.
  
   Cool.
  
   It's entirely possible I'm missing some key information on how to
   compose my images to be debug friendly, but here's my devtest
 frustration:
  
   1. Run devtest to create seed + overcloud
  
   If you're in dev-of-a-component cycle, I wouldn't do that. I'd run
   devtest_seed.sh only. The seed has everything on it, so the rest is
   waste (unless you need all the overcloud bits - in which case I'd
   still tune things - e.g. I'd degrade to single node, and I'd
   iterate on devtest_overcloud.sh, *not* on the full plumbing each
 time).
  
   Yup, I went round a few iterations of those, e.g. running
   devtest_overcloud with -c so I could more quickly re-deploy, until I
   realized I could drive heat directly, so I started doing that :)
  
   Most of my investigations atm are around investigating Heat issues,
   or testing new tripleo-heat-templates stuff, so I do need to spin up
   the overcloud (and update it, which is where the fun really began
   ref bug
   #1383709 and #1384750 ...)
  
   2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
   3. Log onto seed VM to debug the issue.  Discover there are no
 logs.
  
   We should fix that - is there a bug open? Thats a fairly serious
   issue for debugging a deployment.
  
   I've not yet raised one, as I wasn't sure if it was either by
   design, or if I was missing some crucial element from my DiB config.
  
   If you consider it a bug, I'll raise one and look into a fix.
  
   4. Restart the heat-engine logging somewhere 5. Realize
   heat-engine isn't quite latest master 6. Git pull heat, discover
   networking won't allow it
  
   Ugh. Thats horrid. Is it a fedora thing? My seed here can git pull
   totally fine - I've depended heavily on that to debug various
   things over time.
  
   Not yet dug into it in a lot of detail tbh, my other VMs can access
   the internet fine so it may be something simple, I'll look into it.
 
  Are you sure this is a networking thing?  When I try a git pull I get
 this:
 
  [root@localhost heat]# git pull
  fatal:
  '/home/bnemec/.cache/image-create/source-
 repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
  does not appear to be a git repository
  fatal: Could not read from remote repository.
 
  That's actually because the git repo on the seed would have come from
  the local cache during the image build.  We should probably reset the
  remote to a sane value once we're done with the cache one.
 
  Networking-wise, my Fedora seed can pull from git.o.o just fine
 though.
 
 
 I think we should actually just rip the git repos out of the images in
 production installs. What good does it do sending many MB of copies of
 the git repos around? Perhaps just record HEAD somewhere in a manifest
 and rm -r the source repos during cleanup.d.

The manifests already capture this.  For example 
/etc/dib-manifests/dib-manifest-git-seed on the seed.  The format of that file 
is as-per source-repositories file format for reuse in builds.  This means it 
has the on-disk location of the repo, the remote used, and the sha1 pulled for 
the build.

 
 But, for supporting dev/test, we could definitely leave them there and
 change the remotes back to their canonical (as far as diskimage-builder
 knows) sources.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2. 
Registered Number: 361933

Re: [openstack-dev] [Autoscaling][HA][Murano] Use cases for Murano actions feature. Autoscaling, HA and operations.

2014-10-31 Thread Steven Hardy
On Fri, Oct 31, 2014 at 03:23:20AM -0700, Georgy Okrokvertskhov wrote:
Hi,
 
In the Juno release Murano team added a new feature - Actions. This
feature allows to declare actions as specific application methods which
should be executed when an action is triggered. When Murano deploys an
application with actions new web hooks will be created and exposed by
Murano API.

Can you provide links to any documentation which describes the auth scheme
used for the web hooks please?

I'm interested to see how you've approached it, vs AWS pre-signed URL,
Swift TempURL's etc, as Heat needs an openstack-native solution to this
problem as well.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Topics for the Board/TC joint meeting in Paris

2014-10-31 Thread Thierry Carrez
Eoghan Glynn wrote:
 This is already on the agenda proposed by the board (as well as a quick
 presentation on the need for structural reform in the ways we handle
 projects in OpenStack).
 
 Would it be possible for the slidedeck and a quick summary of that
 presentation to be posted to the os-dev list after the Board/TC joint
 meeting on Sunday?
 
 (Given that discussion will be highly relevant to the cross-project
 sessions on Growth Challenges running on the Tuesday)

It's not really a presentation (no slides), but I'll cover the same
summary of the current issues (define the problem we are trying to
solve) as an intro to Growth Challenges.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Finalizing cross-project design summit track

2014-10-31 Thread Thierry Carrez
Russell Bryant wrote:
 On 10/30/2014 12:12 PM, Monty Taylor wrote:
 On 10/30/2014 04:53 PM, Joe Gordon wrote:
 On Thu, Oct 30, 2014 at 4:01 AM, Thierry Carrez thie...@openstack.org
 wrote:

 Jay Pipes wrote:
 On 10/29/2014 09:07 PM, Russell Bryant wrote:
 On 10/29/2014 06:46 PM, Rochelle Grober wrote:
 Any chance we could use the opening to move either the Refstack
 session or the logging session from their current joint (and
 conflicting) time (15:40)?  QA really would be appreciated at both.
 And I'd really like to be at both.  I'd say the Refstack one would go
 better in the debug slot, as the API stuff is sort of related to the
 logging.  Switching with one of the 14:50 sessions might also work.

 Just hoping.  I really want great participation at all of these
 sessions.

 The gate debugging session is most likely going to be dropped at this
 point.  I don't see a big problem with moving the refstack one to that
 slot (the first time).

 Anyone else have a strong opinion on this?

 Sounds good to me.

 Sounds good.


 With the gate debugging session being dropped due to being the wrong
 format to be productive, we now need a new session. After looking over the
 etherpad of proposed cross project sessions I think there is one glaring
 omission: the SDK. In the Kilo Cycle Goals Exercise thread [0] having a
 real SDK was one of the top answers. Many folks had great responses that
 clearly explained the issues end users are having [1].  As for who could
 lead a session like this I have two ideas: Monty Taylor, who had one of the
 most colorful explanations to why this is so critical, or Dean Troyer, one
 of the few people actually working on this right now. I think it would be
 embarrassing if we had no cross project session on SDKs, since there
 appears to be a consensus that the making life easier for the end user is a
 high priority.

 The current catch is, the free slot is now at 15:40, so it would compete
 with 'How to Tackle Technical Debt in Kilo,' a session which I expect to be
 very popular with the same people who would be interested in attending a
 SDK session.

 I'm happy to lead this, to co-lead with Dean or to just watch Dean lead
 it - although I can promise in any format to start off the time period
 with some very colorful ranting. I think I'm less necessary in the tech
 debt session, as other than yes, please get rid of it I probably don't
 have too much more input that will be helpful.
 
 OK, awesome!  I'll put you down for now.
 
 I could use a proposed session description to put on sched.org.
 Otherwise, I'll just make something up.

+1 on the topic. Thanks Joe for completing the circle we started with
the TC candidates' priorities!

I'd make it about the End user experience in general.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Image properties for deleted images

2014-10-31 Thread George Shuklin

Hello.

I found that glance does not provide any meta information for deleted 
images, but hides it somewhere inside.


glance image-create -> #1
glance image-update #1 --property foo=bar
#1 now has foo=bar
nova start ... #1 -> instance uses image with foo=bar
glance image-delete #1
... and now we have an instance from an image without the property foo=bar
#1 now says 'deleted' and there is no 'foo=bar'
nova image-create -> #2
glance image-show #2 -> image has foo=bar

Is there any way to know what properties a deleted image had?

Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Design Summit schedule

2014-10-31 Thread Thierry Carrez
Thierry Carrez wrote:
 We still expect a number of changes (TripleO session schedule is still
 tbd, we might drop the gate debugging cross-project workshop...). If
 anyone pushes a change to the live schedule at this point, it would be
 nice to post a corresponding public service announcement to this thread
 to give everyone a heads-up.

A quick heads-up on recent changes I noticed:

In the Cross-project workshops:
- We won't have a Gate debugging session anymore
- "DefCore, RefStack, Interop. and Tempest" is now Tuesday at 11:15
- We'll have a session on "End user experience & SDKs" Tuesday at 15:40

In the Nova track:
- Nova Objects Status and Deadlines is now Wednesday at 11:00
- Nova Functional Testing is now Wednesday at 16:30

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Thierry Carrez
Vishvananda Ishaya wrote:
 Another option:
 
 3) People consider the lower choices on their list to be equivalent. I 
 personally tend to vote in tiers (these 3 are top choices, these 3 are 
 secondary choices, these 6 are third choices) and I don’t differentiate 
 individuals in the bottom tier so it ends up unranked.

That's a bit how I vote too. I have more than 3 tiers, but I rank a few
people at the same level. And I count myself as fully understanding how
Condorcet works.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Writing a cinder volume driver

2014-10-31 Thread Darshan Ghumare
Thank you so much Eduard :).


On Thu, Oct 30, 2014 at 5:20 PM, Eduard Matei 
eduard.ma...@cloudfounders.com wrote:

 Hi Darshan,
  Having just finished writing a volume driver, I can say you need a lot of
  patience.
 First, to quickly answer your questions:
 1. Read ALL the drivers in the official repo: (
 https://github.com/openstack/cinder/tree/master/cinder/volume/drivers)
 and how they relate to the cinder-api (
 https://github.com/openstack/cinder/tree/master/cinder/api); then look
 into (https://wiki.openstack.org/wiki/Cinder), especially the part about
  plugins and configuring devstack to use your driver and backend);
 2. As far as i could tell, python is the only way.
 3. You should try devstack (it's easier to setup, quicker, and always
 gives you latest code so you can develop against the latest version).

 After that, the rest is just bureaucracy :) (become a contributor, sign
 up for some services, get your code reviewed on gerrit, etc).
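
 (To make the shape of the task concrete, an illustrative skeleton of the
 method set a minimal cinder volume driver implements; a sketch, not a
 working backend, and the real drivers under cinder/volume/drivers/ are
 the authoritative reference:

     # Hypothetical backend; method names follow cinder's VolumeDriver.
     from cinder.volume import driver

     class MyStorageDriver(driver.VolumeDriver):

         def create_volume(self, volume):
             # Allocate volume['size'] GB on the backend.
             pass

         def delete_volume(self, volume):
             pass

         def create_export(self, context, volume):
             pass

         def ensure_export(self, context, volume):
             pass

         def remove_export(self, context, volume):
             pass

         def initialize_connection(self, volume, connector):
             # Return the connection info the attaching host needs.
             return {'driver_volume_type': 'iscsi', 'data': {}}

         def terminate_connection(self, volume, connector, **kwargs):
             pass
 )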

 Hope this helps,

 Eduard

 On Thu, Oct 30, 2014 at 1:37 PM, Darshan Ghumare 
 darshan.ghum...@gmail.com wrote:

 Hi All,

 I need to write a volume driver so that I can integrate our storage
 product into openstack.
  I have the following questions about the same:
 1. How should I go about it?
  2. I do not know Python. Is Python the only way to write a driver?
 3. I have setup openstack by following  steps mentioned at
 http://docs.openstack.org/icehouse/install-guide/install/apt/content/.
 To test the drive do I also need to have a development environment (
 http://docs.openstack.org/developer/cinder/devref/development.environment.html
 )?

 Thanks,
 Darshan






 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*







-- 
Darshan®
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Writing a cinder volume driver

2014-10-31 Thread Darshan Ghumare
Thank you so much, Duncan.


On Thu, Oct 30, 2014 at 7:12 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 All excellent advice from Eduard. To confirm:
 - You will definitely need to write your driver in python.
 - Devstack is the recommended environment for development
 - Please look at the third party CI requirements for cinder drivers -
 these are an ongoing commitment

 The IRC channel #openstack-cinder on irc.freenode.net is the easiest
 way to chat to cinder developers realtime, please feel free to join us
 there.

 Welcome to the cinder community.


 --
 Duncan Thomas







 On 30 October 2014 11:50, Eduard Matei eduard.ma...@cloudfounders.com
 wrote:
  Hi Darshan,
   Having just finished writing a volume driver I can say you need a lot of
  patience.
  First, to quickly answer your questions:
  1. Read ALL the drivers in the official repo:
  (https://github.com/openstack/cinder/tree/master/cinder/volume/drivers)
 and
  how they relate to the cinder-api
  (https://github.com/openstack/cinder/tree/master/cinder/api); then look
 into
  (https://wiki.openstack.org/wiki/Cinder), especially the part about
 plugins
   and configuring devstack to use your driver and backend);
  2. As far as i could tell, python is the only way.
  3. You should try devstack (it's easier to setup, quicker, and always
 gives
  you latest code so you can develop against the latest version).
 
  After that, the rest is just bureaucracy :) (become a contributor,
 sign up
  for some services, get your code reviewed on gerrit, etc).
 
  Hope this helps,
 
  Eduard
 
  On Thu, Oct 30, 2014 at 1:37 PM, Darshan Ghumare 
 darshan.ghum...@gmail.com
  wrote:
 
  Hi All,
 
  I need to write a volume driver so that I can integrate our storage
  product into openstack.
   I have the following questions about the same:
  1. How should I go about it?
   2. I do not know Python. Is Python the only way to write a driver?
  3. I have setup openstack by following  steps mentioned at
  http://docs.openstack.org/icehouse/install-guide/install/apt/content/.
 To
  test the drive do I also need to have a development environment
  (
 http://docs.openstack.org/developer/cinder/devref/development.environment.html
 )?
 
  Thanks,
  Darshan
 
 
 
 
 
 
  --
 
  Eduard Biceri Matei, Senior Software Developer
  www.cloudfounders.com
   | eduard.ma...@cloudfounders.com
 
 
 
  CloudFounders, The Private Cloud Software Company
 
 
 
 



 --
 Duncan Thomas





-- 
Darshan®
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Lisa

Hi Sylvain,

thanks for your answer.
Actually we haven't developed that yet, because we'd like to be sure that 
our proposal fits with BLAZAR.
We already implemented a pluggable advanced scheduler for Nova which 
addresses the issues we are experiencing with OpenStack at the Italian 
National Institute for Nuclear Physics. This scheduler, named 
FairShareScheduler, is able to make OpenStack more efficient and flexible 
in terms of resource usage. Of course we wish to integrate our work into 
OpenStack, and so we have tried several times to start a discussion and a 
possible interaction with the OpenStack developers, but it seems to be 
very difficult to do.
The GANTT people suggested that we refer to BLAZAR because it may have 
more affinity with our scope. Is it so? I would therefore appreciate 
knowing whether you may be interested in our proposal.


Thanks for your attention.
Cheers,
Lisa




On 31/10/2014 10:08, Sylvain Bauza wrote:


On 31/10/2014 09:46, Lisa wrote:

Dear Sylvain and BLAZAR team,

I'd like to receive your feedback on our blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
start a discussion in Paris the next week at the OpenStack Summit. Do 
you have a time slot for a very short meeting on this?

Thanks in advance.
Cheers,
Lisa



Hi Lisa,

At the moment, I'm quite occupied on Nova to split out the scheduler, 
so I can hardly dedicate time for Blazar. That said, I would 
appreciate it if you could propose some draft implementation attached to 
the blueprint, so I could glance at it and see what you aim to deliver.


Thanks,
-Sylvain


On 28/10/2014 12:07, Lisa wrote:

Dear Sylvain,

as you suggested a few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
I'd like to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it 
would be nice to talk with you and the BLAZAR team about my proposal 
in person.

What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


Le 18/09/2014 15:27, Lisa a écrit :

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular, I am leading a 
team which is addressing the issue of resource-usage efficiency in 
OpenStack.
Currently OpenStack supports only a static partitioning model, where 
resources are allocated to the user teams (i.e. the projects) through 
fixed quotas which cannot be exceeded even if there are unused 
resources assigned to different projects.
We studied the available BLAZAR documentation and, in agreement 
with Tim Bell (who is responsible for the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with 
the same problems) and how they could be managed in BLAZAR, by 
defining a new lease type (i.e. a fairShare lease) as an extension 
of the already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go through your proposal, but could you please post the 
main concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it, so we can discuss the implementation?
Of course, don't hesitate to ping me or the blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Russell Bryant
On 10/30/2014 09:15 PM, Adam Lawson wrote:
 I was thinking after reading all this; besides modifying the number of
 required patches, perhaps we could try a blind election; candidate names
 are removed so ballots have to be cast based on the merit of each
 candidate's responses to the questions and/or ideas - which I think
 effectively eliminates the possibility of partisan voting based on name
 recognition or based on the fact that they are well known as PTL for a
 specific project, i.e. nothing to do with the TC but their prominence
 within the development hierarchy.
 
 Or something along those lines. If we aren't electing names, we might as
 well cast ballots that eliminate them from the equation. ; ) Might be
 another 'when hell freezes over' suggestion but I thought I'd at least
 throw it out there for discussion.

I actually hope *nobody* votes purely on a candidacy email.  I would
hate to see someone get elected who was just able to write a really nice
email and does not otherwise participate regularly in the development
community.  I find someone's reputation based on the results they
produce from ongoing involvement in the project even more important.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Jay Pipes

On 10/31/2014 08:00 AM, Thierry Carrez wrote:

Vishvananda Ishaya wrote:

Another option:

3) People consider the lower choices on their list to be
equivalent. I personally tend to vote in tiers (these 3 are top
choices, these 3 are secondary choices, these 6 are third choices)
and I don’t differentiate individuals in the bottom tier so it ends
up unranked.


That's a bit how I vote too. I have more than 3 tiers, but I rank a
few people at the same level. And I count myself as fully
understanding how Condorcet works.


Same here (as Vishy). Typically three tiers plus equivs: 1 votes, 2 
votes, 3 votes and all rest 12s.


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Nikolay Starodubtsev
Hi Lisa, Sylvain,
I'll take a look at the blueprint next week and will try to leave some
feedback on it.
Stay tuned.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Erik Moe


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk networks 
and L2GW address different use cases.

Still I get the feeling that the proposals are being put up against each other.

Here are some examples of why bridging between Neutron internal networks using 
a trunk network and L2GW should, IMO, be avoided. I am still fine with bridging 
to external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN. 
The router has to be created on a Neutron network behind the L2GW, since the 
Neutron router cannot handle VLANs. (Maybe not too common a use case, but it 
shows what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the 
L2GW. Handling of the VM's IP addresses will most likely be affected, since the 
VM port is connected to several broadcast domains. Alternatively, a new API can 
be created.

In “VLAN aware VMs” the trunk port MAC address has to be globally unique, since 
it can be connected to any network; other ports still only have to be unique 
per network. But for L2GW, all MAC addresses have to be globally unique, since 
they might be bridged together at a later stage. Also, some implementations 
might not be able to take the VID into account when doing MAC address 
learning, forcing at least unique MACs on a trunk network.

The benefit of “VLAN aware VMs” is integration with existing Neutron services.
The benefits of trunk networks are lower consumption of Neutron networks and 
less management per VLAN.
The benefit of L2GW is the ease of network stitching.
There are other benefits to the different proposals; the point is that it 
might be beneficial to have all solutions.

Platforms that have issues forking off VLANs at the VM port level could get by 
with trunk network + L2GW, at the cost of more hacks if integration with other 
parts of Neutron is needed. Platforms that have issues implementing trunk 
networks could get by using “VLAN aware VMs”, at the cost of separately 
managing every VLAN as a Neutron network. On platforms that have both, the 
user can select the method depending on what is needed.

Thanks,
Erik



From: Armando M. [mailto:arma...@gmail.com]
Sent: den 28 oktober 2014 19:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Sorry for jumping into this thread late... there are lots of details to 
process, and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward, at 
the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I think 
it is sensible to adopt the latest spec system we have been using to 
understand where we are, and I mean Gerrit and the spec submissions.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging 
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor 
specific blueprint for now.

When I look at these I clearly see that we jump all the way to implementations 
details. From an architectural point of view, this clearly does not make a lot 
of sense.

In order to ensure that everyone is on the same page, I would suggest to have a 
discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible interactions 
that an actor (i.e. the tenant or the admin) can have with the system (an 
OpenStack deployment), when these NFV-enabling capabilities are available? What 
are the observed outcomes once these interactions have taken place?

- Management API: what abstractions do we expose to the tenant or admin (do we 
augment the existing resources, create new resources, or both)? This should 
obviously be driven by a set of use cases, and we need to identify the minimum 
set of logical artifacts that would let us meet the needs of the widest set of 
use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if 
anything, so that we can implement these NFV-enabling constructs successfully? 
Are there any changes to the core L2 API? Are there any changes required to the 
core framework (scheduling, policy, notifications, data model etc)?

- Add support to the existing plugin backends: the openvswitch reference 
implementation is an obvious candidate, but other plugins may want to leverage 
the newly defined capabilities too. Once the above mentioned points have been 
fleshed out, it should be fairly straightforward to have these efforts progress 
in autonomy.

IMO, until we can get a full understanding of the aspects above, I don't 
believe the core team is in the best position to determine the best 
approach forward; I think it's in 

[openstack-dev] [oslo.db] Marker based paging

2014-10-31 Thread Heald, Mike
Hi all,

I'm implementing paging on storyboard, and I wanted to ask why we decided to 
use marker-based paging. I have some opinions on this, but I want to keep my 
mouth shut until I find out what problem it was solving :)
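
For context, here is a minimal sketch of marker-based (keyset) paging next to
offset paging, assuming a table with a unique, sortable id column; the table
and column names are illustrative, not storyboard's actual schema:

import sqlite3

def get_page_offset(conn, page, page_size):
    # Offset paging: the database scans and discards page * page_size rows,
    # and entries can shift between requests if rows are inserted or deleted.
    return conn.execute(
        'SELECT id, title FROM stories ORDER BY id LIMIT ? OFFSET ?',
        (page_size, page * page_size)).fetchall()

def get_page_marker(conn, marker, page_size):
    # Marker paging: resume strictly after the last id the client saw.
    # The cost is an index seek, and pages stay stable under concurrent
    # writes; the client passes the last id it received back as the marker.
    return conn.execute(
        'SELECT id, title FROM stories WHERE id > ? ORDER BY id LIMIT ?',
        (marker, page_size)).fetchall()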

Thanks,
Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-31 Thread Kyle Mestery
On Mon, Oct 27, 2014 at 8:16 PM, Kyle Mestery mest...@mestery.com wrote:
 On Thu, Oct 23, 2014 at 3:22 PM, Kyle Mestery mest...@mestery.com wrote:
  As discussed during the neutron-drivers meeting this week [1], we're
  going to use one of the Neutron 40-minute design summit slots for
 lightning talks. The basic idea is we will have 6 lightning talks,
 each 5 minutes long. We will force a 5 minute hard limit here. We'll
 do the lightning talk round first thing Thursday morning.

 To submit a lightning talk, please add it to the etherpad linked here
 [2]. I'll be collecting ideas until after the Neutron meeting on
 Monday, 10-27-2014. At that point, I'll take all the ideas and add
 them into a Survey Monkey form and we'll vote for which talks people
 want to see. The top 6 talks will get a lightning talk slot.

 I'm hoping the lightning talks allow people to discuss some ideas
 which didn't get summit time, and allow for even new contributors to
 discuss their ideas face to face with folks.

 As discussed in the weekly Neutron meeting, I've setup a Survey Monkey
 to determine which 6 talks will get a slot for the Neutron Lightning
 Talk track at the Design Summit. Please go here [1] and vote. I'll
 collect results until Thursday around 2300UTC or so, and then close
 the poll and the top 6 choices will get a 5 minute lightning talk.

Thanks to all who voted for Lightning Talks! I've updated the etherpad
[100] with the list of talks which got the most votes. I'm also
copying them here for people who don't like clicking on links:

MPLS VPN - Orchestrating inter-datacenter connectivity - [Mohammad Hanif]
IPv6 as an Openstack network infrastructure - ijw (Ian Wells)
L2GW support - abstraction and reference implementation for extending
logical networks into physical networks [Maruti Kamat]
servicevm framework(tacker project) and l3 router poc (yamahata)
Verifying Neutron at 100-node scale (Rally, iperf) - [Ilya Shakhat]
Tips on getting reviewers to block your changes - [Kevin Benton]

I'm excited to see how these work and hope it proves useful for people.

Safe travels to Paris to all!

Kyle

[100] https://etherpad.openstack.org/p/neutron-kilo-lightning-talks

 Thanks!
 Kyle

 [1] https://www.surveymonkey.com/s/RLTPBY6

 Thanks!
 Kyle

 [1] 
 http://eavesdrop.openstack.org/meetings/neutron_drivers/2014/neutron_drivers.2014-10-22-15.02.log.html
 [2] https://etherpad.openstack.org/p/neutron-kilo-lightning-talks

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-10-31 Thread Ben Nemec
On 10/29/2014 10:17 AM, Kyle Mestery wrote:
 On Wed, Oct 29, 2014 at 7:25 AM, Hly henry4...@gmail.com wrote:


 Sent from my iPad

 On 2014-10-29, at 8:01 PM, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:

  Our current design removes all flows and then adds flows entry by entry;
  this causes every network node to break all tunnels to the other network
  nodes and all the compute nodes.
 Perhaps a way around this would be to add a flag on agent startup
 which would have it skip reprogramming flows. This could be used for
 the upgrade case.

 I hit the same issue last week and filed a bug here:
 https://bugs.launchpad.net/neutron/+bug/1383674

  From an operator's perspective this is VERY annoying, since you also cannot 
  push any config change that requires/triggers a restart of the agent;
  e.g. something simple like changing a log setting becomes a hassle.
  I would prefer the default behaviour to be to not clear the flows, or at 
  the least a config option to disable it.


  +1, we have also suffered from this even when applying a very small patch

 I'd really like to get some input from the tripleo folks, because they
 were the ones who filed the original bug here and were hit by the
 agent NOT reprogramming flows on agent restart. It does seem fairly
 obvious that adding an option around this would be a good way forward,
 however.

Since nobody else has commented, I'll put in my two cents (though I
might be overcharging you ;-).  I've also added the TripleO tag to the
subject, although with Summit coming up I don't know if that will help.

Anyway, if the bug you're referring to is the one I think, then our
issue was just with the flows not existing.  I don't think we care
whether they get reprogrammed on agent restart or not as long as they
somehow come into existence at some point.

It's possible I'm wrong about that, and probably the best person to talk
to would be Robert Collins since I think he's the one who actually
tracked down the problem in the first place.
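
For concreteness, a config-option sketch of what the startup flag suggested
above might look like, assuming oslo.config; the option name and wiring are
illustrative only, not an existing agent option:

from oslo.config import cfg

agent_opts = [
    cfg.BoolOpt('drop_flows_on_start',
                default=False,
                help='Reset the OVS flow tables when the agent starts. '
                     'When False, existing flows (and thus tunnels) are '
                     'left in place across agent restarts, e.g. during '
                     'upgrades or simple config changes.'),
]
cfg.CONF.register_opts(agent_opts, 'AGENT')

def setup_flows(bridge):
    # bridge is assumed to expose remove_all_flows(), as ovs_lib's
    # OVSBridge does; only rebuild from scratch when explicitly asked.
    if cfg.CONF.AGENT.drop_flows_on_start:
        bridge.remove_all_flows()
    # ...then (re)program any flows that are missing.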

-Ben


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mistral - Real Time integration

2014-10-31 Thread Raanan Ikar
Thank you Dimitri and Renat for your kind and swift replies. This was very
helpful; it will certainly drive or reinforce an option I am likely to promote
and consider taking for driving integration both inside and outside of
OpenStack.

I unfortunately won't be going to the OpenStack Summit this year, but please
feel free to reach me via Skype; my Skype ID is rbenikar, and my gtalk/gmail
ID is rbenikar as well. I am based on the East Coast, in RTP, NC, USA.

I believe on my new project I have begun working with a former US-based
Mirantis employee; his name is Koffi Nogbe. Small world.

Let me ask you as well: let's say I implement OpenStack out of the box,
without extending any of its own code, and thus provide access to the
OpenStack API unchanged. Can Mistral be interjected in such a way that,
during any API request, it plays a role in executing logic outside of
OpenStack, and then updates the data or input going into the request so that
OpenStack achieves the specific custom use case required? This would also be
quite powerful. Imagine a provider telling a customer: "We use OpenStack and
we simply allow you, the customer, to call its API directly. We have not
modified it, we have not extended it; we have simply banded together, hooked
in a mechanism to evaluate and then perform additional work to complete your
use case."

Example use cases:
1) Auto-generate machine names based on simple OS, environment, and business
or tenant-based details, with the ability to override this naming convention
by allowing the customer to specify their own VM name (in the event they are
migrating existing machines), and allowing CRUD DB operations to check, keep
track of, and record the machine names allocated.
2) Execute approval workflows as well as generate automatic service tickets
and audit trails in systems used by the customer outside of the confines of
OpenStack.
3) Integrate with the customer's ESB to drive other automation requirements
outside the confines of OpenStack. For example, register the machine name,
its IPAM-determined IP, the primary contact of the app, the system owner, the
type of machine env [prod, dev, qa, pre-prod, d/r, ] , etc. inside a SOR, and
then synchronize that detail with enterprise security products which now know
what type of compliance and technical vulnerability checking will be executed
against and on this machine.
4) Perform, via REST API after the machine is created, various REST
interactions to tell reporting and monitoring systems to begin agent-based
and agent-less monitoring of the machine, to begin scanning the machine via
the network, to begin performing remote-logon OS baseline analysis, and to
begin performing ETLs of data residing in other databases into database
instances running in the cloud (e.g. for clinical research, pull data from
live systems, cleanse it of patient information, leaving only clinical data,
and place it inside a DB in a VM; finally, register BI tools, SAS/SPSS, and
other analytical platforms to begin using the local DB source, and then load
their data sets with this tool so the analyst can begin shortly after
provisioning without doing any work asking for the data and getting proper
security clearances, etc. Convert a 6-8 week task into under 30 minutes).

I also wonder if Mistral could be extended to offer a second type of feature:
an API proxy to OpenStack. If this feature were added, all requests to
OpenStack could be proxied via Mistral; the REST operations would remain
unaltered. However, one could then enable workflow logic where it is required
for any given operation. For example, a user submits a machine creation
request. During this, a WF invokes an external BPM suite to perform an
external approval workflow; the API proxy's WF waits until the approval is
complete, and then proceeds to create the VM. One could also auto-generate
the VM name. Let's say the standard API has required inputs that cannot be
left null; with the proxy, one could have use cases where, when an input is
left null, there is logic to auto-generate this value based on business rules
pulling data from other systems. This proxy and WF could also integrate with
a DB or ESB, and there can be out-of-band WF triggers: after one WF completes
and places a message inside the ESB or in the DB, a secondary WF detects this
and performs the next WF run required. Now you have WF when you need it, a
scheduling mechanism, a proxy for direct in-band integration, and the trigger
for out-of-process integration.

Please feel free to contact me. It would be a pleasure to speak with you. I
first want to understand Mistral as well as possible, and will begin testing
it with DevStack in my lab.

Thanks,
RJ









On Fri, Oct 31, 2014 at 5:03 AM, Renat Akhmerov rakhme...@mirantis.com
wrote:

 Hi Raanan,

 In addition I would say that what you described after “Secondly” is one of
 the most important reasons why Mistral exists. This is probably its
 strongest side (not sure what else can fit so well).

 Just in case, 

Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Lisa

Hi Nikolay,

many thanks.
Cheers,
Lisa

On 31/10/2014 14:10, Nikolay Starodubtsev wrote:

Hi Lisa, Sylvain,
I'll take a look at the blueprint next week and will try to leave some 
feedback on it.

Stay tuned.


Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Sean Dague
On 10/31/2014 08:29 AM, Russell Bryant wrote:
 On 10/30/2014 09:15 PM, Adam Lawson wrote:
 I was thinking after reading all this; besides modifying the number of
 required patches, perhaps we could try a blind election; candidate names
 are removed so ballots have to be cast based on the merit of each
 candidate's responses to the questions and/or ideas - which I think
 effectively eliminates the possibility of partisan voting based on name
 recognition or based on the fact that they are well known as PTL for a
 specific project, i.e. nothing to do with the TC but their prominence
 within the development hierarchy.
 
 Or something along those lines. If we aren't electing names, we might as
 well cast ballots that eliminate them from the equation. ; ) Might be
 another 'when hell freezes over' suggestion but I thought I'd at least
 throw it out there for discussion.
 
 I actually hope *nobody* votes purely on a candidacy email.  I would
 hate to see someone get elected who was just able to write a really nice
 email and does not otherwise participate regularly in the development
 community.  I find someone's reputation based on the results they
 produce from ongoing involvement in the project even more important.

+1

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] MOS Infra weekly report (29 Oct 2014 - 31 Oct 2014)

2014-10-31 Thread Sergey Lukjanov
Hi colleagues,

here you can find the weekly report for MOS Infra activity -
https://mirantis.jira.com/wiki/display/MOSI/2014/10/31/MOSI+Weekly+Report%2C+Phase+%231%2C+29+Oct+2014+-+31+Oct+2014

Please, note that it includes the previous three days, previous reports
could be found in the MOSI space blog -
https://mirantis.jira.com/wiki/pages/viewrecentblogposts.action?key=MOSI

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MOS Infra weekly report (29 Oct 2014 - 31 Oct 2014)

2014-10-31 Thread Sergey Lukjanov
Heh, wrong mailing list :(

On Fri, Oct 31, 2014 at 7:02 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi colleagues,

 here you can find the weekly report for MOS Infra activity -
 https://mirantis.jira.com/wiki/display/MOSI/2014/10/31/MOSI+Weekly+Report%2C+Phase+%231%2C+29+Oct+2014+-+31+Oct+2014

 Please, note that it includes the previous three days, previous reports
 could be found in the MOSI space blog -
 https://mirantis.jira.com/wiki/pages/viewrecentblogposts.action?key=MOSI

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Matt Joyce
On one hand, I agree a member of the TC should be a very active member
of the development community.  Something I have not been, much to my shame.

However, there are obviously some fundamental issues in how the TC has been
governing OpenStack in the past few releases.  Very serious issues in the
project have been largely ignored.  Foremost in my mind, among them, is
the lack of an upgradability path.  I remember there being large discussion
and agreement to address this at folsom, and further back.  I have seen no
meaningful effort made to address a functionality requirement that has been
requested repeatedly and emphatically since as far back as austin.

I can raise other issues that continue to plague usership, such as neutron
failing to take over for nova-network now two releases after its planned
obsolescence.  My concern is that the TC, comprised entirely of active
developers ( most of whom are full time on the open source side of this
project ), is trapped in something of an echo chamber.  I have no real
reason to suggest this is the case, beyond the obvious failure by the
project to address concerns that have been paramount in the eyes of users
for years now.  But the concern lingers.

I fear that the TC is beholden entirely to the voice of the development
community and largely ignorant of the concerns of others.  Certainly,
the incentives promote that.  The problem, of course, is that the TC is
responsible for driving prerogatives in development that reflect more
than the development community's desires.

-Matt



On Fri, Oct 31, 2014 at 11:25:13AM -0400, Sean Dague wrote:
 On 10/31/2014 08:29 AM, Russell Bryant wrote:
  On 10/30/2014 09:15 PM, Adam Lawson wrote:
  I was thinking after reading all this; besides modifying the number of
  required patches, perhaps we could try a blind election; candidate names
  are removed so ballots have to be cast based on the merit of each
  candidate's responses to the questions and/or ideas - which I think
  effectively eliminates the possibility of partisan voting based on name
  recognition or based on the fact that they are well known as PTL for a
  specific project, i.e. nothing to do with the TC but their prominence
  within the development hierarchy.
  
  Or something along those lines. If we aren't electing names, we might as
  well cast ballots that eliminate them from the equation. ; ) Might be
  another 'when hell freezes over' suggestion but I thought I'd at least
  throw it out there for discussion.
  
  I actually hope *nobody* votes purely on a candidacy email.  I would
  hate to see someone get elected who was just able to write a really nice
  email and does not otherwise participate regularly in the development
  community.  I find someone's reputation based on the results they
  produce from ongoing involvement in the project even more important.
 
 +1
 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] API Working Group sessions

2014-10-31 Thread Everett Toews
The schedule for the Paris sessions has been finalized and the API Working 
Group has two design summit sessions and one follow up session on Thursday! 

Part 1
Tuesday November 4, 2014 11:15 - 11:55 
Room: Manet
http://kilodesignsummit.sched.org/event/13fc460f359646dcd41d6a2d7ad0bec0

Part 2
Tuesday November 4, 2014 12:05 - 12:45 
Room: Manet
http://kilodesignsummit.sched.org/event/6dda98fe267192ed9f24aba4b7c68252

Follow up session
Thursday November 6, 2014 16:30 - 18:00 
Room: Hyatt - Vendome room (Hyatt Hotel)

Etherpad for all
https://etherpad.openstack.org/p/kilo-crossproject-api-wg

Hope to see those who are Paris bound at those sessions. We’ll need the eyes, 
ears, and keyboards of everyone in attendance to capture as much information as 
possible in the Etherpad for those who won’t be able to attend in person.

Cheers,
Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Working Group sessions

2014-10-31 Thread Everett Toews
Link to the follow up session

Thursday November 6, 2014 16:30 - 18:00 
Room: Hyatt - Vendome room (Hyatt Hotel)
http://kilodesignsummit.sched.org/event/3f0a5f22f2d641ef69965373f3e23983

Everett


On Oct 31, 2014, at 11:19 AM, Everett Toews everett.to...@rackspace.com wrote:

 The schedule for the Paris sessions has been finalized and the API Working 
 Group has two design summit sessions and one follow up session on Thursday! 
 
 Part 1
 Tuesday November 4, 2014 11:15 - 11:55 
 Room: Manet
 http://kilodesignsummit.sched.org/event/13fc460f359646dcd41d6a2d7ad0bec0
 
 Part 2
 Tuesday November 4, 2014 12:05 - 12:45 
 Room: Manet
 http://kilodesignsummit.sched.org/event/6dda98fe267192ed9f24aba4b7c68252
 
 Follow up session
 Thursday November 6, 2014 16:30 - 18:00 
 Room: Hyatt - Vendome room (Hyatt Hotel)
 
 Etherpad for all
 https://etherpad.openstack.org/p/kilo-crossproject-api-wg
 
 Hope to see those who are Paris bound at those sessions. We’ll need the eyes, 
 ears, and keyboards of everyone in attendance to capture as much information 
 as possible in the Etherpad for those who won’t be able to attend in person.
 
 Cheers,
 Everett
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Russell Bryant
On 10/31/2014 12:17 PM, Matt Joyce wrote:
 On one hand, I agree a member of the TC should be a very active member
 of the development community.  Something I have not been, much to my shame.
 
 However, there are obviously some fundamental issues in how the TC has been
 governing OpenStack in the past few releases.  Very serious issues in the
 project have been largely ignored.  Foremost in my mind, among them, is
 the lack of an upgradability path.  I remember there being large discussion
 and agreement to address this at folsom, and further back.  I have seen no
 meaningful effort made to address a functionality requirement that has been
 requested repeatedly and emphatically since as far back as austin.

I actually think there has been good progress here.

Nova, for example, has been working very hard on this for several
releases.  Icehouse was the first release where you were able to do a
rolling upgrade of your compute nodes.  This is tested by our CI system,
as well.

This continues to be a priority in Nova and across OpenStack.  Nova's
support for rolling upgrades continues to be improved.  We're also
seeing a push to apply what we've learned and implemented for Nova
across other projects.  We have a session about this in the
cross-project track.  There's a session in the Cinder track to discuss
it.  There's a related session in the Nova track.  There's a session in
the Oslo track about the shared code part ... it is a priority, and very
good work is happening.

 I can raise other issues that continue to plague usership, such as neutron
 failing to take over for nova-network now two releases after its planned
 obsolescence.  My concern is that the TC, comprised entirely of active
 developers ( most of whom are full time on the open source side of this
 project ), is trapped in something of an echo chamber.  I have no real
 reason to suggest this is the case, beyond the obvious failure by the 
 project to address concerns that have been paramount in the eyes of users
 for years now.  But, the concern lingers.  

Again, this is an issue that has not been ignored.

In the Juno cycle, the TC did a series of project reviews, where we
identified key gaps between the project's current status and what we
expect from an integrated project.  Getting Neutron to where it can
finally deprecate and replace nova-network is the top issue for Neutron.
 We worked with the Neutron team to write up a gap analysis and
remediation plan [2].  A lot of very good progress was made in the Juno
cycle.  The Neutron team did a nice job.

I'm personally very hopeful that we can wrap up this deprecation issue
in the Kilo cycle.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html
[2]
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] new meeting channel

2014-10-31 Thread Doug Wiegley
Hi all,

In trying to find some alternate times for the Neutron LBaaS meeting, all
of the slots that fall roughly in business-ish hours for the US/Europe are
jam packed (at least on the days that aren’t risking long weekends.)  I’d
like to propose adding #openstack-meeting-4 to alleviate this.

Ttx pointed out that the number of meeting channels is a tradeoff between
keeping things spread out to avoid too many overlapping meetings, and
capacity.  Infra and ttx don’t have a problem adding a new channel, in
theory (“it’s probably fine.”)

I think we’ve reached the point that a new channel would help with
scheduling.

Thanks,
Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift]Questions on concurrent operations on the same object

2014-10-31 Thread jordan pittier
Hi guys,

We are currently benchmarking our Scality object server backend for Swift. We 
basically created a new DiskFile class that is used in a new ObjectController 
that inherits from the native server.ObjectController. It's pretty similar to 
how Ceph can be used as a backend for Swift objects. Our DiskFile is used to 
make HTTP requests to the Scality Ring, which supports GET/PUT/DELETE on 
objects.

Scality implementation is here : 
https://github.com/scality/ScalitySproxydSwift/blob/master/swift/obj/scality_sproxyd_diskfile.py

We are using ssbench to benchmark, and when the concurrency is high, we 
sometimes see interleaved operations on the same object. For example, our 
DiskFile will be asked to DELETE an object while the object is currently being 
PUT by another client. The Scality Ring doesn't support multiple writers on 
the same object, so a lot of ssbench operations fail with an HTTP response 
'423 - Object is locked'.

We dove into the ssbench code and saw that it should not do interleaved 
operations. By adding some logging in our DiskFile class, we came to suspect 
that the object server doesn't wait for the put() method of the DiskFileWriter 
to finish before returning HTTP 200 to the Swift proxy. Is this explanation 
correct? Our put() method in the DiskFileWriter can take some time to 
complete, which would explain why the PUT on the object is still being 
finalized when a DELETE arrives.

Some questions:
1) Is it possible that the put() method of the DiskFileWriter is somehow 
non-blocking? (Or that the result of put() is not awaited?) If not, how could 
ssbench think that an object is completely PUT and that it is allowed to 
delete it?
2) If someone could explain to me in a few words (or more :)) how Swift deals 
with multiple writers on the same object, that would be very much appreciated.
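
For reference, here is the synchronous flow we expected, as a minimal sketch;
the method names follow the DiskFile interface as we understand it, but treat
the details as our assumption, not a statement about upstream Swift:

# Sketch of an object-server PUT as we understand it: put() should block
# until the backend commit completes, so success is only reported after
# the object is durable. Names follow the DiskFile interface loosely.
def handle_put(disk_file, wsgi_input, size, metadata):
    with disk_file.create(size=size) as writer:
        for chunk in iter(lambda: wsgi_input.read(65536), b''):
            writer.write(chunk)
        writer.put(metadata)  # must not return before the commit finishes
    return '201 Created'      # only now should the client see success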

Thanks a lot,
Jordan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Clint Byrum
Excerpts from Matt Joyce's message of 2014-10-31 09:17:23 -0700:
 On one hand, I agree a member of the TC should be a very active member
 of the development community.  Something I have not been, much to my shame.
 
 However, there are obviously some fundamental issues in how the TC has been
 governing OpenStack in the past few releases.  Very serious issues in the
 project have been largely ignored.  Foremost in my mind, among them, is
 the lack of an upgradability path.  I remember there being large discussion
 and agreement to address this at folsom, and further back.  I have seen no
 meaningful effort made to address a functionality requirement that has been
 requested repeatedly and emphatically since as far back as austin.
 

I'm not sure the TC can do this. The time is invested where those with
time to invest see fit. So if there are features or bugs that need work
from a user perspective, then I would argue the problem isn't the TC,
but a general lack of communication between users and developers. That
is not a new idea, but it is also not something the TC is really in a
position to fix.

That being said, I think the issues you're talking about are _massive_
flaws that take many releases to address. As Russell said in his response,
many of these are getting much better. This is a common problem in
development, where it looks like nothing is getting better in some areas,
because some problems are just _hard_ to solve.

 I can raise other issues that continue to plague usership, such as neutron
 failing to take over for nova-network now two releases after its planned
 obsolescence.  My concern is that the TC, comprised entirely of active
 developers ( most of whom are full time on the open source side of this
 project ), is trapped in something of an echo chamber.  I have no real
 reason to suggest this is the case, beyond the obvious failure by the 
 project to address concerns that have been paramount in the eyes of users
 for years now.  But, the concern lingers.  
 
 I fear that the TC is beholden entirely to the voice of the development
 community and largely ignorant of the concerns of others.  Certainly,
 the incentives promote that.  The problem of course, is that the TC is
 responsible for driving prerogatives in development that reflect more
 than the development community's desires.
 

I do wonder if we should try to encourage some of our operators to join
the TC. I'm not really sure how to do that, but I'd certainly put a
seasoned operator high on the list if they stood for election and
presented a strong case.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new meeting channel

2014-10-31 Thread Kyle Mestery
On Fri, Oct 31, 2014 at 12:15 PM, Doug Wiegley do...@a10networks.com wrote:
 Hi all,

 In trying to find some alternate times for the Neutron LBaaS meeting, all
 of the slots that fall roughly in business-ish hours for the US/Europe are
 jam packed (at least on the days that aren’t risking long weekends.)  I’d
 like to propose adding #openstack-meeting-4 to alleviate this.

 Ttx pointed out that the number of meeting channels is a tradeoff between
 keeping things spread out to avoid too many overlapping meetings, and
 capacity.  Infra and ttx don’t have a problem adding a new channel, in
 theory (“it’s probably fine.”)

 I think we’ve reached the point that a new channel would help with
 scheduling.

I'm +1 to this. I've also recently run into trouble when trying to
schedule a meeting.

One thing I'd also like to point out is that it would be great if
people could look over any meetings they have on the schedule which
are not currently being run and clean those up. Not saying this would
alleviate the need for the new channel, but it's probably a good task
to take on every quarter at least to cleanup meetings which are no
longer active.

Thanks,
Kyle

 Thanks,
 Doug

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, queues, consumption.

2014-10-31 Thread Tim Simpson
Hi Denis,

It seems like the issue you're trying to solve is that these 'prepare' messages 
can't be consumed by the guest.
So, if the guest never actually comes online and therefore can't consume the 
prepare call, then you'll be left with the message in the queue forever.

If you use a ping-pong message, you'll still be left with a stray message in 
the queue if it fails.

I think the best fix is to delete the queue when deleting an instance. This 
way you'll never have more queues in rabbit than are needed.
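
A minimal sketch of that idea, assuming kombu and a per-instance topic queue
(both the queue naming and the helper are illustrative, not Trove's actual
code):

from kombu import Connection, Queue

def delete_guest_queue(rabbit_url, instance_id):
    # Hypothetical per-guest queue name; Trove's real naming may differ.
    queue_name = 'guestagent.%s' % instance_id
    with Connection(rabbit_url) as conn:
        bound = Queue(queue_name, channel=conn.channel())
        # Removes the queue along with any stray 'prepare' message in it.
        bound.delete()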

Thanks,

Tim



From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, October 31, 2014 4:32 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, 
queues, consumption.


Hello, Stackers/Trovers.



I’d like to start discussion about how do we use guestagent API that will 
eventually be evaluated as a spec. For most of you who well-known with Trove’s 
codebase knows how do Trove acts when provisioning new instance.

I’d like to point out next moments:

  1.  When we provision a new instance, we expect that the guest will create 
its topic/queue for RPC messaging needs.

  2.  Taskmanager doesn't validate that the guest is really up before sending 
the 'prepare' call.

And here comes the problem: what if the guest wasn't able to start properly 
and consume the 'prepare' message due to certain circumstances? In this case 
the 'prepare' message would never be consumed.


Sergey Gotliv and I were looking for a proper solution for this case, and we 
ended up with the following requirements for the provisioning workflow:

  1.  We must be sure that the 'prepare' message will be consumed by the guest.

  2.  Taskmanager should handle topic/queue management for the guest.

  3.  The guest just needs to consume incoming messages from an already 
existing topic/queue.

As a concrete proposal (or at least a topic for discussion), I'd like to 
suggest the following improvement:

We need to add a new guest RPC API call that represents a “ping-pong” action, 
as sketched below. Before sending any cast- or call-type messages, we need to 
make sure that the guest is really running.
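
A rough sketch of the check, assuming oslo.messaging; the 'ping' method, the
topic naming, and the retry policy are illustrative only:

from oslo import messaging
from oslo.config import cfg

def wait_for_guest(instance_id, retries=10, timeout=5):
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='guestagent.%s' % instance_id)
    client = messaging.RPCClient(transport, target)
    for _ in range(retries):
        try:
            # call() blocks for a reply, proving the guest consumes messages.
            client.prepare(timeout=timeout).call({}, 'ping')
            return True
        except messaging.MessagingTimeout:
            continue
    return False  # fast-fail: never send 'prepare' to a dead guest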


Pros/cons of such a solution:

  1.  The guest will only consume messages.

  2.  The guest will not manage its topics/queues.

  3.  We'll be 100% sure that no messages will be lost.

  4.  Fast-fail during provisioning.

  5.  Other minor/major improvements.



Thoughts?


P.S.: I’d like to discuss this topic during upcoming Paris summit (during 
contribution meetup at Friday).



Best regards,

Denis Makogon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Kyle Mestery
On Fri, Oct 31, 2014 at 12:12 PM, Russell Bryant rbry...@redhat.com wrote:
 On 10/31/2014 12:17 PM, Matt Joyce wrote:
 On one hand, I agree a member of the TC should be a very active member
 of the development community.  Something I have not been, much to my shame.

 However, there are obviously some fundamental issues in how the TC has been
 governing OpenStack in the past few releases.  Very serious issues in the
 project have been largely ignored.  Foremost in my mind, among them, is
 the lack of an upgradability path.  I remember there being large discussion
 and agreement to address this at folsom, and further back.  I have seen no
 meaningful effort made to address a functionality requirement that has been
 requested repeatedly and emphatically since as far back as austin.

 I actually think there has been good progress here.

 Nova, for example, has been working very hard on this for several
 releases.  Icehouse was the first release where you were able to do a
 rolling upgrade of your compute nodes.  This is tested by our CI system,
 as well.

 This continues to be a priority in Nova and across OpenStack.  Nova's
 support for rolling upgrades continues to be improved.  We're also
 seeing a push to apply what we've learned and implemented for Nova
 across other projects.  We have a session about this in the
 cross-project track.  There's a session in the Cinder track to discuss
 it.  There's a related session in the Nova track.  There's a session in
 the Oslo track about the shared code part ... it is a priority, and very
 good work is happening.

 I can raise other issues that continue to plague usership, such as neutron
  failing to take over for nova-network now two releases after its planned
  obsolescence.  My concern is that the TC, comprised entirely of active
 developers ( most of whom are full time on the open source side of this
 project ), is trapped in something of an echo chamber.  I have no real
 reason to suggest this is the case, beyond the obvious failure by the
 project to address concerns that have been paramount in the eyes of users
 for years now.  But, the concern lingers.

 Again, this is an issue that has not been ignored.

 In the Juno cycle, the TC did a series of project reviews, where we
 identified key gaps between the project's current status and what we
 expect from an integrated project.  Getting Neutron to where it can
 finally deprecate and replace nova-network is the top issue for Neutron.
  We worked with the Neutron team to write up a gap analysis and
 remediation plan [2].  A lot of very good progress was made in the Juno
 cycle.  The Neutron team did a nice job.

 I'm personally very hopeful that we can wrap up this deprecation issue
 in the Kilo cycle.

++

The Neutron team made this a priority focus on Juno, and in Kilo we'll
wrap this up and we should be able to declare nova-network as
deprecated in Kilo. This was a large undertaking but it's been great
to have the support of many people in the community on this.

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html
 [2]
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Oct 31 2014 [trove] [sahara]

2014-10-31 Thread Anne Gentle
For those of you with this crazy creepy candy holiday, happy Halloween to
you!

Here's a rundown of this week in docs.

The install guide has been tested for the main core projects on RHEL and
Ubuntu -- trove and sahara, we need more testing for the install guide on
your projects specifically. See
https://wiki.openstack.org/wiki/JunoDocTesting for a test matrix.

The HA Guide team is seeking input for revising and updating the HA Guide,
please help them as much as you are able.

Also, an attempt to compile a docs-related Summit schedule. Your interests
may vary widely but here are some items of interest.

Monday
You'll Never Look the Same Way at Developer Support Again Room 242AB
Monday, November 3 • 11:40 - 12:20 http://sched.co/1qeOhdF
HA Guide team lunch meetup Monday, November 3 • 13:00, talk to Tushar
Katarki for location
The Application Ecosystem Working Group Degas Monday, November 3 • 14:30 -
15:10 http://sched.co/1nfHtOP
Writing Enterprise Docs with an Upstream. A Life Lesson in Giving Back.
Salle Passy Monday, November 3 • 17:10 - 17:50 http://sched.co/1qeSZbp

Tuesday
API Working Group (Part 1) Manet Tuesday, November 4 • 11:15 - 11:55
http://sched.co/1xvj
API Working Group (Part 2) Manet Tuesday, November 4 • 12:05 - 12:45
http://sched.co/1xzB
OpenStack Design Guide Panel Salle Passy Tuesday, November 4 • 14:00 -
14:40 http://sched.co/1qfpXs2
Scaling Documentation Across Projects Degas Tuesday, November 4 • 14:50 -
15:30 https://etherpad.openstack.org/p/kilo-crossproject-scaling-docs
Training team meet at doc pod (Le Méridien) Tuesday, November 4 • 13:00,
talk to Sean Roberts or Pranav Salunke or Roger Luethi

Wednesday
Not really doc program scope but still, yay docs: Infra: Infra-manual
Wednesday, November 5 • 09:50 - 10:30 http://sched.co/1CINIOZ

Thursday
Ops Summit: Hacking Documentation for Operators Hyatt - Palais Royal (Hyatt
Hotel) Thursday, November 6 • 09:00 - 10:30 http://sched.co/1v8h
API Working Group Follow up session Room: Hyatt - Vendome room (Hyatt
Hotel) Thursday November 6, 2014 16:30 - 18:00

Friday
Docs team meetup at the doc pod (Le Méridien) Friday, November 7, 2014
09:00-12:30 Let's meet and gather thoughts from the week, discuss anything
from this etherpad that remains:
https://etherpad.openstack.org/p/docstopicsparissummit

Last but not least, we have a stable/juno branch now for the
openstack-manuals repository. Thanks to everyone who made that happen this
week! Nice work.

Anne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]

2014-10-31 Thread Jesse Cook


On 10/31/14, 3:21 AM, Flavio Percoco fla...@redhat.com wrote:

On 28/10/14 22:18 +, Jesse Cook wrote:


On 10/27/14, 6:08 PM, Jay Pipes jaypi...@gmail.com wrote:

On 10/27/2014 06:18 PM, Jesse Cook wrote:
 In the glance mini-summit there was a request for some documentation
on
 the architecture ideas I was discussing relating to: 1) removing data
 consistency as a concern for glance 2) bootstrapping vs baking VMs

 Here's a rough draft:
https://gist.github.com/CrashenX/8fc6d42ffc154ae0682b

Hi Jesse!

A few questions for you, since I wasn't at the mini-summit and I think I
don't have a lot of the context necessary here...

1) In the High-Level Architecture diagram, I see Glance Middleware
components calling to a Router component. Could you elaborate what
this Router component is, in relation to what components currently exist
in Glance and Nova? For instance, is the Router kind of like the
existing Glance Registry component? Or is it something more like the
nova.image.download modules in Nova? Or something entirely different?

It's a high-level abstraction. It's close to being equivalent to the cloud
icon you find in many architecture diagrams, but not quite that vague. If
I had to associate it with an existing OpenStack component, I'd probably
say nova-scheduler. There is much detail to be fleshed out here. I have
some additional thoughts and documentation that I'm working on that I will
share once it is more fleshed out. Ultimately, I would like to see a fully
documented prescriptive architecture that we can iterate over to address
some of the complexities and pain points within the system as a whole.


2) The Glance Middleware. Do you mean WSGI middleware here? Or are you
referring to something more like the existing nova.image.api module that
serves as a shim over the Glance server communication?

At the risk of having something thrown at me, what I am suggesting is a
move away from Glance as a service to Glance as a purely functional API.
At some point caching would need to be discussed, but I am intentionally
neglecting caching and the existence of any data store as there is a risk
of complecting state. I want to avoid discussions on performance until
more important things can be addressed such as predictability,
reliability, scalability, consistency, maintainability, extensibility,
security, and simplicity (i.e. As defined by Rich Hickey).


Hi Jessee,

I, unfortunately, missed your presentation at the virtual mini summit
so I'm trying to catch up and to understand what you're proposing.

As far as I understand, your proposal is to hide Glance from public
access and just make it consumable by other services like Nova,
Cinder, etc through a, perhaps more robust, glance library. Did I
understand correctly?

Or, are you suggesting to get rid of glance's API entirely and instead
have it in the form of a library and everything would be handled by
the middleware you have in your diagram?

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

Hi Flavio,

The diagram suggests more of the latter than the former. However, neither
is quite right, and glance's deployment and public interface are
ancillary.

The documentation I presented is a bit rough, and I need to make updates
to it (which I plan to get to over the next week or so). However, let me
clarify a few things.

1) Ability to obtain metadata or image information should not be taken
from the user.
2) Ideally glance would be stateless. Forget performance for a minute and
imagine glance without a cache or a database.
3) Metadata will be stored in the individual objects themselves (within
the object store).
4) The most important function of glance would be to provide a consistent
API for listing and filtering of objects. I can see an implementation
where glance provides an upload, download, and delete operation. I can
also see an implementation where these operations are not within the
glance core code. In either case these operations could be composed with
the list and filter operations.

It seems that glance is currently trying to perform several major
functions, instead of focusing on a single principal function:

1) Glance as a cache
   * Should this be the principal function of glance? I would say no.
Reinventing caching and turning it into a service doesn't seem to add the
most value.
   * Note: There seems to be debate about whether it should stay
write-through or become write-around
2) Glance as a metadata service
   * Should this be the principal function of glance? Again, I would say
no. Although this is much closer to the principal customer value, I don't
think it's quite on target.
   * This inherently suggests splitting data ownership between two parts
of the system which is a recipe for complex systems. The objects own the
data (arguably the object store, but this is a little nuanced), and glance
owns the metadata.
3) Glance as a consistent API
   * Should this be the principal function of glance? I think so. An
operator can switch between 

Re: [openstack-dev] [glance] Permissions differences for glance image-create between Icehouse and Juno

2014-10-31 Thread Nikhil Komawar
 Did you test this with an admin user?
Yes, I did test it with an admin user.

 may be caused by an upgrade from Icehouse to Juno.
Possibly, good point.

Thanks,
-Nikhil


From: Flavio Percoco [fla...@redhat.com]
Sent: Friday, October 31, 2014 4:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Permissions differences for glance 
image-create between Icehouse and Juno

On 31/10/14 04:57 +, Nikhil Komawar wrote:
Hi Jay,

Wanted to clarify a few things around this:

1. are you using --is_public or --is-public option?
2. are you using the stable/juno branch, or is it an rc(1/2/3) from ubuntu packages?

After trying out:

glance image-create --is-public=True --disk-format qcow2 --container-format 
bare --name foobar --file
/opt/stack/data/glance/images/5be32fc4-e063-4032-b248-516c7ab7116b

the command seems to be working on the latest devstack setup with the branch 
stable/juno used for glance.

Did you test this with an admin user?

The policy file in your paste looks fine too.

As nothing out of the ordinary seems to be wrong, I hope this intuitive
suggestion helps: the filesystem store config may be mismatched (possibly 
there are 2 options).


I haven't had the chance to test this but my guess is that Jay's issue
may be caused by an upgrade from Icehouse to Juno.

I'll hopefully be able to give this a try today.
Fla.


Thanks,
-Nikhil


From: Tom Fifield [t...@openstack.org]
Sent: Monday, October 27, 2014 9:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Permissions differences for glance 
image-create between Icehouse and Juno

Sorry, early morning!

I can confirm that in your policy.json there is:

"publicize_image": "role:admin",

which seems to match what's needed :)
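
For anyone who instead wants the pre-Juno behaviour back, the fix from the
release note would look roughly like this in etc/policy.json (a sketch; an
empty rule means no role restriction, so any user may publicize):

    "publicize_image": "",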

Regards,


Tom

On 28/10/14 10:18, Jay Pipes wrote:
 Right, but as you can read below, I'm using an admin to do the operation...

 Which is why I'm curious what exactly I'm supposed to do :)

 -jay

 On 10/27/2014 09:04 PM, Tom Fifield wrote:
 This was covered in the release notes for glance, under Upgrade notes:

 https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_3

 * The ability to upload a public image is now admin-only by default. To
 continue to use the previous behaviour, edit the publicize_image flag in
 etc/policy.json to remove the role restriction.

 Regards,


 Tom

 On 28/10/14 01:22, Jay Pipes wrote:
 Hello Glancers,

 Peter and I are having issues working with a Juno Glance endpoint.
 Specifically, a glance image-create ... --is_public=True CLI command
 that *was* working in our Icehouse cloud is now failing in our Juno
 cloud with a 403 Forbidden.

 The specific command in question is:

 glance image-create --name cirros-0.3.2-x86_64 --file
 /var/tmp/cirros-0.3.2-x86_64-disk.img --disk-format qcow2
 --container-format bare --is_public=True

 If we take off the is_public=True, everything works just fine. We are
 executing the above command as a user called admin having the role admin
 in a project called admin.

 We have enabled debug=True conf option in both glance-api.conf and
 glance-registry.conf, and unfortunately, there is no log output at all,
 other than spitting out the configuration option settings on daemon
 startup and a few messages like Loaded policy rules: ... which don't
 actually provide any useful information about policy *decisions* that
 are made... :(

 Any help is most appreciated. Our policy.json file is the stock one that
 comes in the Ubuntu Cloud Archive glance packages, i.e.:

 http://paste.openstack.org/show/125420/

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV][Telco] Working Group Session

2014-10-31 Thread Steve Gordon
Hi all,

I wanted to highlight that as mentioned in the OpenStack NFV subteam meeting 
yesterday [1] there is an Ops Summit session aiming to bring together those
interested in the use of OpenStack to provide the infrastructure for 
communication services. This includes, but is not limited to:

- Communication service providers
- Network equipment providers
- Developers

Anyone else interested in this space is also of course welcome to join. Among 
other things we will discuss use cases and requirements, both for new 
functionality and improving existing functionality to meet the needs of this 
sector. We will also discuss ways to work more productively with the OpenStack 
community.

The session is currently scheduled to occur @ 9 AM on Thursday in the 
Batignolles room at the Hyatt Hotel near the summit venue (this hotel also 
hosts a number of other sessions). For more details see the session entry in 
the sched.org schedule [2].

Thanks,

Steve

[1] http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-10-30-16.03.html
[2] http://kilodesignsummit.sched.org/event/b3ccf1464e335b703fc126f068142792

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Cookie collision between Horizon Stacktach

2014-10-31 Thread Aaron Sahlin
I was posed this question, but am not familiar with Horizon or StackTach 
cookie management.  Anyone know what the issue might be?


Issue: Logging into one site logs you out of the other. (horizon/stacktach)

First I open horizon and notice there are two cookies: csrftoken 
(horizon) and sessionid. I log into Horizon, then open up a new tab and 
log into stacktach (same domain, different port). After logging into 
stacktach, there's another cookie created named 
beaker.session.stacktach.  I go back to the horizon dashboard and get 
logged off after clicking anything. After trying to log back in, this 
error comes up: Your Web browser doesn't appear to have cookies 
enabled. Cookies are required for logging in. I then clear the cookies 
and am able to log in, but see this error message: Forbidden (403) CSRF 
verification failed. Request aborted. I go back to the Horizon log in 
page, finally log in, go to stacktach tab and am logged out of that.


Note that stacktach is at a separate port on the controller and uses 
beaker to create the cookie session. I've read that cookies aren't 
port-specific on the same domain name, but should still work with
different cookie names. I've also tried changing the paths on the
stacktach urls, but no luck there either.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Cookie collision between Horizon Stacktach

2014-10-31 Thread Gabriel Hurley
I have no familiarity with stacktach, but it sounds like it's trampling data on 
the sessionid cookie (even if it's also setting a beaker.session.stacktach 
cookie).

Your options include running the two at different domains/subdomains (and 
specifying the subdomain as the cookie domain; that needs to be explicit), or 
you can change the Django cookie names using settings:

Session cookie name: 
https://docs.djangoproject.com/en/dev/ref/settings/#session-cookie-name
CSRF cookie name: 
https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-CSRF_COOKIE_NAME
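
A minimal sketch for Horizon's local_settings.py (the names below are
illustrative; anything that doesn't collide with the other app will do):

    # Rename Horizon's cookies so another app on the same host cannot
    # trample them:
    SESSION_COOKIE_NAME = 'horizon_sessionid'
    CSRF_COOKIE_NAME = 'horizon_csrftoken'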

It doesn't sound like you had a CSRF cookie problem, though. It is expected
behavior that if you clear your cookies and don't revisit the login page to
get a new CSRF token, form POSTs will fail.

- Gabriel

-Original Message-
From: Aaron Sahlin [mailto:asah...@linux.vnet.ibm.com] 
Sent: Friday, October 31, 2014 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Horizon] Cookie collision between Horizon  Stacktach

I was posed this question, but am not familiar with Horizon or StackTach 
cookie management.  Anyone know what the issue might be?

Issue: Logging into one site logs you out of the other. (horizon/stacktach)

First I open horizon and notice there are two cookies: csrftoken
(horizon) and sessionid. I log into Horizon, then open up a new tab and log 
into stacktach (same domain, different port). After logging into stacktach, 
there's another cookie created named beaker.session.stacktach.  I go back to 
the horizon dashboard and get logged off after clicking anything. After trying 
to log back in, this error comes up: Your Web browser doesn't appear to have 
cookies enabled. Cookies are required for logging in. I then clear the cookies 
and am able to log in, but see this error message: Forbidden (403) CSRF 
verification failed. Request aborted. I go back to the Horizon log in page, 
finally log in, go to stacktach tab and am logged out of that.

Note that stacktach is at a separate port on the controller and uses beaker to 
create the cookie session. I've read that cookies aren't port-specific on the
same domain name, but should still work with different cookie names. I've also
tried changing the paths on the stacktach urls, but no luck there either.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-31 Thread Ian Wells
Maruti's talk is, in fact, so interesting that we should probably get
together and talk about this earlier in the week.  I very much want to see
virtual-physical programmatic bridging, and I know Kevin Benton is also
interested.  Arguably the MPLS VPN stuff also is similar in scope.  Can I
propose we have a meeting on cloud edge functionality?
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-31 Thread Veiga, Anthony
I’ll +1 this.  I think this is going to be relevant to quite a few things, 
including NFV and routing (if you want to establish an L2 neighbor for IS-IS…).
This seems like a useful feature.
-Anthony

Maruti's talk is, in fact, so interesting that we should probably get together 
and talk about this earlier in the week.  I very much want to see 
virtual-physical programmatic bridging, and I know Kevin Benton is also 
interested.  Arguably the MPLS VPN stuff also is similar in scope.  Can I 
propose we have a meeting on cloud edge functionality?
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-31 Thread Robert Li (baoli)


On 10/28/14, 11:01 AM, Daniel P. Berrange berra...@redhat.com wrote:

On Tue, Oct 28, 2014 at 10:18:37AM -0400, Jay Pipes wrote:
 On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
 One option would be a more CSV-like syntax, e.g.
 
 pci_passthrough_whitelist =
address=*0a:00.*,physical_network=physnet1
 pci_passthrough_whitelist = vendor_id=1137,product_id=0071
 
 But this gets confusing if we want to specify multiple sets of data,
 so we might need to use semicolons as the first separator and commas
 for list element separators
 
 pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2,
vendor_id=1137;product_id=0071
 
 What about this instead (with each being a MultiStrOpt, but no comma or
 semicolon delimiters needed...)?
 
 [pci_passthrough_whitelist]
 # Any Intel PRO/1000 F Server Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1

I think this is reasonable, though do we actually support setting
the same key twice?

As an alternative we could just append an index for each element
in the list, eg like this:

 [pci_passthrough_whitelist]
 rule_count=2

 # Any Intel PRO/1000 F Server Adapter
 vendor_id.0=8086
 product_id.0=1001
 address.0=*
 physical_network.0=*

 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id.1=1137
 product_id.1=0071
 address.1=*:0a:00.*
 physical_network.1=physnet1

Or like this:

 [pci_passthrough]
 whitelist_count=2

 [pci_passthrough_rule.0]
 # Any Intel PRO/1000 F Server Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*

 [pci_passthrough_rule.1]
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1

Yeah, the last format (copied in below) is a good idea (without the
section for the count) for handling a list of dictionaries. I've seen
similar config examples in neutron code.
[pci_passthrough_rule.0]
# Any Intel PRO/1000 F Server Adapter
vendor_id=8086
product_id=1001
address=*
physical_network=*

[pci_passthrough_rule.1]
# Cisco VIC SR-IOV VF only on specified address and physical network
vendor_id=1137
product_id=0071
address=*:0a:00.*
physical_network=physnet1

Without direct oslo support, implementing it requires a small method that
uses oslo cfg's MultiConfigParser(), roughly like the pattern below.
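
A sketch of that method (untested; the section names follow the example
above and the error handling is illustrative):

    from oslo.config import cfg

    def parse_whitelist_rules():
        # Collect every [pci_passthrough_rule.N] section from all config
        # files passed to the service into a list of dicts.
        multi_parser = cfg.MultiConfigParser()
        read_ok = multi_parser.read(cfg.CONF.config_file)
        if len(read_ok) != len(cfg.CONF.config_file):
            raise cfg.Error("unable to read some config files")
        rules = []
        for parsed_file in multi_parser.parsed:
            for section, options in parsed_file.items():
                if section.startswith('pci_passthrough_rule.'):
                    # each option value is a list of raw strings; take
                    # the first occurrence of each key
                    rules.append(dict((k, v[0])
                                      for k, v in options.items()))
        return rules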

Now a few questions if we want to do it in Kilo:
  - Do we still need to be backward compatible in configuring the
whitelist? If we do, then we still need to be able to handle the json
docstring (shown below for reference).
  - To support the new format in devstack, we can use a meta-section in
local.conf. How would we support the old format, which is still a json
docstring?  Is something like this
https://review.openstack.org/#/c/123599/ acceptable?
  - Do we allow old/new formats to coexist in the config file? Probably not.
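
For reference, the old-format whitelist in nova.conf is a json docstring
along these lines (quoted from memory, so treat it as a sketch):

    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "1001"}
    pci_passthrough_whitelist = {"address": "*:0a:00.*",
                                 "physical_network": "physnet1"}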



 Either that, or the YAML file that Sean suggested, would be my
preference...

I think it is nice to have it all in the same file, not least because it
will be easier for people supporting openstack in the field. ie in bug
reports we cna just ask for nova.conf and know we'll have all the user
config we care about in that one place.

Regards,
Daniel
-- 
|: http://berrange.com  -o-
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-
http://virt-manager.org :|
|: http://autobuild.org   -o-
http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-
http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Poll for change in weekly meeting time.

2014-10-31 Thread Nikhil Komawar
Hi all,

It was noticed in the past few meetings that the participation in the 
alternating time-slot of Thursday at 20 UTC (or called as later time slot) 
was low. With the growing interest in Glance of developers from eastern 
latitudes and their involvement in meetings, please find this email as a 
proposal to move all meetings to an earlier time-slot.

Here's a poll [0] to find what time-slots work best for everyone, as well as
to gauge interest in removing the alternating time-slot aspect of the schedule.

Please be empathetic in your votes: try to suggest all possible options that
would work for you, and note the changes in your timezone due to daylight
saving time ending. Please let me know if you have any more questions.

[0] http://doodle.com/nwc26k8satuyvvmz

Thanks,
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] APIImpact flag for specs

2014-10-31 Thread Everett Toews
Hi All,

Chris Yeoh started the use of an APIImpact flag in commit messages for specs in
Nova. It adds a requirement for an APIImpact flag in the commit message of a
proposed spec if it proposes changes to the REST API. This will make it much
easier for people such as the API Working Group, who want to review API changes
across OpenStack, to find proposed API changes.
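
For instance, a spec commit message using the flag might look roughly like
this (the feature and blueprint name here are made up):

    Add filtering by status to the widgets API

    This spec proposes a new status query parameter for list calls.

    APIImpact
    Implements: blueprint widget-status-filtering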

For example, specifications with the APIImpact flag can be found with the 
following query:

https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:apiimpact,n,z

Chris also proposed a similar change to many other projects and I did the rest. 
Here’s the complete list if you’d like to review them.

Barbican: https://review.openstack.org/131617
Ceilometer: https://review.openstack.org/131618
Cinder: https://review.openstack.org/131620
Designate: https://review.openstack.org/131621
Glance: https://review.openstack.org/131622
Heat: https://review.openstack.org/132338
Ironic: https://review.openstack.org/132340
Keystone: https://review.openstack.org/132303
Neutron: https://review.openstack.org/131623
Nova: https://review.openstack.org/#/c/129757
Sahara: https://review.openstack.org/132341
Swift: https://review.openstack.org/132342
Trove: https://review.openstack.org/132346
Zaqar: https://review.openstack.org/132348

There are even more projects in stackforge that could use a similar change. If 
you know of a project in stackforge that would benefit from using an APIImpact
flag in its specs, please propose the change and let us know here.

Thanks,
Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to add reviewers requirement to the glance-specs.

2014-10-31 Thread Nikhil Komawar
FYI, this has been made effective as of earlier this week. Please be on the
lookout for any mysteriously failing tests in your specs...

Thanks,
-Nikhil

From: Nikhil Komawar [nikhil.koma...@rackspace.com]
Sent: Tuesday, October 21, 2014 9:31 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Glance] Proposal to add reviewers requirement to the 
glance-specs.

Hi,

I would like to propose requiring 2 reviewers (at least one of them being a
Glance core) for a spec to be approved.

This proposal is a step toward better planning the development work and helping
the team prioritize reviews and features. The anticipation is that it will help
committers get regular feedback on the active feature-work being done. It will
also help us reduce the push for features to be merged very late in the cycle,
which currently keeps a wormhole open for bugs.

If there are no objections, we will implement this change for all the features 
sitting in the review queue of glance-specs. The approved blueprints would be 
discussed with core reviewers to get a sense of their availability.

Thanks,
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-31 Thread Mohammad Hanif
Hi Ian, all,

I would very much like that. Please let me know what time works for you. I will 
be in Paris all 5 days. 

Thanks,
Hanif. 

Sent from my iPhone

 On Oct 31, 2014, at 9:07 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 
 Maruti's talk is, in fact, so interesting that we should probably get 
 together and talk about this earlier in the week.  I very much want to see 
 virtual-physical programmatic bridging, and I know Kevin Benton is also 
 interested.  Arguably the MPLS VPN stuff also is similar in scope.  Can I 
 propose we have a meeting on cloud edge functionality?
 -- 
 Ian.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-31 Thread Mohammad Hanif
Hi Kyle,

Thanks a lot for organizing this. I heard from Tom Nadeau that you might not be 
coming to Paris. Hope all is well. 

Hanif. 

Sent from my iPhone

 On Oct 31, 2014, at 3:09 PM, Kyle Mestery mest...@mestery.com wrote:
 
 On Mon, Oct 27, 2014 at 8:16 PM, Kyle Mestery mest...@mestery.com wrote:
 On Thu, Oct 23, 2014 at 3:22 PM, Kyle Mestery mest...@mestery.com wrote:
 As discussed during the neutron-drivers meeting this week [1], we've
 going to use one of the Neutron 40 minute design summit slots for
 lightning talks. The basic idea is we will have 6 lightning talks,
 each 5 minutes long. We will force a 5 minute hard limit here. We'll
 do the lightning talk round first thing Thursday morning.
 
 To submit a lightning talk, please add it to the etherpad linked here
 [2]. I'll be collecting ideas until after the Neutron meeting on
 Monday, 10-27-2014. At that point, I'll take all the ideas and add
 them into a Survey Monkey form and we'll vote for which talks people
 want to see. The top 6 talks will get a lightning talk slot.
 
 I'm hoping the lightning talks allow people to discuss some ideas
 which didn't get summit time, and allow for even new contributors to
 discuss their ideas face to face with folks.
 As discussed in the weekly Neutron meeting, I've setup a Survey Monkey
 to determine which 6 talks will get a slot for the Neutron Lightning
 Talk track at the Design Summit. Please go here [1] and vote. I'll
 collect results until Thursday around 2300UTC or so, and then close
 the poll and the top 6 choices will get a 5 minute lightning talk.
 Thanks to all who voted for Lightning Talks! I've updated the etherpad
 [100] with the list of talks which got the most votes. I'm also
 copying them here for people who don't like clicking on links:
 
 MPLS VPN - Orchestrating inter-datacenter connectivity - [Mohammad Hanif]
 IPv6 as an Openstack network infrastructure - ijw (Ian Wells)
 L2GW support - abstraction and reference implementation for extending
 logical networks into physical networks [Maruti Kamat]
 servicevm framework(tacker project) and l3 router poc (yamahata)
 Verifying Neutron at 100-node scale (Rally, iperf) - [Ilya Shakhat]
 Tips on getting reviewers to block your changes - [Kevin Benton]
 
 I'm excited to see how these work and hope it proves useful for people.
 
 Safe travels to Paris to all!
 
 Kyle
 
 [100] https://etherpad.openstack.org/p/neutron-kilo-lightning-talks
 
 Thanks!
 Kyle
 
 [1] https://www.surveymonkey.com/s/RLTPBY6
 
 Thanks!
 Kyle
 
 [1] 
 http://eavesdrop.openstack.org/meetings/neutron_drivers/2014/neutron_drivers.2014-10-22-15.02.log.html
 [2] https://etherpad.openstack.org/p/neutron-kilo-lightning-talks
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, queues, consumption.

2014-10-31 Thread Denis Makogon
On Fri, Oct 31, 2014 at 7:49 PM, Tim Simpson tim.simp...@rackspace.com
wrote:

  Hi Denis,

  It seems like the issue you're trying to solve is that these 'prepare'
 messages can't be consumed by the guest.

Not only the 'prepare' call. I want to point out that any RPC API call that
uses the RPC 'cast' messaging type may remain forever inside the AMQP service.

 So, if the guest never actually comes online and therefore can't consume
 the prepare call, then you'll be left with the message in the queue
 forever.

 Yes, it still may fail, but at least we can be sure that when we want to
'cast' something to the guest it is alive, and the only way to check that it
is alive is to use the 'call' messaging type, because if the guest is down
for some reason the 'call' will fail as soon as possible.


  If you use a ping-pong message, you'll still be left with a stray
 message in the queue if it fails.

 Ok, let's discuss it. What do you think would give us confidence that the
guest is really up and ready to consume?
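
Something along these lines is what I have in mind (just a sketch; the
class name and the 10 second timeout are illustrative):

    from oslo import messaging

    class GuestClient(object):
        def __init__(self, transport, instance_id):
            target = messaging.Target(topic='guestagent.%s' % instance_id,
                                      version='1.0')
            self._client = messaging.RPCClient(transport, target)

        def ping(self, context):
            # 'call' blocks until the guest replies or the timeout
            # fires, so a dead guest fails fast instead of leaving a
            # stray message in the queue forever.
            cctxt = self._client.prepare(timeout=10)
            return cctxt.call(context, 'ping')

        def prepare(self, context, **kwargs):
            try:
                self.ping(context)
            except messaging.MessagingTimeout:
                raise RuntimeError('guest never came up, aborting')
            self._client.cast(context, 'prepare', **kwargs)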


  I think the best fix is if we delete the queue when deleting an
 instance. This way you'll never have more queues in rabbit than are needed.

 I do agree that it may work. But as for me, it'll be safer if the
taskmanager handles topic initialization (when an instance gets created)
and removal (when an instance gets deleted).


  Thanks,

  Tim



Best regards,
Denis M.

  --
 *From:* Denis Makogon [dmako...@mirantis.com]
 *Sent:* Friday, October 31, 2014 4:32 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Trove] Guest RPC API improvements. Messages,
 topics, queues, consumption.

Hello, Stackers/Trovers.



   I’d like to start a discussion about how we use the guestagent API; this
 will eventually be evaluated as a spec. Most of you who are familiar with
 Trove’s codebase know how Trove acts when provisioning a new instance.

 I’d like to point out the following:

 1. When we provision a new instance, we expect that the guest will create
 its topic/queue for RPC messaging needs.

 2. The taskmanager doesn’t validate that the guest is really up before
 sending the ‘prepare’ call.

   And here comes the problem: what if the guest wasn’t able to start
 properly and consume the ‘prepare’ message due to certain circumstances? In
 this case the ‘prepare’ message would never be consumed.


   Sergey Gotliv and I were looking for a proper solution for this case, and
 we ended up with the following requirements for the provisioning workflow:

 1. We must be sure that the ‘prepare’ message will be consumed by the guest.

 2. The taskmanager should handle topic/queue management for the guest.

 3. The guest just needs to consume incoming messages from the already
 existing topic/queue.

   As a concrete proposal (or at least a topic for discussion) I’d like to
 discuss the following improvements:

 We need to add a new guest RPC API call that represents a “ping-pong”
 action, so that before sending any cast- or call-type messages we can make
 sure that the guest is really running.


   Pros/cons of such a solution:

 1. The guest will only do consuming.

 2. The guest would not manage its topics/queues.

 3. We’ll be 100% sure that no messages would be lost.

 4. Fast failure during provisioning.

 5. Other minor/major improvements.



  Thoughts?


   P.S.: I’d like to discuss this topic during the upcoming Paris summit
 (during the contribution meetup on Friday).



  Best regards,

  Denis Makogon


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Poll for change in weekly meeting time.

2014-10-31 Thread Louis Taylor
Thanks for bringing this up, Nikhil.

On Fri, Oct 31, 2014, Nikhil Komawar wrote:
 Here's a poll [0] to find what time-slots work best for everyone, as well as
 to gauge interest in removing the alternating time-slot aspect of the schedule.

Could this poll be expanded with a wider range of times? I'm sure people will
be happy with some time after 17:00 UTC, but not midnight, for example.

Are all these timeslots available in #openstack-meeting*?

 Please be empathetic in your votes: try to suggest all possible options that
 would work for you, and note the changes in your timezone due to daylight
 saving time ending. Please let me know if you have any more questions.

I'm personally okay with the later time slot if it means some people can attend
who otherwise would not, but other commitments make that availability somewhat
sporadic.


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Ian Wells
On 31 October 2014 06:29, Erik Moe erik@ericsson.com wrote:





 I thought Monday’s network meeting agreed that “VLAN aware VMs”, Trunk
 network + L2GW were different use cases.



 Still I get the feeling that the proposals are put up against each other.


I think we agreed they were different, or at least the light was beginning
to dawn on the differences, but Maru's point was that if we really want to
decide what specs we have we need to show use cases not just for each spec
independently, but also include use cases where e.g. two specs are required
and the third doesn't help, so as to show that *all* of them are needed.
In fact, I suggest that first we do that - here - and then we meet up one
lunchtime and attack the specs in etherpad before submitting them.  In
theory we could have them reviewed and approved by the end of the week.
(This theory may not be very realistic, but it's good to set lofty goals,
my manager tells me.)

 Here are some examples why bridging between Neutron internal networks
 using trunk network and L2GW IMO should be avoided. I am still fine with
 bridging to external networks.



 Assuming VM with trunk port wants to use floating IP on specific VLAN.
 Router has to be created on a Neutron network behind L2GW since Neutron
 router cannot handle VLANs. (Maybe not too common use case, but just to
 show what kind of issues you can get into)

 neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID

 The code to check if valid port has to be able to traverse the L2GW.
 Handing of IP addresses of VM will most likely be affected since VM port is
 connected to several broadcast domains. Alternatively new API can be
 created.


Now, this is a very good argument for 'trunk ports', yes.  It's not
actually an argument against bridging between networks.  I think the
bridging case addresses use cases (generally NFV use cases) where you're
not interested in Openstack managing addresses - often because you're
forwarding traffic rather than being an endpoint, and/or you plan on
disabling all firewalling for speed reasons, but perhaps because you wish
to statically configure an address rather than use DHCP.  The point is
that, in the absence of a need for address-aware functions, you don't
really care much about ports, and in fact configuring ports with many
addresses may simply be overhead.  Also, as you say, this doesn't address
the external bridging use case where what you're bridging to is not
necessarily in Openstack's domain of control.

 In “VLAN aware VMs” the trunk port mac address has to be globally unique
 since it can be connected to any network, while other ports still only have
 to be unique per network. But for L2GW all mac addresses have to be globally
 unique since they might be bridged together at a later stage.


I'm not sure that that's particularly a problem - any VM with a port will
have one globally unique MAC address.  I wonder if I'm missing the point
here, though.

Also some implementations might not be able to take VID into account when
 doing mac address learning, forcing at least unique macs on a trunk network.


If an implementation struggles with VLANs then the logical thing to do
would be not to implement them in that driver.  Which is fine: I would
expect (for instance) LB-driver networking to work for this and leave
OVS-driver networking to never work for this, because there's little point
in fixing it.


  Benefits with “VLAN aware VMs” are integration with existing Neutron
 services.

 Benefits with Trunk networks are less consumption of Neutron networks,
 less management per VLAN.


Actually, the benefit of trunk networks is:

- if I use an infrastructure where all networks are trunks, I can find out
that a network is a trunk
- if I use an infrastructure where no networks are trunks, I can find out
that a network is not a trunk
- if I use an infrastructure where trunk networks are more expensive, my
operator can price accordingly

And, again, this is all entirely independent of either VLAN-aware ports or
L2GW blocks.

 Benefits with L2GW is ease to do network stitching.

 There are other benefits with the different proposals, the point is that
 it might be beneficial to have all solutions.


I totally agree with this.

So, use cases that come to mind:

1. I want to pass VLAN-encapped traffic from VM A to VM B.  I do not know
at network setup time what VLANs I will use.
case A: I'm simulating a network with routers in.  The router config is not
under my control, so I don't know addresses or the number of VLANs in use.
(Yes, this use case exists, search for 'Cisco VIRL'.)
case B: NFV scenarios where the VNF orchestrator decides how few or many
VLANs are used, where the endpoints may or may not be addressed, and where
the addresses are selected by the VNF manager.  (For instance, every time I
add a customer to a VNF service I create another VLAN on an internal link.
The orchestrator is intelligent and selects the VLAN; telling Openstack the
details is 

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Ian Wells
Go  read about HSRP and VRRP.  What you propose is akin to turning off one
physical switch port and turning on another when you want to switch from an
active physical server to a standby, and this is not how it's done in
practice; instead, you connect the two VMs to the same network and let them
decide which gets the primary address.

On 28 October 2014 10:27, A, Keshava keshav...@hp.com wrote:

  Hi Alan and  Salvatore,



 Thanks for response and I also agree we need to take small steps.

 However I have below points to make.



  It is very important how the Service VM will be deployed w.r.t. HA.

  As per the current discussion, you are proposing something like the below
 kind of deployment for Carrier Grade HA.

  Since there is a separate port for the Standby-VM as well, the
 corresponding standby-VM interface address should be globally routable too.

  That means it may require the standby routing protocol to advertise its
 interface as the next hop for the prefixes it routes.

  However, the external world should not be aware of the standby routing
 running in the network.









  Instead, if we can think of running the standby on the same stack with a
 passive port (as shown below), then the external world will be unaware of
 the standby Service Routing running.

  *This may be a very basic requirement from the Service-VM (NFV HA
 perspective) for the Routing/MPLS/Packet processing domain.*

  *I am bringing this issue up now because you are proposing to change the
 basic framework of packet delivery to VMs.*

  *(Of course there may be other mechanisms for supporting redundancy,
 however they will not be as efficient as handling it at the packet level.)*







 Thanks  regards,

 Keshava



 *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
 *Sent:* Tuesday, October 28, 2014 6:48 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 Hi Salvatore



 Inline below.



 *From:* Salvatore Orlando [mailto:sorla...@nicira.com
 sorla...@nicira.com]
 *Sent:* October-28-14 12:37 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 Keshava,



  I think the thread is now going a bit off its stated topic - which is to
 discuss the various proposed approaches to vlan trunking.

 Regarding your last post, I'm not sure I saw either spec implying that at
 the data plane level every instance attached to a trunk will be implemented
 as a different network stack.

  AK> Agree

 Also, quoting the principle earlier cited in this thread -  make the
 easy stuff easy and the hard stuff possible - I would say that unless five
 9s is a minimum requirement for a NFV application, we might start worrying
 about it once we have the bare minimum set of tools for allowing a NFV
 application over a neutron network.

  AK> Five 9’s is a 100% must requirement for NFV, but let’s ensure we don’t
 mix up what the underlay service needs to guarantee and what openstack
 needs to do to ensure this type of service. I would agree we should focus
 more on having the right configuration sets for onboarding NFV, which is
 what Openstack needs to ensure is exposed, than on what is used underneath
 to guarantee the 5 9’s, which is a separate matter.

 I think Ian has done a good job in explaining that while both approaches
 considered here address trunking for NFV use cases, they propose
 alternative implementations which can be leveraged in different way by NFV
 applications. I do not see now a reason for which we should not allow NFV
 apps to leverage a trunk network or create port-aware VLANs (or maybe you
 can even have VLAN aware ports which tap into a trunk network?)

  AK> Agree, I think we can hammer this out once and for all in
 Paris... this feature has been lingering too long.

 We may continue discussing the pros and cons of each approach - but to me
 it's now just a matter of choosing the best solution for exposing them at
 the API layer. At the control/data plane layer, it seems to me that trunk
 networks are pretty much straightforward. VLAN aware ports are instead a
 bit more convoluted, but not excessively complicated in my opinion.

  AK> My thinking too, Salvatore; let’s ensure the right elements are exposed
 at the API layer. I would also go a little further to ensure we get those
 feature sets supported in the Core API (another can of worms
 discussion, but we need to have it).

 Salvatore





 On 28 October 2014 11:55, A, Keshava keshav...@hp.com wrote:

 Hi,

 Pl find my reply ..





 Regards,

 keshava



 *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
 *Sent:* Tuesday, October 28, 2014 3:35 PM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 Hi

 Please find some additions to Ian and responses below.

 /Alan



 *From:* A, Keshava 

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Ian Wells
To address a point or two that Armando has raised here that weren't covered
in my other mail:

On 28 October 2014 11:00, Armando M. arma...@gmail.com wrote:

 - Core Neutron changes: what needs to happen to the core of Neutron, if
 anything, so that we can implement this NFV-enabling constructs
 successfully? Are there any changes to the core L2 API? Are there any
 changes required to the core framework (scheduling, policy, notifications,
 data model etc)?


In the L2 API, I think this involves
- adding a capability flag for trunking on networks and propagating that into
ML2's drivers (for what it's worth, this needs solving anyway; MTUs need
propagation as well)
- adding the trunk ports API and somehow implementing that in ML2

The L2GW block is in fact a new service, and a reference implementation can
be made with a namespace, independently of the L2 plugin.

- Add support to the existing plugin backends: the openvswitch reference
 implementation is an obvious candidate,


Actually, it isn't.  The LB reference implementation is the obvious
candidate.  Because of the way it's implemented, it's easiest if the OVS
implementation refuses to make trunk networks (and therefore couldn't use
an L2GW block), but that's fine: we need one reference implementation and
it doesn't have to be OVS.

OVS may still be suitable to show off a trunk port reference
implementation; trunk ports would need addressing in the L2 plugin (in that
they're VM-to-network connectivity, which falls under its responsibility).
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-31 Thread Jeremy Stanley
On 2014-10-31 13:50:25 +1300 (+1300), Robert Collins wrote:
 Actually no - because of pip install, we need to keep support up until
 we can tell folk we don't support them anymore, which is after the
 last point release, not at it. If we drop support earlier, we can't
 release the support dropping versions until the support period ends,
 and that means we'd have an unreleasable trunk, which is bad.

Well, the support-dropping versions of the servers will already be
long-released before the point releases for previous stable branches
peter out... but I get what you're saying. To mitigate we should
force new releases of all dependent clients/libs immediately prior
to dropping Python 2.6 support too.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-31 Thread Jeremy Stanley
On 2014-10-31 09:24:31 +1000 (+1000), Angus Salkeld wrote:
 +1 to this, with a term limit.

Notable that the Debian TC has been discussing term limits for
months now, and since DebConf they seem to have gotten much closer
to a concrete proposal[1] in the last week or so. Could be worth
watching for ideas on how our community might attempt to implement
something similar.

[1] https://lists.debian.org/debian-vote/2014/10/msg00281.html
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Poll for change in weekly meeting time.

2014-10-31 Thread Nikhil Komawar
Thanks for voting Louis!

Finding a spot in a channel is always tricky. If we get early votes, we can 
determine our position for this [0] proposal.

[0] http://osdir.com/ml/openstack-dev/2014-10/msg02119.html

-Nikhil


From: Louis Taylor [krag...@gmail.com]
Sent: Friday, October 31, 2014 6:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Poll for change in weekly meeting time.

Thanks for bringing this up, Nikhil.

On Fri, Oct 31, 2014, Nikhil Komawar wrote:
Here's a poll [0] to find what time-slots work best for everyone, as well as
to gauge interest in removing the alternating time-slot aspect of the schedule.

Could this poll be expanded with a wider range of times? I'm sure people will
be happy with some time after 17:00 UTC, but not midnight, for example.

Are all these timeslots available in #openstack-meeting*?

 Please be empathetic in your votes: try to suggest all possible options that
 would work for you, and note the changes in your timezone due to daylight
 saving time ending. Please let me know if you have any more questions.

I'm personally okay with the later time slot if it means some people can attend
who otherwise would not, but other commitments make that availability somewhat
sporadic.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev