[openstack-dev] [oslo] can't set rules in the common policy

2014-02-25 Thread Tian, Shuangtai
Hi, Stackers

   When I init an Enforcer class with rules, I find the rules get rewritten by 
the configured policy rules: because the policy file has been modified, 
the load-rules step always tries to load the rules from the cache or the 
configuration file when enforce() checks the policy, and 
it always overwrites the in-memory rules with the configured policy.
I think this problem also exists when we use set_rules to set rules before 
we use enforce to load rules for the first time.
Has anyone else met this problem, or am I using it the wrong way? I proposed a 
patch for this problem: https://review.openstack.org/#/c/72848/
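For readers who have not hit this: a minimal, self-contained model of the behaviour described above. This is illustrative Python only, not the actual oslo incubator code; the class and method names just loosely mirror it.

```python
# Illustrative model of the behaviour described above -- not the actual
# oslo incubator Enforcer; names only loosely mirror it.
class Enforcer:
    def __init__(self, rules=None):
        self.rules = rules or {}

    def _load_from_policy_file(self):
        # Stand-in for reading the configured policy.json from disk.
        return {"compute:get": "role:admin"}

    def set_rules(self, rules, overwrite=True):
        self.rules = rules if overwrite else {**self.rules, **rules}

    def enforce(self, rule):
        # The problem: every check reloads from the configured policy,
        # silently discarding rules set at init time or via set_rules().
        self.rules = self._load_from_policy_file()
        return rule in self.rules


enforcer = Enforcer(rules={"compute:get": "role:member"})
enforcer.enforce("compute:get")
print(enforcer.rules)  # the file's rules; the init-time rules are gone
```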


Best regards,
Tian, Shuangtai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [Tripleo][CI] check-tripleo outage

2014-02-25 Thread Monty Taylor
I'd like to say I think this is excellent. Not that things broke ... But that 
a) we're consuming from infra but not dying and b) we're producing a vector of 
feedback on OpenStack that we did not have as a community before.

On Feb 25, 2014 1:08 AM, Robert Collins robe...@robertcollins.net wrote:

 Today we had an outage of the tripleo test cloud :(. 

 tl;dr: 
 - we were down for 14 hours 
 - we don't know the fundamental cause 
 - infra were not inconvenienced - yaaay 
 - it's all ok now. 

 Read on for more information, what little we have. 

 We don't know exactly why it happened yet, but the control plane 
 dropped off the network. Console showed node still had a correct 
 networking configuration, including openflow rules and bridges. The 
 node was arpingable, and could arping out, but could not be pinged. 
 Tcpdump showed the node sending a ping reply on its raw ethernet 
 device, but other machines on the same LAN did not see the packet. 

 From syslog we can see 
 Feb 24 06:28:31 ci-overcloud-notcompute0-gxezgcvv4v2q kernel: 
 [1454708.543053] hpsa :06:00.0: cmd_alloc returned NULL! 
 events 

 around the time frame that the drop-off would have happened, but they 
 go back many hours before and after that. 

 After exhausting everything that came to mind we rebooted the machine, 
 which promptly spat an NMI trace into the console: 

 [1502354.552431]  [810fdf98] 
 rcu_eqs_enter_common.isra.43+0x208/0x220 
 [1502354.552491]  [810ff9ed] rcu_irq_exit+0x5d/0x90 
 [1502354.552549]  [81067670] irq_exit+0x80/0xc0 
 [1502354.552605]  [816f9605] smp_apic_timer_interrupt+0x45/0x60 
 [1502354.552665]  [816f7f9d] apic_timer_interrupt+0x6d/0x80 
 [1502354.552722]  EOI  NMI  [816e1384] ? panic+0x193/0x1d7 
 [1502354.552880]  [a02d18e5] hpwdt_pretimeout+0xe5/0xe5 [hpwdt] 
 [1502354.552939]  [816efc88] nmi_handle.isra.3+0x88/0x180 
 [1502354.552997]  [816eff11] do_nmi+0x191/0x330 
 [1502354.553053]  [816ef201] end_repeat_nmi+0x1e/0x2e 
 [1502354.553111]  [813d46c2] ? intel_idle+0xc2/0x120 
 [1502354.553168]  [813d46c2] ? intel_idle+0xc2/0x120 
 [1502354.553226]  [813d46c2] ? intel_idle+0xc2/0x120 
 [1502354.553282]  EOE  [8159fe90] cpuidle_enter_state+0x40/0xc0 
 [1502354.553408]  [8159ffd9] cpuidle_idle_call+0xc9/0x210 
 [1502354.553466]  [8101bafe] arch_cpu_idle+0xe/0x30 
 [1502354.553523]  [810b54c5] cpu_startup_entry+0xe5/0x280 
 [1502354.553581]  [816d64b7] rest_init+0x77/0x80 
 [1502354.553638]  [81d26ef7] start_kernel+0x40a/0x416 
 [1502354.553695]  [81d268f6] ? repair_env_string+0x5c/0x5c 
 [1502354.553753]  [81d26120] ? early_idt_handlers+0x120/0x120 
 [1502354.553812]  [81d265de] x86_64_start_reservations+0x2a/0x2c 
 [1502354.553871]  [81d266e8] x86_64_start_kernel+0x108/0x117 
 [1502354.553929] ---[ end trace 166b62e89aa1f54b ]--- 

 'yay'. After that, a power reset in the console brought it up OK; it just 
 needed a minor nudge to refresh its heat configuration and we were up 
 and running again. 

 For some reason, neutron decided to rename its agents at this point 
 and we had to remove and reattach the l3 agent before VM connectivity 
 was restored. 
 https://bugs.launchpad.net/tripleo/+bug/1284354 

 However, about 90 nodepool nodes were stuck in states like ACTIVE 
 deleting, and did not clear until we did a rolling restart of every 
 nova compute process. 
 https://bugs.launchpad.net/tripleo/+bug/1284356 

 Cheers, 
 Rob 

 -- 
 Robert Collins rbtcoll...@hp.com 
 Distinguished Technologist 
 HP Converged Cloud 

 ___ 
 OpenStack-Infra mailing list 
 openstack-in...@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] 1.13.0 release candidate

2014-02-25 Thread Thierry Carrez
Hi everyone,

A milestone-proposed branch was created for Swift in preparation for the
1.13.0 release in a few days.

Please test the proposed delivery to ensure no critical regression found
its way in. Release-critical fixes might be backported to the
milestone-proposed branch until final release, and will be tracked using
the 1.13.0 milestone targeting:

https://launchpad.net/swift/+milestone/1.13.0

You can find the candidate tarball at:
http://tarballs.openstack.org/swift/swift-milestone-proposed.tar.gz

You can also access the milestone-proposed branch directly at:
https://github.com/openstack/swift/tree/milestone-proposed

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][keystone] status of quota class

2014-02-25 Thread Mehdi Abaakouk
On Wed, Feb 19, 2014 at 10:27:38AM -0600, Kevin L. Mitchell wrote:
 On Wed, 2014-02-19 at 13:47 +0100, Mehdi Abaakouk wrote:
 
 Of course; anyone can propose a blueprint.  Who will you have work on
 the feature?
 
  ie: add a new API endpoint to set a quota_class to a project, store that
  into the db and change the quota engine to read the quota_class from the
  db instead of the RequestContext.
 
 Reading the quota class from the db sounds like a bad fit to me; this
 really feels like something that should be stored in Keystone, since
 it's authentication-related data.  Additionally, if the attribute is in
 Keystone, other services may take advantage of it.  The original goal of
 quota classes was to make it easier to update the quotas of a given
 tenant based on some criteria, such as the service level they've paid
 for; if a customer upgrades (or downgrades) their service level, their
 quotas should change to match.  This could be done by manually updating
 each quota that affects them, but a single change to a single attribute
 makes better sense.

Thanks for your comments,

That is exactly what I have understood and what I need, and I agree:
the keystone approach looks better to me too, but is perhaps
a bit more complicated to get accepted. This information is a kind of
metadata associated with a project and/or a domain, and it should be
returned with the token validation, like the service catalog.

I have found a not yet accepted blueprint on this subject:
https://blueprints.launchpad.net/keystone/+spec/service-metadata

The funny part is that the API example uses quota-class as the metadata key :)

But whatever the approach, if a solution is accepted,
I'm sure I can find people to work on it.


Regards, 
-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-25 Thread Khanh-Toan Tran
  I could do that but I think I need to be able to scale more without
  the need to use this much resources. I will like to simulate a cloud
  of 100 maybe
  1000 compute nodes that do nothing (Fake driver) this should not take
  this much memory. Anyone knows of a more efficient way to  simulate
  many computes? I was thinking changing the Fake driver to report many
  compute services in different threads instead of having to spawn a
  process per compute service. Any other ideas?

I'm not sure using threads is a good idea. We need a dedicated resource
pool for each compute. If the threads share the same resource pool, then
every new VM will change the available resources on all computes, which
may lead to unexpected and unpredictable scheduling results. For instance,
RamWeigher may return the same compute twice instead of spreading, because
each time it finds that the computes have the same free_ram.
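The spreading failure described above can be seen with a toy weigher. This is a hedged sketch with invented numbers, not nova's actual RamWeigher code:

```python
# Toy illustration of the point above (invented numbers; not nova code).
# A RamWeigher-style scheduler spreads by picking the host with the most
# free RAM. With per-host pools, placements alternate; with one shared
# pool all hosts always tie, so max() keeps returning the first host.

def place(hosts, count=4):
    # hosts: name -> pool dict; the pool holds the free RAM counter.
    placements = []
    for _ in range(count):
        winner = max(hosts, key=lambda name: hosts[name]["free"])
        hosts[winner]["free"] -= 1024  # the new VM consumes RAM
        placements.append(winner)
    return placements

# Dedicated pools: spreading works as intended.
print(place({"c1": {"free": 4096}, "c2": {"free": 4096}}))
# -> ['c1', 'c2', 'c1', 'c2']

# Shared pool: every fake compute reports from the same counter, all
# hosts always tie, and max() keeps returning the first -- no spreading.
pool = {"free": 4096}
print(place({"c1": pool, "c2": pool}))
# -> ['c1', 'c1', 'c1', 'c1']
```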

Running nova-compute inside LXC, I created 100 computes per physical host.
Here is what I did; it's very simple:
 - Create an LXC container backed by a logical volume
 - Install a fake nova-compute inside the container
 - Add a boot script that modifies its nova.conf to use its own IP
address and starts nova-compute
 - Using the container above as the master, clone as many computes as you like!

(Note that when cloning the container, nova.conf is copied with the
master's IP address; that's why we need the boot script.)
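The boot script's nova.conf fix-up could be as small as the following. This is a hedged sketch: the my_ip option name and the config layout are assumptions, and it is demonstrated on a temporary file rather than a real /etc/nova/nova.conf.

```python
# Hedged sketch of the boot script step above: rewrite my_ip in the
# cloned container's nova.conf (option name and layout are assumptions).
import re
import tempfile

def fix_my_ip(conf_path, new_ip):
    with open(conf_path) as f:
        conf = f.read()
    # Replace the master's address with the clone's own address.
    conf = re.sub(r"(?m)^my_ip\s*=.*$", "my_ip = %s" % new_ip, conf)
    with open(conf_path, "w") as f:
        f.write(conf)

# Demonstrate on a temporary file standing in for /etc/nova/nova.conf.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[DEFAULT]\nmy_ip = 10.0.0.5\nverbose = True\n")
    conf_file = f.name

fix_my_ip(conf_file, "10.0.0.42")
print(open(conf_file).read())
```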

Best regards,

Toan


 -----Original Message-----
 From: David Peraza [mailto:david_per...@persistentsys.com]
 Sent: Monday, 24 February 2014 21:13
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute
nodes
 for scheduler testing

 Thanks John,

 I also think it is a good idea to test the algorithm at the unit test
 level, but I would like to try it out over amqp as well, that is, with
 processes and threads talking to each other over rabbit or qpid. I'm
 also trying to test performance.

 Regards,
 David Peraza

 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Monday, February 24, 2014 11:51 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute
nodes
 for scheduler testing

 On 24 February 2014 16:24, David Peraza david_per...@persistentsys.com
 wrote:
  Hello all,
 
  I have been trying some new ideas on scheduler and I think I'm
  reaching a resource issue. I'm running 6 compute service right on my 4
  CPU 4 Gig VM, and I started to get some memory allocation issues.
  Keystone and Nova are already complaining there is not enough memory.
  The obvious solution to add more candidates is to get another VM and
set
 another 6 Fake compute service.
  I could do that but I think I need to be able to scale more without
  the need to use this much resources. I will like to simulate a cloud
  of 100 maybe
  1000 compute nodes that do nothing (Fake driver) this should not take
  this much memory. Anyone knows of a more efficient way to  simulate
  many computes? I was thinking changing the Fake driver to report many
  compute services in different threads instead of having to spawn a
  process per compute service. Any other ideas?

 It depends what you want to test, but I was able to look at tuning the
filters and
 weights using the test at the end of this file:

 https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_caching_scheduler.py

 Cheers,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 DISCLAIMER
 ==========
 This e-mail may contain privileged and confidential information which is
 the property of Persistent Systems Ltd. It is intended only for the use
 of the individual or entity to which it is addressed. If you are not the
 intended recipient, you are not authorized to read, retain, copy, print,
 distribute or use this message. If you have received this communication
 in error, please notify the sender and delete all copies of this
 message. Persistent Systems Ltd. does not accept any liability for virus
 infected mails.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-25 Thread Alexander Tivelkov
Hi Keith,

The question with Heat plugins is a good one. It actually has several
different answers.

The most important one is that the actual implementation of this approach
may be considered as out of scope for Murano. Think of Murano as an
application integration engine, which allows existing application to
communicate with each other and with the openstack infrastructure via
multiple interfaces. The possibility of this interaction does matter a lot,
while the actual implementation of the interfaces is much less important.
Application Publishers should be able to execute something like
'instance.takeSnapshot()' or
'databaseInstance.Migrate(anotherDatabaseInstance)' from the DSL-code of
their workflows, while the actual implementation of these actions may be
different for different classes. Murano will definitely come bundled with
the support of Heat templates as its primary OpenStack interface, so, if an
application publisher decides to execute some specific action via a custom
heat plugin, Murano will provide all means to use this plugin
out-of-the-box.
So, in general, the plugin-approach is perfectly fine, it already fits into
Murano, and that is intentional: our ultimate goal is to integrate Murano
into the existing OpenStack infrastructure as much as possible. The final
decision on which approach to choose is up to the publisher of the specific
application, though.

However, in some cases relying on plugins seems to be a bad idea. When
you create a custom plugin for a custom task, you have to install the plugin
into Heat at the target cloud first. And this contradicts the whole
purpose of an application catalog, where all the needed dependencies of the
application are placed into its package by the publisher, and once the
package is put into the catalog it is immediately available for use. We
cannot allow plugins to be put into packages, as plugins are actually
written in python, and allowing untrusted users to upload and run arbitrary
python code is definitely a security breach.
So, plugins are ok for the most common and popular tasks (but in this case
these types of resources should eventually be placed into Heat itself,
right?), but may be a bad choice for custom tasks.

And the last but not least, the thing which Georgy has already mentioned: I
personally feel a bit uncomfortable when trying to think about the
processes and actions as objects persisted in a Heat stack. These are
different kinds of entities, and in my opinion events are dynamic and
transient, while the Heat stack is stable and persisted. Events can modify
the stack - and that is why we need the workflows - but they should not be
persisted on their own.  I don't know how to express it in a more formal
way, and that is just my gut feeling which definitely has to be discussed
more. I am sure we will come back to this topic when the DSL is finished
and we start implementing the base class library for Murano and its
external interfaces with Heat and other services.

Thanks!

--
Regards,

Alexander Tivelkov


On Tue, Feb 25, 2014 at 1:44 AM, Keith Bray keith.b...@rackspace.com wrote:

  Have you considered writing Heat resource plug-ins that perform (or
 configure within other services) instance snapshots, backups, or whatever
 other maintenance workflow possibilities you want that don't exist?  Then
 these maintenance workflows you mention could be expressed in the Heat
 template forming a single place for the application architecture
 definition, including defining the configuration for services that need to
 be application aware throughout the application's life.  As you describe
 things in Murano, I interpret that you are layering application
 architecture specific information and workflows into a DSL in a layer above
 Heat, which means information pertinent to the application as an ongoing
 concern would be disjoint.  Fragmenting the necessary information to wholly
 define an infrastructure/application architecture could make it difficult
 to share the application and modify the application stack.

  I would be interested in a library that allows for composing Heat
 templates from snippets or fragments of pre-written Heat DSL... The
 library's job could be to ensure that the snippets, when combined, create a
 valid Heat template free from conflict amongst resources, parameters, and
 outputs.  The interaction with the library, I think, would belong in
 Horizon, and the Application Catalog and/or Snippets Catalog could be
 implemented within Glance.
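 At its core, the composition library suggested above might be little more
 than a conflict-checking merge. A hedged sketch, with invented fragment
 contents, might look like:

```python
# Hedged sketch of the snippet-composition idea above: merge template
# fragments, refusing any combination whose resources, parameters, or
# outputs collide. Fragment contents below are invented examples; a real
# library would also validate cross-fragment references.

def compose(*fragments):
    merged = {"parameters": {}, "resources": {}, "outputs": {}}
    for fragment in fragments:
        for section in merged:
            for name, body in fragment.get(section, {}).items():
                if name in merged[section]:
                    raise ValueError("conflicting %s: %s" % (section, name))
                merged[section][name] = body
    return merged

web = {"resources": {"web_server": {"type": "OS::Nova::Server"}}}
db = {"resources": {"db_server": {"type": "OS::Nova::Server"}},
      "outputs": {"db_host": {"value": "placeholder"}}}

stack = compose(web, db)
print(sorted(stack["resources"]))  # ['db_server', 'web_server']
```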

  Also, there may be workflow steps which are not covered by Heat by
 design. For example, application publisher may include creating instance
 snapshots, data migrations, backups etc into the deployment or maintenance
 workflows. I don't see how these may be done by Heat, while Murano should
 definitely support these scenarios. 

   From: Alexander Tivelkov ativel...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 

Re: [openstack-dev] [Horizon] Live migration

2014-02-25 Thread Matthias Runge
On Mon, Feb 24, 2014 at 06:08:56PM -0800, Dmitry Borodaenko wrote:
 Dear Horizon developers,
 
 I think that the blueprint to add live migrations support to
 Horizon[0] was incorrectly labeled as a duplicate of the earlier
 migrate-instance blueprint[1].
 
 [0] https://blueprints.launchpad.net/horizon/+spec/live-migration
 [1] https://blueprints.launchpad.net/horizon/+spec/migrate-instance
I think your [0] is a duplicate of [2], which was implemented during Icehouse.

Matthias
[2]
https://blueprints.launchpad.net/horizon/+spec/live-migration-support
-- 
Matthias Runge mru...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Christopher Yeoh
On Mon, 24 Feb 2014 21:15:30 -0500
Russell Bryant rbry...@redhat.com wrote:

 CC'ing the openstack-operators mailing list to get a wider set of
 feedback on this question.
 
 On 02/24/2014 05:26 PM, Christopher Yeoh wrote:
  1) Continue as we have been, and plan to release v3 once we have a
  compelling enough feature set.
  
 So I think we should release in Juno even if it's only with tasks and
  nova-network added. Because this allows new users to start using the
  API immediately rather than having to code against V2 (with its
  extra barriers to use) and then take the hit to upgrade later.
 
 OK, let's go a bit further with the case of marking the v3 API stable
 in Juno.  If we did that, what is a reasonable timeframe of v2 being
 deprecated before it could be removed?

So this might be a lot more complicated to answer, but I'd also be
interested in how much of the v2 API people actually use in practice
(both users and deployers). I suspect there's bits that are either
never or rarely used that we could perhaps deprecate earlier which
would reduce the test/maintenance load. quota-classes is an example
which has been in since early 2012 and we only recently realised that it
doesn't actually do anything useful and so removed it.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-02-25 Thread Thierry Carrez
Georgy Okrokvertskhov wrote:
 [...]
 As you can see this is a complicated topic with a number of possible
 solutions. What the Murano team is seeking is to get feedback from the
 community and the TC on the most appropriate way to structure the governance
 model for the project.

And that should make for an interesting discussion. Like you say it's a
complicated topic, and we need a coherent and integrated solution in the
end. I think there are two unique challenges here.

The first is that Murano looks more like a complete solution (rather
than an infrastructure piece), which makes it span features that we
currently find in separate programs (user-facing catalog/discovery,
workload lifecycle management). My point in my earlier email is that we
could maybe bring that new functionality to OpenStack as a whole without
necessarily introducing a new component, by breaking those features into
the existing projects. Because in the end, OpenStack is not a collection
of overlapping products, it's a set of complementary components.

The second challenge is that we only started to explore the space of
workload lifecycle management, with what looks like slightly overlapping
solutions (Heat, Murano, Solum, and the openstack-compatible PaaS
options out there), and it might be difficult, or too early, to pick a
winning complementary set.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread John Garbutt
On 24 February 2014 18:11, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


 On 2/24/2014 10:13 AM, Russell Bryant wrote:

 On 02/24/2014 01:50 AM, Christopher Yeoh wrote:

 Hi,

 There has recently been some speculation around the V3 API and whether
 we should go forward with it or instead backport many of the changes
 to the V2 API. I believe that the core of the concern is the extra
 maintenance and test burden that supporting two APIs means and the
 length of time before we are able to deprecate the V2 API and return
 to maintaining only one (well two including EC2) API again.


 Yes, this is a major concern.  It has taken an enormous amount of work
 to get to where we are, and v3 isn't done.  It's a good time to
 re-evaluate whether we are on the right path.

 The more I think about it, the more I think that our absolute top goal
 should be to maintain a stable API for as long as we can reasonably do
 so.  I believe that's what is best for our users.  I think if you gave
 people a choice, they would prefer an inconsistent API that works for
 years over dealing with non-backwards compatible jumps to get a nicer
 looking one.

 The v3 API and its unit tests are roughly 25k lines of code.  This also
 doesn't include the changes necessary in novaclient or tempest.  That's
 just *our* code.  It explodes out from there into every SDK, and then
 end user apps.  This should not be taken lightly.

 This email is rather long so here's the TL;DR version:

 - We want to make backwards incompatible changes to the API
and whether we do it in-place with V2 or by releasing V3
we'll have some form of dual API support burden.
- Not making backwards incompatible changes means:
  - retaining an inconsistent API


 I actually think this isn't so bad, as discussed above.

  - not being able to fix numerous input validation issues


 I'm not convinced, actually.  Surely we can do a lot of cleanup here.
 Perhaps you have some examples of what we couldn't do in the existing API?

 If it's a case of wanting to be more strict, some would argue that the
 current behavior isn't so bad (see robustness principle [1]):

  Be conservative in what you do, be liberal in what you accept from
  others (often reworded as Be conservative in what you send, be
  liberal in what you accept).

 There's a decent counter argument to this, too.  However, I still fall
 back on it being best to just not break existing clients above all else.

  - have to forever proxy for glance/cinder/neutron with all
the problems that entails.


 I don't think I'm as bothered by the proxying as others are.  Perhaps
 it's not architecturally pretty, but it's worth it to maintain
 compatibility for our users.


 +1 to this, I think this is also related to what Jay Pipes is saying in his
 reply:


 Whether a provider chooses to, for example,
 deploy with nova-network or Neutron, or Xen vs. KVM, or support block
 migration for that matter *should have no effect on the public API*. The
 fact that those choices currently *do* effect the public API that is
 consumed by the client is a major indication of the weakness of the API.

 As a consumer, I don't want to have to know which V2 APIs work and which
 don't depending on if I'm using nova-network or Neutron.

Agreed, I thought that's why we are doing the proxying to neutron in
v2. We can't drop that.


- Backporting V3 infrastructure changes to V2 would be a
  considerable amount of programmer/review time


 Agreed, but so is the ongoing maintenance and development of v3.


 - The V3 API as-is has:
- lower maintenance
- is easier to understand and use (consistent).
- Much better input validation which is baked-in (json-schema)
  rather than ad-hoc and incomplete.


 So here's the rub ... with the exception of the consistency bits, none
 of this is visible to users, which makes me think we should be able to
 do all of this on v2.

 - Whilst we have existing users of the API we also have a lot more
users in the future. It would be much better to allow them to use
the API we want to get to as soon as possible, rather than trying
to evolve the V2 API and forcing them along the transition that they
could otherwise avoid.


 I'm not sure I understand this.  A key point is that I think any
 evolving of the V2 API has to be backwards compatible, so there's no
 forcing them along involved.

 - We already have feature parity for the V3 API (nova-network being
the exception due to the very recent unfreezing of it), novaclient
support, and a reasonable transition path for V2 users.

 - Proposed way forward:
- Release the V3 API in Juno with nova-network and tasks support
- Feature freeze the V2 API when the V3 API is released
  - Set the timeline for deprecation of V2 so users have a lot
of warning
  - Fallback for those who really don't want to move after
deprecation is an API service which translates between 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread John Garbutt
On 25 February 2014 06:11, Christopher Yeoh cbky...@gmail.com wrote:
 On Mon, 24 Feb 2014 17:37:04 -0800
 Dan Smith d...@danplanet.com wrote:

  onSharedStorage = True
  on_shared_storage = False

 This is a good example. I'm not sure it's worth breaking users _or_
 introducing a new microversion for something like this. This is
 definitely what I would call a purity concern as opposed to
 usability.

I thought micro versioning was so we could make backwards compatible changes.
If we make breaking changes we need to support the old and the new for
a little while.
I am tempted to say the breaking changes just create a new extension,
but there are other ways...

For return values:
* get new clients to send Accept headers, to version the response
* this amounts to the major version
* those requesting the new format get the new format
* those requesting the old format get the old format

For this case, on requests:
* we can accept both formats, or maybe that also depends on the
Accept headers (which is a bit funky, granted)
* only document the new one
* maybe in two years remove the old format? maybe never?

Same for URLs: we could have the old and new names, with the new URL
always returning the new format (think instance_actions ->
server_actions).

If the code only differs in presentation, that implies much less
double testing than two full versions of the API. It seems like we
could get some of these clean-ups in, and keep the old version, with
relatively few changes.
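A rough sketch of what that negotiation could look like -- the header value and field names here are assumptions, not an agreed Nova design:

```python
# Rough sketch of the Accept-header idea above (header value and field
# names are assumptions, not an agreed Nova design): one handler, two
# presentations of the same data.

def show_server(accept="application/json"):
    data = {"onSharedStorage": True}  # legacy camelCase presentation
    if "version=3" in accept:
        # Clients opting in get the consistent snake_case presentation.
        return {"on_shared_storage": data["onSharedStorage"]}
    return data

print(show_server())                              # legacy clients
print(show_server("application/json;version=3"))  # opted-in clients
```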

We could port the V2 classes over to the V3 code, to get the code benefits.

Return codes are a bit harder, it seems odd to change those based on
Accepts headers, but maybe I could live with that.


Maybe this is the code mess we were trying to avoid, but I feel we
should at least see how bad this kind of approach would look?


 If it was just one case it wouldn't matter but when we're inconsistent
 across the whole API it is a usability issue because it makes it so
 much harder for a user of the API to learn it. They may for example
 remember that they need to pass a server id, but they also have to
 remember for a particular call whether it should be server_id,
 instance_uuid, or id. So referring to the documentation (assuming it is
 present and correct) becomes required even after using the API for an
 extended period of time. It also makes it much more error prone -
 simple typos are much less likely to be picked up by reviewers.

 Imagine we had to use a python library where sometimes the method and
 parameter names were in snake_case, others CamelCase. Sometimes a mix
 of the two in the same call. Sometimes it would refer to a widget as
 widget and other times you had to refer to it as thingy or the call
 failed. And if you passed the wrong parameters in it would sometimes
 just quietly ignore the bad ones and proceed as if everything was ok.

 Oh and other times it returned saying it had done the work you asked it
 to, when it really it meant I'll look at it, but it might not be able
 to (more on this below). I think most developers and reviewers would be
 banging their heads on their desks after a while.

I agree its a mess.

But rather than fork the code, can we find a better way of supporting
the old and new versions on a single (ideally cleaner) code base?

So users can migrate in their own timeframe, and we don't get a
complete maintenance nightmare in the process.

 Things like the twenty different datetime formats we expose _do_ seem
 worth the change to me as it requires the client to parse a bunch of
 different formats depending on the situation. However, we could solve
 that with very little code by just exposing all the datetimes again in
 proper format:

  {
   updated_at: %(random_weirdo)s,
   updated_at_iso: %(isotime)s,
  }

 Doing the above is backwards compatible and doesn't create code
 organizations based on any sort of pasta metaphor. If we introduce a
 discoverable version tag so the client knows if they will be
 available, I think we're good.

 Except we also now need to handle the case where both are passed in and
 end up disagreeing. And what about the user confusion where they see in
 most cases updated_at means one thing so they start assuming that it
 always means that, meaning they then get it wrong in the odd case out.
 Again, harder to code against, harder to review and is the unfortunate
 side effect of being too lax in what we accept.

I think Accept headers might help.

 URL inconsistencies seem not worth the trouble and I tend to think
 that the server vs. instance distinction probably isn't either,
 but I guess I'm willing to consider it.

 So again I think it comes down consistency increases usability - eg
 knowing that if you want to operate on a foo that you always access
 it through /foo rather than most of the time except for those cases when
 someone (almost certainly accidentally) ended up writing an interface
 where you modify a foo through /bar. The latter makes it much 

[openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-25 Thread Roman Podoliaka
Hi all,

[1] made it possible for the openstack_citest MySQL user to create new
databases on demand in tests (which is very useful for running tests in
parallel on MySQL and PostgreSQL; thank you, guys!).

Unfortunately, the openstack_citest user can only create tables in the
newly created databases; it cannot perform SELECT/UPDATE/INSERT queries.
Please see the bug [2] filed by Joshua Harlow.

In PostgreSQL the user who creates a database becomes the owner of
the database (and can do everything within it), while in
MySQL we have to GRANT those privileges explicitly. But
openstack_citest doesn't have permission to do GRANT (even on its
own databases).

I think we could overcome this issue by doing something like this
while provisioning a node:
GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
'openstack_citest'@'localhost';

and then create databases giving them names starting with the prefix value.

Is it an acceptable solution? Or am I missing something?
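For illustration, here is a rough sketch of the proposal. The prefix value and helper names are made up for this example and are not the real CI configuration:

```python
# Sketch of the proposed approach: grant privileges on a wildcard
# database pattern once at provisioning time, then have tests create
# databases whose names start with that prefix.

PREFIX = "ci_test_"

def provisioning_grant(user="openstack_citest", host="localhost"):
    # In MySQL, "\_" escapes the underscore so it is matched literally
    # rather than as a single-character wildcard.
    pattern = PREFIX.replace("_", r"\_") + "%"
    return "GRANT ALL ON `%s`.* TO '%s'@'%s';" % (pattern, user, host)

def create_database_sql(name):
    if not name.startswith(PREFIX):
        raise ValueError("test databases must use the granted prefix")
    return "CREATE DATABASE `%s`;" % name

print(provisioning_grant())
# GRANT ALL ON `ci\_test\_%`.* TO 'openstack_citest'@'localhost';
print(create_database_sql(PREFIX + "nova_migrations"))
```

The grant runs once per node; after that the tests can create and fully use any database matching the prefix without further GRANT rights.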

Thanks,
Roman

[1] https://review.openstack.org/#/c/69519/
[2] https://bugs.launchpad.net/openstack-ci/+bug/1284320

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread John Garbutt
On 25 February 2014 09:44, Christopher Yeoh cbky...@gmail.com wrote:
 On Mon, 24 Feb 2014 21:15:30 -0500
 Russell Bryant rbry...@redhat.com wrote:

 CC'ing the openstack-operators mailing list to get a wider set of
 feedback on this question.

 On 02/24/2014 05:26 PM, Christopher Yeoh wrote:
  1) Continue as we have been, and plan to release v3 once we have a
  compelling enough feature set.
 
  So I think we should release in Juno even if its only with tasks and
  nova-network added. Because this allows new users to start using the
  API immediately rather than having to code against V2 (with its
  extra barriers to use) and then take the hit to upgrade later.

 OK, let's go a bit further with the case of marking the v3 API stable
 in Juno.  If we did that, what is a reasonable timeframe of v2 being
 deprecated before it could be removed?

 So this might be a lot more complicated to answer, but I'd also be
 interested in how much of the v2 API people actually use in practice
 (both users and deployers). I suspect there's bits that are either
 never or rarely used that we could perhaps deprecate earlier which
 would reduce the test/maintenance load. quota-classes is an example
 which has been in since early 2012 and we only recently realised that it
 doesn't actually do anything useful and so removed it.

I think this is the big question.

I could see a chance of removing v2 after two years, maybe three
years. But two years is a long time to keep the current two API code
bases, which are very similar but a little bit different.

I think we need to find an alternative way to support the new and old
formats, like Accept headers, and retro-fitting a version to
extensions so we can easily advertise new attributes to those parsers
that would otherwise break when they encounter those kinds of things.

Now I am tempted to say we morph the V3 code to also produce the V2
responses, and change the v3 API so that's easier to do, and easier
for clients to move (like don't change URLs unless we really have to).
I know the risk of screwing that up is enormous, but maybe that makes
the most sense?

John



Re: [openstack-dev] [Nova][VMWare] VMware VM snapshot

2014-02-25 Thread John Garbutt
On 25 February 2014 09:27, Qin Zhao chaoc...@gmail.com wrote:
 Hi,

 One simple question about VCenter driver. I feel the VM snapshot function of
 VCenter is very useful and is loved by VCenter users. Has anybody thought
 about letting the VCenter driver support it?

It depends if that can be modelled well with the current
Nova/Cinder/Glance primitives.

If you do boot from volume, and you see the volume snapshots, and they
behave how cinder expects, and you can model that snapshot as an image
in glance that you can boot new instances from, then maybe it would
work just fine. But we need to take care not to bend the current API
primitives too far out of place.

I remember there being some talk about this at the last summit. How did that go?

John



Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-25 Thread John Garbutt
On 24 February 2014 20:13, David Peraza david_per...@persistentsys.com wrote:
 Thanks John,

 I also think it is a good idea to test the algorithm at the unit test level,
 but I would like to try it out over AMQP as well, that is, with processes
 and threads talking to each other over rabbit or qpid. I'm trying to test
 out performance as well.


Nothing beats testing the thing for real, of course.

As a heads up, the overheads of DB calls turned out to dwarf any
algorithmic improvements I managed. There will clearly be some RPC
overhead, but it didn't stand out as much as the DB issue.

The move to conductor work should certainly stop the scheduler making
those pesky DB calls to update the nova instance. And then,
improvements like no-db-scheduler and improvements to scheduling
algorithms should shine through much more.
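To make the point concrete, here is a toy comparison. The numbers are made up (the "DB call" is simulated with a 1 ms sleep); only the shape of the comparison matters:

```python
import time

# Toy illustration: even a cheap per-instance "DB call" dominates the
# cost of filtering a few hundred in-memory host states, which is why
# DB overhead can dwarf algorithmic improvements in the scheduler.

hosts = [{"host": "node%d" % i, "free_ram_mb": (i * 37) % 4096}
         for i in range(300)]

start = time.perf_counter()
candidates = [h for h in hosts if h["free_ram_mb"] >= 1024]
filter_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(50):          # e.g. one instance-update call per request
    time.sleep(0.001)        # stand-in for a DB round trip
db_time = time.perf_counter() - start

print(filter_time < db_time)  # the simulated DB work dominates
```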

Thanks,
John


 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Monday, February 24, 2014 11:51 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute nodes 
 for scheduler testing

 On 24 February 2014 16:24, David Peraza david_per...@persistentsys.com 
 wrote:
 Hello all,

  I have been trying some new ideas on the scheduler and I think I'm
  reaching a resource issue. I'm running 6 compute services right on my 4
  CPU 4 Gig VM, and I started to get some memory allocation issues.
  Keystone and Nova are already complaining there is not enough memory.
  The obvious solution to add more candidates is to get another VM and set
  up another 6 fake compute services.
  I could do that, but I think I need to be able to scale more without
  using this many resources. I would like to simulate a cloud of 100,
  maybe 1000, compute nodes that do nothing (fake driver); this should not
  take this much memory. Does anyone know of a more efficient way to
  simulate many computes? I was thinking of changing the fake driver to
  report many compute services in different threads instead of having to
  spawn a process per compute service. Any other ideas?
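A minimal sketch of the threads-instead-of-processes idea described above. The service names and the reporting callback are invented for the example; a real version would plug into the fake virt driver and the service report loop:

```python
import threading
import time

# Sketch: run many fake compute "services" as threads in one process,
# each periodically reporting made-up resource stats.

def fake_compute(name, report, stop, interval=0.01):
    while True:
        report(name, {"vcpus": 4, "memory_mb": 8192, "local_gb": 80})
        if stop.wait(interval):   # returns True once stop is set
            break

reports = {}
lock = threading.Lock()

def record(name, stats):
    with lock:
        reports[name] = stats

stop = threading.Event()
threads = [threading.Thread(target=fake_compute,
                            args=("fake-compute-%d" % i, record, stop))
           for i in range(100)]
for t in threads:
    t.start()
time.sleep(0.05)
stop.set()
for t in threads:
    t.join()

print(len(reports))  # 100 fake nodes reported from a single process
```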

 It depends what you want to test, but I was able to look at tuning the 
 filters and weights using the test at the end of this file:
 https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_caching_scheduler.py

 Cheers,
 John




Re: [openstack-dev] Help a poor Nova Grizzy Backport Bug Fix

2014-02-25 Thread Alan Pevec
Hi Michael,

2014-02-25 5:07 GMT+01:00 Michael Davies mich...@the-davies.net:
 I have a Nova Grizzly backport bug[1] in review[2] that has been hanging
 around for 4 months waiting for one more +2 from a stable team person.

thanks for the backport! BTW stable team list is openstack-stable-maint (CCed).

 If there's someone kind enough to bump this through, it'd be appreciated ;)

Grizzly branches are supposed to receive only security and
life-support patches (those keeping them working when dependencies
change).
Currently Grizzly Nova needs https://review.openstack.org/76020 to
support latest Boto.

 [1] https://launchpad.net/bugs/1188543

That bug is Low - does that correctly reflect its priority?
Backports are generally Medium or higher.

 [2] https://review.openstack.org/#/c/54460/

I've sent it to check queue, it should fail due to Boto issue above.


Cheers,
Alan



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Thierry Carrez
Sean Dague wrote:
 So, that begs a new approach. Because I think at this point even if we
 did put out Nova v3, there can never be a v4. It's too much, too big,
 and doesn't fit in the incremental nature of the project. So whatever
 gets decided about v3, the thing that's important to me is a sane way to
 be able to add backwards compatible changes (which we actually don't
 have today, and I don't think any other service in OpenStack does
 either), as well a mechanism for deprecating parts of the API. With some
 future decision about whether removing them makes sense.

I agree with Sean. Whatever solution we pick, we need to make sure it's
solid enough that it can handle further evolutions of the Nova API
without repeating this dilemma tomorrow. V2 or V3, we would stick to it
for the foreseeable future.

Between the cleanup of the API, the drop of XML support, and including a
sane mechanism for supporting further changes without major bumps of the
API, we may have enough to technically justify v3 at this point. However
from a user standpoint, given the surface of the API, it can't be
deprecated fast -- so this ideal solution only works in a world with
infinite maintenance resources.

Keeping V2 forever is more like a trade-off, taking into account the
available maintenance resources and the reality of Nova's API huge
surface. It's less satisfying technically, especially if you're deeply
aware of the API incoherent bits, and the prospect of living with some
of this incoherence forever is not really appealing.

Choosing between the two is about (1) assessing if we would have the
resources to maintain V2 and V3 in parallel for some time, and (2)
evaluating how dirty the V2 API is, how much of it we could fix in a
backward-compatible manner, and if we are ready to live with the
remaining dirtiness forever.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] Nova Bug Scrub meeting

2014-02-25 Thread John Garbutt
On 25 February 2014 02:35, Tracy Jones tjo...@vmware.com wrote:
 Hi all - I have set up the nova bug scrub meeting for Wednesdays at 1630 UTC
 in the #openstack-meeting-3 IRC channel

 The first meeting will be all about triaging the 117 un-triaged bugs (here).

Thanks for setting this up.

I would rather we use the meeting as a status sync-up, rather than
live triaging.
But I am willing to give the live triaging a go. It makes sure we get some done.

 https://wiki.openstack.org/wiki/Meetings/NovaBugScrub#Weekly_OpenStack_Nova_Bug_Scrub_Meeting

 Weekly on Wednesday at 1630 UTC

The invite I got wasn't fixed in UTC.
It might just have been my client that mangled that, but it's worth a check.

 IRC channel: #openstack-meeting-3
 Chair (to contact for more information): Tracy Jones
 See Meetings/NovaBugScrub for an agenda

One idea for an extra agenda item: it would be good to take a look at
the un-owned bug tags:
https://wiki.openstack.org/wiki/Nova/BugTriage

Also, would be good to double check all the owners are still active in
their respective areas. Some areas might need more than one person
these days.

I added a few links from the Bug triage day emails into the wiki page,
which will hopefully help.

Thanks,
John



[openstack-dev] [Murano] Community meeting reminder - 02/25/2014

2014-02-25 Thread Alexander Tivelkov
Hi folks,

This is just a reminder that we are going to have a regular weekly meeting
in IRC today at 17:00 UTC (9am PST). The agenda is available at
https://wiki.openstack.org/wiki/Meetings/MuranoAgenda#Agenda

This may be a good chance to discuss all the recent questions related to
Murano DSL and object-oriented approach, review the current status of our
incubation request and so on.
As usual, please feel free to add your own agenda items.


--
Regards,
Alexander Tivelkov


Re: [openstack-dev] [nova] why doesn't _rollback_live_migration() always call rollback_live_migration_at_destination()?

2014-02-25 Thread John Garbutt
On 24 February 2014 22:14, Chris Friesen chris.frie...@windriver.com wrote:
 I'm looking at the live migration rollback code and I'm a bit confused.

 When setting up a live migration we unconditionally run
 ComputeManager.pre_live_migration() on the destination host to do various
 things including setting up networks on the host.

 If something goes wrong with the live migration in
 ComputeManager._rollback_live_migration() we will only call
 self.compute_rpcapi.rollback_live_migration_at_destination() if we're doing
 block migration or volume-backed migration that isn't shared storage.

 However, looking at ComputeManager.rollback_live_migration_at_destination(),
 I also see it cleaning up networking as well as block device.

 What happens if we have a shared-storage instance that we try to migrate and
 fail and end up rolling back?  Are we going to end up with messed-up
 networking on the destination host because we never actually cleaned it up?

I had some WIP code up to clean that up as part of the move to
conductor; it's massively confusing right now.

Looks like a bug to me.

I suspect the real issue is that some parts of:
self.driver.rollback_live_migration_at_destination(context, instance,
network_info, block_device_info)
Need more information about if there is shared storage being used or not.
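A sketch of the kind of fix implied here. The flag and call names are simplified stand-ins, not the real compute manager code:

```python
# Hypothetical sketch of _rollback_live_migration cleanup logic.
# Today destination cleanup is skipped for shared-storage migrations,
# which leaves the networking set up by pre_live_migration behind.
# One shape of a fix: always roll back at the destination, and pass
# enough information for the driver to skip disk cleanup when the
# storage is shared.

def rollback(migration, cleanup_at_destination):
    # Always undo what pre_live_migration did on the destination;
    # shared storage only changes *what* gets cleaned up, not whether.
    cleanup_at_destination(
        destroy_disks=not migration["shared_storage"],
        cleanup_networks=True,
    )

calls = []
rollback({"shared_storage": True},
         lambda **kw: calls.append(kw))
assert calls == [{"destroy_disks": False, "cleanup_networks": True}]
```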

John



Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Eugene Nikanorov
Hi Stephen,

My comments inline:


On Tue, Feb 25, 2014 at 6:07 AM, Stephen Balukoff sbaluk...@bluebox.netwrote:

 Hi y'all,

 Jay, in the L7 example you give, it looks like you're setting SSL
 parameters for a given load balancer front-end. Do you have an example you
 can share where where certain traffic is sent to one set of back-end nodes,
 and other traffic is sent to a different set of back-end nodes based on the
 URL in the client request? (I'm trying to understand how this can work
 without the concept of 'pools'.)  Also, what if the first group of nodes
 needs a different health check run against it than the second group of
 nodes?

Obviously any kind of load balancer API needs to have a concept of 'pool' if
we want to go beyond a single group of nodes.
So the API that is convenient at first glance will need to introduce all
those concepts that we already have.


 As far as hiding implementation details from the user:  To a certain
 degree I agree with this, and to a certain degree I do not: OpenStack is a
 cloud OS fulfilling the needs of supplying IaaS. It is not a PaaS. As such,
 the objects that users deal with largely are analogous to physical pieces
 of hardware that make up a cluster, albeit these are virtualized or
 conceptualized. Users can then use these conceptual components of a cluster
 to build the (virtual) infrastructure they need to support whatever
 application they want. These objects have attributes and are expected to
 act in a certain way, which again, are usually analogous to actual hardware.

I'm really not sure what Mark McClain and some other folks see as
implementation details. To me the 'instance' concept is as logical as
the others (vips/pools/etc). But anyway, it looks like the majority of
those in the discussion see it as a redundant concept.


 If we were building a PaaS, the story would be a lot different--  but what
 we are building is a cloud OS that provides Infrastructure (as a service).

 I think the concept of a 'load balancer' or 'load balancer service' is one
 of these building blocks that has attributes and is expected to act in a
 certain way. (Much the same way cinder provides block devices or swift
 provides an object store.) And yes, while you can do away with a lot of
 the implementation details and use a very simple model for the simplest use
 case, there are a whole lot of load balancer use cases more complicated
 than that which don't work with the current model (or even a small
 alteration to the current model). If you don't allow for these more
 complicated use cases, you end up with users stacking home-built software
 load balancers behind the cloud OS load balancers in order to get the
 features they actually need. (I understand this is a very common topology
 with ELB, because ELB simply isn't capable of doing advanced things, from
 the user's perspective.) In my opinion, we should be looking well beyond
 what ELB can do.

Agree on ELB. Existing public APIs (ELB/Libra) are not much better in terms
of feature coverage than what we have already.


 :P Ideally, almost all users should not have to hack together their own
 load balancer because the cloud OS load balancer can't do what they need it
 to do.

Totally agree.



 Also, from a cloud administrator's point of view, the cloud OS needs to be
 aware of all the actual hardware components, virtual components, and other
 logical constructs that make up the cloud in order to be able to
 effectively maintain it.

Agree, however actual hardware is beyond the logical LBaaS API, though it
could be part of an admin LBaaS API.


 Again, almost all the details of this should be hidden from the user. But
 these details must not be hidden from the cloud administrator. This means
 implementation details will be represented somehow, and will be visible to
 the cloud administrator.
 Yes, the focus needs to be on making the user's experience as simple as
 possible. But we shouldn't sacrifice powerful capabilities for a simpler
 experience. And if we ignore the needs of the cloud administrator, then we
 end up with a cloud that is next to impossible to practically administer.

 Do y'all disagree with this, and if so, could you please share your
 reasoning?

Personally I agree; it was always a priority for the API to accommodate both
simple and advanced scenarios.

Thanks,
Eugene.


Re: [openstack-dev] [Neutron]Do you think tanent_id should be verified

2014-02-25 Thread Salvatore Orlando
I understand that the fact that resources with invalid tenant_ids can be
created (only with admin rights, at least for Neutron) can be annoying.

However, I support Jay's point on cross-project interactions. If tenant_id
validation (and orphaned resource management) can't be efficiently handled,
then I'd rather let 3rd party scripts deal with orphaned and invalid
resources.

I reckon that it might be worth experimenting with whether the notifications
sent by Keystone (see Dolph's post on this thread) can be used to deal with
orphaned resources.
For tenant_id validation, anything involving an extra round trip to
keystone would not be efficient in my opinion. If there is a way to perform
this validation in the same call which validates the tenant auth_token then
it's a different story.
Notifications from keystone *could* be used to build a local (persistent
perhaps) cache of active tenant identifiers. However, this would require
reliable notifications, as well as appropriate cache management, which is
often less simple than it looks.
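A toy sketch of such a notification-driven cache. The event type strings and payload shape are loose assumptions about Keystone's notifications, made up for illustration:

```python
# Illustrative in-memory cache of active tenant ids, kept up to date
# from (assumed) keystone notification payloads. A real version would
# need persistence and reconciliation for missed notifications, which
# is exactly the reliability concern raised above.

class TenantCache(object):
    def __init__(self):
        self._active = set()

    def handle_notification(self, event_type, payload):
        tenant_id = payload.get("resource_info")
        if event_type == "identity.project.created":
            self._active.add(tenant_id)
        elif event_type == "identity.project.deleted":
            self._active.discard(tenant_id)

    def is_active(self, tenant_id):
        return tenant_id in self._active

cache = TenantCache()
cache.handle_notification("identity.project.created",
                          {"resource_info": "4209c294d1bb4c36acdfaa885075e0f1"})
assert cache.is_active("4209c294d1bb4c36acdfaa885075e0f1")
cache.handle_notification("identity.project.deleted",
                          {"resource_info": "4209c294d1bb4c36acdfaa885075e0f1"})
assert not cache.is_active("4209c294d1bb4c36acdfaa885075e0f1")
```

With such a cache in place, validating a tenant_id becomes a local set lookup rather than an extra round trip to keystone per request.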

Salvatore


On 25 February 2014 05:23, Lingxian Kong anlin.k...@gmail.com wrote:



 2014-02-25 11:25 GMT+08:00 Dong Liu willowd...@gmail.com:

 Thanks Jay, now I understand that neutron will probably not handle tenant
 creation/deletion notifications from keystone.

 There is another question, such as creating subnet request body:
 {
   "subnet": {
     "name": "test_subnet",
     "enable_dhcp": true,
     "network_id": "57596b26-080d-4802-8cce-4318b7e543d5",
     "ip_version": 4,
     "cidr": "10.0.0.0/24",
     "tenant_id": "4209c294d1bb4c36acdfaa885075e0f1"
   }
 }

 So, this is exactly what I mean: it is the 'tenant_id' here that should be
 validated. I insist this could be done via some middleware or similar.
 As we know, the tenant_id can only be specified by the admin tenant.

 In my test, the tenant_id I filled in the body can be any string (e.g. a
 name, a uuid, etc.). But I think the tenant's existence (I mean whether the
 tenant exists in keystone) should be verified; if not, the subnet I created
 will be a useless resource.

 Regards,
 Dong Liu


 On 2014-02-25 0:22, Jay Pipes Wrote:

 On Mon, 2014-02-24 at 16:23 +0800, Lingxian Kong wrote:

 I think 'tenant_id' should always be validated when creating neutron
 resources, whether or not Neutron can handle the notifications from
 Keystone when a tenant is deleted.


 -1

 Personally, I think this cross-service request is likely too expensive
 to do on every single request to Neutron. It's already expensive enough
 to use Keystone when not using PKI tokens, and adding another round trip
 to Keystone for this kind of thing is not appealing to me. The tenant is
 already validated when it is used to get the authentication token used
 in requests to Neutron, so other than the scenarios where a tenant is
 deleted in Keystone (which, with notifications in Keystone, there is now
 a solution for), I don't see much value in the extra expense this would
 cause.

 Best,
 -jay







 --
 *---*
 *Lingxian Kong*
 Huawei Technologies Co.,LTD.
 IT Product Line CloudOS PDU
 China, Xi'an
 Mobile: +86-18602962792
 Email: konglingx...@huawei.com; anlin.k...@gmail.com





[openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Sylvain Bauza
Hi,

Thanks to WSME 0.6, there is now the possibility of adding extra attributes
to a Dynamic basetype.
I successfully ended up mapping my extra attributes from a dict to a
DynamicType using add_attributes(), but I'm now stuck with POST requests
having dynamic body data.

Although I'm declaring my DynamicType in wsexpose(), I can't tell WSME to
map the pecan.request.body dict to my wsattrs and create new attributes
if none match.

Any idea how to do this? I looked at WSME, and the type is registered at
API startup, not when called, so the get_arg() method fails to fill
in the gaps.

I can possibly do a workaround within my post function, where I could
introspect pecan.request.body and add extra attributes, but it sounds a bit
crappy, as I have to handle the mimetype already managed by WSME.
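For illustration, a framework-agnostic sketch of that workaround. The type and attribute names are invented; only the add_attributes() idea comes from the message above:

```python
import json

# Sketch of the workaround: split an incoming JSON body into the
# attributes a type already declares and the extra ones that would
# need to be registered dynamically (e.g. via add_attributes()).
# The declared set below is a stand-in, not a real WSME type.

def split_body(raw_body, declared):
    data = json.loads(raw_body)
    known = {k: v for k, v in data.items() if k in declared}
    extra = {k: v for k, v in data.items() if k not in declared}
    return known, extra

declared = {"name", "capacity"}
known, extra = split_body(
    '{"name": "lease1", "capacity": 4, "rack": "b2"}', declared)
assert known == {"name": "lease1", "capacity": 4}
assert extra == {"rack": "b2"}  # candidates for add_attributes()
```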


Thanks,
-Sylvain


[openstack-dev] [Glance][Artifacts] Artifact dependencies: Strict vs Soft

2014-02-25 Thread Alexander Tivelkov
Hi folks,

While I am still working on designing artifact-related APIs (sorry, the
task is taking me longer than expected due to a heavy load in Murano
related to the preparation of the incubation request), I've got a topic I
wanted to discuss with the broader audience.

It seems like we have agreed on the idea that the artifact storage should
support dependencies between artifacts: the ability for any given artifact
to reference some other artifacts as its dependencies, and an API call
which will allow retrieving the whole dependency graph of a given artifact
(i.e. its direct and transitive dependencies).

Another idea which was always kept in mind when we were designing the
artifact concept was artifact versioning: the system should allow storing
different artifacts having identical names but different versions, and
the API should be able to return the latest (based on some notation)
version of an artifact. Being able to construct such queries actually
gives the ability to define a kind of alias, so a url like
/v2/artifacts?type=image&name=ubuntu&version=latest will always return the
latest version of the given artifact (the ubuntu image in this case). The
need to be able to define such aliases was expressed in [1], and the
ability to satisfy this need with the artifact API was mentioned at [2].

But combining these two ideas brings up an interesting question: how should
artifacts define their dependencies? Should this be an explicit strict
reference (i.e. referencing a specific artifact by its id), or an implicit
soft reference, similar to the alias described above (i.e. specifying the
dependency as "A requires the latest version of B" or even "A requires
0.2 <= B < 0.3")?
The latter seems familiar: it is similar to pip's dependency specification,
right? This approach obviously may be very useful (at least I clearly see
its benefits for Murano's application packages), but it implies lazy
evaluation, which may dramatically impact performance.
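A toy resolver for such soft references, purely for illustration. Versions are compared as integer tuples; a real implementation would use a proper version scheme:

```python
# Resolve a soft reference like: A requires 0.2 <= B < 0.3, picking
# the latest available version of B inside the range. This is the
# lazy-evaluation step that would run at read time.

def parse(version):
    return tuple(int(p) for p in version.split("."))

def resolve(available, lower, upper):
    # Latest available version v with lower <= v < upper.
    matching = [v for v in available
                if parse(lower) <= parse(v) < parse(upper)]
    return max(matching, key=parse) if matching else None

versions_of_b = ["0.1.9", "0.2.0", "0.2.4", "0.3.0"]
assert resolve(versions_of_b, "0.2", "0.3") == "0.2.4"
assert resolve(versions_of_b, "0.4", "0.5") is None
```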
In contrast, the former approach - with explicit references - requires much
less computation. Even more, if we decide that artifact dependencies
are immutable, this will allow us to denormalize the storage of the
dependency graph and store all the transitive dependencies of a given
artifact in a flat table, so the dependency graph may be returned by a
single SQL query, without a need for the recursive calls which are otherwise
unavoidable in a normalized database storing such hierarchical
structures.
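In-memory dicts stand in for the two tables in this sketch of the denormalization idea; the write-time maintenance is the point:

```python
# Alongside direct dependencies, maintain a flat "transitive" table at
# write time, so reading the full dependency graph of an artifact is a
# single lookup rather than a recursive walk. This relies on
# dependencies being immutable once an artifact is stored.

direct = {}      # artifact -> set of direct dependencies
transitive = {}  # artifact -> flat set of all dependencies

def add_artifact(artifact, deps):
    direct[artifact] = set(deps)
    flat = set(deps)
    for d in deps:
        flat |= transitive.get(d, set())
    transitive[artifact] = flat

add_artifact("C", [])
add_artifact("B", ["C"])
add_artifact("A", ["B"])
assert direct["A"] == {"B"}
assert transitive["A"] == {"B", "C"}  # one flat read, no recursion
```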

Meanwhile, the mutability of dependencies is also unclear to me: the ability
to modify them seems to have its own pros and cons, so this is another topic
to discuss.

I'd like to hear your opinion on all of these. Any feedback is welcome, and
we may come back to this topic at Thursday's meeting.


Thanks!


[1] https://blueprints.launchpad.net/glance/+spec/glance-image-aliases
[2] https://blueprints.launchpad.net/glance/+spec/artifact-repository-api


--
Regards,
Alexander Tivelkov


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Christopher Yeoh
On Tue, 25 Feb 2014 10:31:42 +
John Garbutt j...@johngarbutt.com wrote:

 On 25 February 2014 06:11, Christopher Yeoh cbky...@gmail.com wrote:
  On Mon, 24 Feb 2014 17:37:04 -0800
  Dan Smith d...@danplanet.com wrote:
 
   onSharedStorage = True
   on_shared_storage = False
 
  This is a good example. I'm not sure it's worth breaking users _or_
  introducing a new microversion for something like this. This is
  definitely what I would call a purity concern as opposed to
  usability.
 
 I thought micro versioning was so we could make backwards compatible
 changes. If we make breaking changes we need to support the old and
 the new for a little while.

Isn't the period that we have to support the old and the new for these
sorts of breaking changes exactly the same period of time that we'd
have to keep V2 around if we released V3? Either way we're forcing
people off the old behaviour.

 I am tempted to say the breaking changes just create a new extension,
 but there are other ways...

Oh, please no :-) Essentially that is no different to creating a new
extension in the v3 namespace except it makes the v2 namespace even
more confusing?

 For return values:
 * get new clients to send Accept headers, to version the response
 * this amounts to the major version
 * for those requesting the new format, they get the new format
 * for those getting the old format, they get the old format
 
 For this case, on requests:
 * we can accept both formats, or maybe that also depends on the
 Accept headers (which is a bit funky, granted).
 * only document the new one
 * maybe in two years remove the old format? maybe never?
 

So the idea of accept headers seems to me like just an alternative to
using a different namespace, except that a new namespace is much cleaner.
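For concreteness, a minimal sketch of the Accept-header negotiation under discussion. The media type and parameter name are invented for illustration; the point is only that old clients, sending no version parameter, keep getting the old format by default:

```python
import re

# Parse a requested API version out of the Accept header, falling
# back to the legacy version when none is given.

def requested_version(accept_header, default=2):
    for part in (accept_header or "").split(","):
        match = re.search(r"application/json\s*;\s*version=(\d+)", part)
        if match:
            return int(match.group(1))
    return default

assert requested_version("application/json; version=3") == 3
assert requested_version("application/json") == 2   # legacy client
assert requested_version(None) == 2
```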

Same for URLs: we could have the old and new names, with the new URL
always returning the new format (think instance_actions ->
server_actions).
 
If the code only differs in presentation, that implies much less
double testing than two full versions of the API. It seems like we
could make some of these clean-ups and keep the old version with
relatively few changes.

As I've said before the API layer is very thin. Essentially most of it
is just about parsing the input, calling something, then formatting the
output. But we still do double testing even though the difference
between them most of the time is just presentation.  Theoretically if
the unittests were good enough in terms of checking the API we'd only
have to tempest test a single API but I think experience has shown that
we're not that good at doing exhaustive unittests. So we use the
fallback of throwing tempest at both APIs

 We could port the V2 classes over to the V3 code, to get the code
 benefits.

I'm not exactly sure what you mean here. If you mean backporting say
the V3 infrastructure so V2 can use it, I don't want people
underestimating the difficulty of that. When we developed the new
architecture we had the benefit of being able to bootstrap it without
it having to work for a while. Eg. getting core bits like servers and
images up and running without having to have the additional parts which
depend on it working with it yet. With V2 we can't do that, so
operating on a active system is going to be more difficult. The CD
people will not be happy with breakage :-)

But even then it took a considerable amount of effort - both coding and
review to get the changes merged, and that was back in Havana when it
was easier to get review bandwidth. And we also discovered that, especially
with that sort of infrastructure work its very difficult to get many
people working parallel - or even one person working on too many things
at one time, because you end up in merge conflict/rebase hell. I've been
there a lot in Havana and Icehouse.

Return codes are a bit harder; it seems odd to change those based on
Accept headers, but maybe I could live with that.
 
 
 Maybe this is the code mess we were trying to avoid, but I feel we
 should at least see how bad this kind of approach would look?

So to me this approach really doesn't look a whole lot different to
just having a separate v2/v3 codebase in terms of maintenance. LOC
would be lower, but testing load is similar if we make the same sorts
of changes. Some things like input validation are a bit harder to
implement (because you need quite lax input validation for v2-old and
strict for v2-new).

Also how long are we going to spend on this sort of exploratory work?
The longer we take on it, the more we risk V3 slipping in Juno if we
take that route.

If we really need a super long deprecation period for V2 I'm going to
suggest again the idea of a V2 proxy which translates to V3 speak and does
the necessary proxying. From a testing point of view we'd only need to
test the input and output of the proxy (ie correct V3 requests are
emitted and correct V2 output is returned). And we already have tempest
tests for V2 which we could use for more general 

Re: [openstack-dev] [Nova][VMWare] VMware VM snapshot

2014-02-25 Thread Qin Zhao
What I mean is the snapshot feature of vSphere, which is described on this page --
http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc_50%2FGUID-CA948C69-7F58-4519-AEB1-739545EA94E5.html

It is very useful if the user plans to perform some risky operations in a
VM. I am not quite sure if we can model it in Nova, and let the user
create snapshot chains via the Nova API. Has it been discussed in a design
session or on the mailing list? Does anybody know?


On Tue, Feb 25, 2014 at 6:40 PM, John Garbutt j...@johngarbutt.com wrote:

 On 25 February 2014 09:27, Qin Zhao chaoc...@gmail.com wrote:
  Hi,
 
  One simple question about VCenter driver. I feel the VM snapshot
 function of
  VCenter is very useful and is loved by VCenter users. Does anybody think
  about to let VCenter driver support it?

 It depends if that can be modelled well with the current
 Nova/Cinder/Glance primitives.

 If you do boot from volume, and you see the volume snapshots, and they
 behave how cinder expects, and you can model that snapshot as an image
 in glance that you can boot new instances from, then maybe it would
 work just fine. But we need to take care not to bend the current API
 primitives too far out of place.

 I remember there being some talk about this at the last summit. How did
 that go?

 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Qin Zhao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova Bug Scrub meeting

2014-02-25 Thread Kashyap Chamarthy
On Tue, Feb 25, 2014 at 11:07:59AM +, John Garbutt wrote:
 On 25 February 2014 02:35, Tracy Jones tjo...@vmware.com wrote:
  Hi all - i have set up the nova bug scrub meeting for Wednesdays at 1630 UTC
  in the #openstack-meeting-3 IRC channel
 
  The first meeting will be all about triaging the 117 un-triaged bugs (here).
 
 Thanks for setting this up.
 
 I would rather we use the meeting as a status sync up, rather than
 live triaging.

Sounds good.

 But I am willing to give the live triaging a go. It makes sure we get
 some done.

FWIW, /me conducted and participated[1] in two live bug triage meetings
for the RDO project recently; it was surprisingly productive :-) 

We can co-ordinate on an etherpad instance for triage notes.

I'll volunteer to triage Libvirt/KVM/QEMU bugs. (Although, I should
admit I didn't quite manage to keep on top of all the bugs in a
consistent manner, will try.)


  [1] https://www.redhat.com/archives/rdo-list/2014-February/msg00050.html

 
  https://wiki.openstack.org/wiki/Meetings/NovaBugScrub#Weekly_OpenStack_Nova_Bug_Scrub_Meeting
 
  Weekly on Wednesday at 1630 UTC

I may not be active on IRC at that time (I'm in UTC+5:30), but will
participate during my time-zone.

-- 
/kashyap



[openstack-dev] [nova] An analysis of code review in Nova

2014-02-25 Thread Matthew Booth
I'm new to Nova. After some frustration with the review process,
specifically in the VMware driver, I decided to try to visualise how the
review process is working across Nova. To that end, I've created 2
graphs, both attached to this mail.

Both graphs show a nova directory tree pruned at the point that a
directory contains less than 2% of total LOCs. Additionally, /tests and
/locale are pruned as they make the resulting graph much busier without
adding a great deal of useful information. The data for both graphs was
generated from the most recent 1000 changes in gerrit on Monday 24th Feb
2014. This includes all pending changes, just over 500, and just under
500 recently merged changes.

pending.svg shows the percentage of LOCs which have an outstanding
change against them. This is one measure of how hard it is to write new
code in Nova.

merged.svg shows the average length of time between the
ultimately-accepted version of a change being pushed and being approved.

Note that there are inaccuracies in these graphs, but they should be
mostly good. Details of generation here:
https://github.com/mdbooth/heatmap. This code is obviously
single-purpose, but is free for re-use if anyone feels so inclined.

The first graph above (pending.svg) is the one I was most interested in,
and shows exactly what I expected it to. Note the size of 'vmwareapi'.
If you check out Nova master, 24% of the vmwareapi driver has an
outstanding change against it. It is practically impossible to write new
code in vmwareapi without stomping on an oustanding patch. Compare that
to the libvirt driver at a much healthier 3%.

The second graph (merged.svg) is an attempt to look at why that is.
Again comparing the VMware driver with the libvirt driver, we can see that at 12
days, it takes much longer for a change to be approved in the VMware
driver than in the libvirt driver. I suspect that this isn't the whole
story, which is likely a combination of a much longer review time with
very active development.

What's the impact of this? As I said above, it obviously makes it very
hard to come in as a new developer of the VMware driver when almost a
quarter of it has been rewritten, but you can't see it. I am very new to
this and others should validate my conclusions, but I also believe this
is having a detrimental impact to code quality. Remember that the above
12 day approval is only the time for the final version to be approved.
If a change goes through multiple versions, each of those also has an
increased review period, meaning that the time from first submission to
final inclusion is typically very, very protracted. The VMware driver
has its fair share of high priority issues and functionality gaps, and
the developers are motived to get it in the best possible shape as
quickly as possible. However, it is my impression that when problems
stem from structural issues, the developers choose to add metaphorical
gaffer tape rather than fix them, because fixing both creates a
dependency chain which pushes the user-visible fix months into the
future. In this respect the review process is dysfunctional, and is
actively detrimental to code quality.

Unfortunately I'm not yet sufficiently familiar with the project to
offer a solution. A core reviewer who regularly looks at it is an
obvious fix. A less obvious fix might involve a process which allows
developers to work on a fork which is periodically merged, rather like
the kernel.

Matt
-- 
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
attachment: pending.svg
attachment: merged.svg


Re: [openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-25 Thread Dong Liu

Salvatore, thank you very much for your reply.

I know that there was a proposal[1] to handle the message security 
stuff. For this proposal implementation, there was a blueprint[2] of 
keystone will merge in Icehouse.


I'm looking forward to the notification handling could implemente in 
Juno. Although I'm a new bee here, if it is possible, I wish I can take 
part in this in the days to come.


[1] https://wiki.openstack.org/wiki/MessageSecurity
[2] https://blueprints.launchpad.net/keystone/+spec/key-distribution-server

Regards,
Dong Liu

On 2014-02-25 19:48, Salvatore Orlando Wrote:

I understand that the fact that resources with invalid tenant_ids can be
created (only with admin rights, at least for Neutron) can be annoying.

However, I support Jay's point on cross-project interactions. If
tenant_id validation (and orphaned resource management) can't be
efficiently handled, then I'd rather let 3rd party scripts dealing with
orphaned and invalid resources.

I reckon that it might be worth experimenting whether the notifications
sent by Keystone (see Dolph's post on this thread) can be used to deal
with orphaned resources.
For tenant_id validation, anything involving an extra round trip to
keystone would not be efficient in my opinion. If there is a way to
perform this validation in the same call which validates the tenant
auth_token then it's a different story.
Notifications from keystone *could* be used to build a local (persistent
perhaps) cache of active tenant identifiers. However, this would require
reliable notifications, as well as appropriate cache management, which
is often less simple than what it looks like.
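As a rough illustration of Salvatore's cache idea (the event-type strings follow Keystone's notification naming, but the payload shape and class here are assumptions, not real Neutron code):

```python
# Illustrative sketch of a local cache of active tenant IDs kept up to date
# from identity notifications. Real code would consume oslo.messaging
# notifications from Keystone and would need persistence and resync logic.

class TenantCache(object):
    def __init__(self):
        self._active = set()

    def handle_notification(self, event_type, payload):
        tenant_id = payload.get("resource_info")
        if event_type == "identity.project.created":
            self._active.add(tenant_id)
        elif event_type == "identity.project.deleted":
            self._active.discard(tenant_id)

    def is_active(self, tenant_id):
        return tenant_id in self._active

cache = TenantCache()
cache.handle_notification("identity.project.created",
                          {"resource_info": "4209c294d1bb4c36acdfaa885075e0f1"})
assert cache.is_active("4209c294d1bb4c36acdfaa885075e0f1")
cache.handle_notification("identity.project.deleted",
                          {"resource_info": "4209c294d1bb4c36acdfaa885075e0f1"})
assert not cache.is_active("4209c294d1bb4c36acdfaa885075e0f1")
```

The hard parts Salvatore mentions - reliable delivery and cache invalidation - are precisely what this toy version omits.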

Salvatore


On 25 February 2014 05:23, Lingxian Kong anlin.k...@gmail.com wrote:



2014-02-25 11:25 GMT+08:00 Dong Liu willowd...@gmail.com:

Thanks Jay, now I know maybe neutron will not handle tenant
creating/deleting notifications which come from keystone.

There is another question, such as creating subnet request body:
{
   subnet: {
 name: test_subnet,
 enable_dhcp: true,
 network_id: 57596b26-080d-4802-8cce-4318b7e543d5,
 ip_version: 4,
 cidr: 10.0.0.0/24,
 tenant_id: 4209c294d1bb4c36acdfaa885075e0f1


So, this is exactly what I mean for 'tenant_id' here that should be
validated.
I insist this could be done via some middleware or else.

   }
}
As we know, the tenant_id can only be specified by the admin tenant.

In my test, the tenant_id I filled in the body can be any string
(e.g., a name, a uuid, etc.), but I think this tenant's existence
(I mean whether the tenant exists in keystone) should be verified; if
not, the subnet I created will be a useless resource.

Regards,
Dong Liu


On 2014-02-25 0:22, Jay Pipes Wrote:

On Mon, 2014-02-24 at 16:23 +0800, Lingxian Kong wrote:

I think 'tenant_id' should always be validated when
creating neutron
resources, whether or not Neutron can handle the
notifications from
Keystone when tenant is deleted.


-1

Personally, I think this cross-service request is likely too
expensive
to do on every single request to Neutron. It's already
expensive enough
to use Keystone when not using PKI tokens, and adding
another round trip
to Keystone for this kind of thing is not appealing to me.
The tenant is
already validated when it is used to get the
authentication token used
in requests to Neutron, so other than the scenarios where a
tenant is
deleted in Keystone (which, with notifications in Keystone,
there is now
a solution for), I don't see much value in the extra expense
this would
cause.

Best,
-jay







--
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an

[openstack-dev] Proposal for filtering search option to backup list

2014-02-25 Thread 한승진
There is a blueprint for cinder to add a search option for listing backups.

We can filter cinder volumes and snapshots by display_name or
status.

We also need a search option for the backup list.

I posted a blueprint for a filtering option for the backup list:

https://blueprints.launchpad.net/python-cinderclient/+spec/add-filter-options-to-backup-list

Could you approve this blueprint in order to start developing?


Re: [openstack-dev] Help a poor Nova Grizzy Backport Bug Fix

2014-02-25 Thread Alan Pevec
 [2] https://review.openstack.org/#/c/54460/

 I've sent it to check queue, it should fail due to Boto issue above.

It also failed due to pyopenssl 0.14 update pulling a new dep which
fails to build in Grizzly devstack,
should be fixed by https://review.openstack.org/76189

Cheers,
Alan



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Ken'ichi Ohmichi
2014-02-25 19:48 GMT+09:00 Thierry Carrez thie...@openstack.org:
 Sean Dague wrote:
 So, that begs a new approach. Because I think at this point even if we
 did put out Nova v3, there can never be a v4. It's too much, too big,
 and doesn't fit in the incremental nature of the project. So whatever
 gets decided about v3, the thing that's important to me is a sane way to
 be able to add backwards compatible changes (which we actually don't
 have today, and I don't think any other service in OpenStack does
 either), as well a mechanism for deprecating parts of the API. With some
 future decision about whether removing them makes sense.

 I agree with Sean. Whatever solution we pick, we need to make sure it's
 solid enough that it can handle further evolutions of the Nova API
 without repeating this dilemma tomorrow. V2 or V3, we would stick to it
 for the foreseeable future.

 Between the cleanup of the API, the drop of XML support, and including a
 sane mechanism for supporting further changes without major bumps of the
 API, we may have enough to technically justify v3 at this point. However
 from a user standpoint, given the surface of the API, it can't be
 deprecated fast -- so this ideal solution only works in a world with
 infinite maintenance resources.

 Keeping V2 forever is more like a trade-off, taking into account the
 available maintenance resources and the reality of Nova's API huge
 surface. It's less satisfying technically, especially if you're deeply
 aware of the API incoherent bits, and the prospect of living with some
 of this incoherence forever is not really appealing.

What is the maintenance cost for keeping both APIs?
I think Chris and his team have already paid most of it; the work of
porting the existing v2 APIs to v3 APIs is almost done.
So I'd like to clarify the maintenance cost we are discussing.

If the cost means that we should implement both API methods when creating a
new API, how about implementing an internal proxy from the v2 to the v3 API?
When creating a new API, it is enough to implement the API method for v3, and
when receiving a v2 request, Nova translates it to the v3 API.
The request styles (url, body) of v2 and v3 are different, and this idea makes
new v2 APIs v3 style, but the v2 API already has a lot of inconsistencies,
so it does not seem like a big problem.
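A toy sketch of such an internal translation layer (the key mapping below is invented purely for illustration; the real v2/v3 differences go well beyond key renames):

```python
# Hypothetical v2 -> v3 request-body translation. The rename table is an
# example only, not the actual Nova API mapping.

V2_TO_V3_KEYS = {"imageRef": "image_ref", "flavorRef": "flavor_ref"}

def translate_v2_body(v2_body):
    """Rewrite a v2-style server-create body into v3 style."""
    server = v2_body.get("server", {})
    translated = {}
    for key, value in server.items():
        translated[V2_TO_V3_KEYS.get(key, key)] = value
    return {"server": translated}

v2 = {"server": {"name": "vm1", "imageRef": "abc", "flavorRef": "1"}}
assert translate_v2_body(v2) == {
    "server": {"name": "vm1", "image_ref": "abc", "flavor_ref": "1"}}
```

With something like this in front of the v3 handlers, only the v3 method needs to be implemented for each new API.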


From the viewpoint of OpenStack interoperability also, I believe we
need a new API.
Many v2 API parameters are not validated. If we implement strict
validation for the v2 API, incompatibility issues happen. That is why we are
implementing input validation for the v3 API. If we stay on the v2 API
forever, we will have this kind of problem forever.
The v2 API is fragile now, so making interoperability depend on the v2
API seems like building on sand. (I know that is a little overstated, but we
have already found a lot of problems of this kind.)


Thanks
Ken'ichi Ohmichi



Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Doug Hellmann
On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza sylvain.ba...@gmail.com wrote:

 Hi,

 Thanks to WSME 0.6, there is now possibility to add extra attributes to a
 Dynamic basetype.
 I successfully ended up showing my extra attributes from a dict to a
 DynamicType using add_attributes() but I'm now stuck with POST requests
 having dynamic body data.

 Although I'm declaring in wsexpose() my DynamicType, I can't say to WSME
 to map the pecan.request.body dict with my wsattrs and create new
 attributes if none matched.

 Any idea on how to do this ? I looked at WSME and the type is registered
 at API startup, not when being called, so the get_arg() method fails to
 fill in the gaps.

 I can possibly do a workaround within my post function, where I could
 introspect pecan.request.body and add extra attributes, so it sounds a bit
 crappy as I have to handle the mimetype already managed by WSME.


I'm not sure I understand the question. Are you saying that the dynamic
type feature works for GET arguments but not POST body content?

Doug





 Thanks,
 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [gantt] scheduler sub-group meeting tomorrow (2/25)

2014-02-25 Thread Sylvain Bauza
Hi Don,

Maybe it would be worth discussing how we could share the blueprints
with people willing to help ?

-Sylvain


2014-02-24 18:08 GMT+01:00 Dugger, Donald D donald.d.dug...@intel.com:

  All-



 I'm tempted to cancel the gantt meeting for tomorrow.  The only topics I
 have are the no-db scheduler update (we can probably do that via email) and
 the gantt code forklift (I've been out with the flu and there's no progress
 on that).



 I'm willing to chair but I'd like to have some specific topics to talk
 about.



 Suggestions anyone?



 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Sylvain Bauza
Let me give you a bit of code then, that's currently WIP with heavy
rewrites planned on the Controller side thanks to Pecan hooks [1]

So, at L102 (GET request) the convert() method passes the result dict as
kwargs, where the Host.__init__() method adds dynamic attributes.
That does work :-)

L108, I'm specifying that my body string is basically a Host object.
Unfortunately, when I provide extra keys that I expect to become extra
attributes, WSME will convert the body into a Host [2], but as the
Host class doesn't yet know which extra attributes are allowed, none of my
extra keys are taken.
As a result, the 'host' (instance of Host) argument of the post() method
does not contain the extra attributes, and thus they are not passed for
creation to my Manager.

As said, I can still get the request body using Pecan directly within the
post() method, but I then would have to manage the mimetype, and do the
adding of the extra attributes there. That's pretty ugly IMHO.
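For illustration, here is a self-contained sketch of that workaround with WSME and Pecan stubbed out (the Host class and host_from_body helper are hypothetical; real code would use wsme.types.DynamicBase's add_attributes() and pecan.request.body, and handle the mimetype):

```python
import json

# Sketch only: split a POST body into declared attributes and extras, and
# register the extras as dynamic attributes. The names below are invented.

class Host(object):
    declared = ("name", "cpus")

    def __init__(self):
        self.extra = {}

    def add_attribute(self, name, value):
        # Stand-in for WSME's dynamic-attribute registration.
        self.extra[name] = value

def host_from_body(raw_body):
    data = json.loads(raw_body)
    host = Host()
    for key, value in data.items():
        if key in Host.declared:
            setattr(host, key, value)
        else:
            host.add_attribute(key, value)
    return host

h = host_from_body('{"name": "compute1", "cpus": 8, "rack": "r12"}')
assert h.name == "compute1"
assert h.extra == {"rack": "r12"}
```

This is essentially the "introspect the body in post()" approach described above, which is why it feels like duplicating what WSME should do itself.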

Thanks,
-Sylvain

[1] http://paste.openstack.org/show/69418/

[2] https://github.com/stackforge/wsme/blob/master/wsmeext/pecan.py#L71


2014-02-25 14:39 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:




 On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza sylvain.ba...@gmail.com wrote:

 Hi,

 Thanks to WSME 0.6, there is now possibility to add extra attributes to a
 Dynamic basetype.
 I successfully ended up showing my extra attributes from a dict to a
 DynamicType using add_attributes() but I'm now stuck with POST requests
 having dynamic body data.

 Although I'm declaring in wsexpose() my DynamicType, I can't say to WSME
 to map the pecan.request.body dict with my wsattrs and create new
 attributes if none matched.

 Any idea on how to do this ? I looked at WSME and the type is registered
 at API startup, not when being called, so the get_arg() method fails to
 fill in the gaps.

 I can possibly do a workaround within my post function, where I could
 introspect pecan.request.body and add extra attributes, so it sounds a bit
 crappy as I have to handle the mimetype already managed by WSME.


 I'm not sure I understand the question. Are you saying that the dynamic
 type feature works for GET arguments but not POST body content?

 Doug





 Thanks,
 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Alex Xu

On 2014-02-25 21:17, Ken'ichi Ohmichi wrote:

2014-02-25 19:48 GMT+09:00 Thierry Carrez thie...@openstack.org:

Sean Dague wrote:

So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project. So whatever
gets decided about v3, the thing that's important to me is a sane way to
be able to add backwards compatible changes (which we actually don't
have today, and I don't think any other service in OpenStack does
either), as well a mechanism for deprecating parts of the API. With some
future decision about whether removing them makes sense.

I agree with Sean. Whatever solution we pick, we need to make sure it's
solid enough that it can handle further evolutions of the Nova API
without repeating this dilemma tomorrow. V2 or V3, we would stick to it
for the foreseeable future.

Between the cleanup of the API, the drop of XML support, and including a
sane mechanism for supporting further changes without major bumps of the
API, we may have enough to technically justify v3 at this point. However
from a user standpoint, given the surface of the API, it can't be
deprecated fast -- so this ideal solution only works in a world with
infinite maintenance resources.

Keeping V2 forever is more like a trade-off, taking into account the
available maintenance resources and the reality of Nova's API huge
surface. It's less satisfying technically, especially if you're deeply
aware of the API incoherent bits, and the prospect of living with some
of this incoherence forever is not really appealing.

What is the maintenance cost for keeping both APIs?
I think Chris and his team have already paid most part of it, the
works for porting
the existing v2 APIs to v3 APIs is almost done.
So I'd like to clarify the maintenance cost we are discussing.

If the cost means that we should implement both API methods when creating a
new API, how about implementing internal proxy from v2 to v3 API?
When creating a new API, it is enough to implement API method for v3 API. and
when receiving a v2 request, Nova translates it to v3 API.
The request styles(url, body) of v2 and v3 are different and this idea makes new
v2 APIs v3 style. but now v2 API has already a lot of inconsistencies.
so it does not seem so big problem.

I want to ask this question too. What is the maintenance cost?
When we release the v3 API, we will freeze the v2 API, so we won't add any
new API into v2.

So does that mean the maintenance cost is much less after the v2 API is
frozen? What I know is that we should keep compute-api backwards-compatible
with the v2 API. What else besides that?



From the viewpoint of OpenStack interoperability also, I believe we
need a new API.
Many v2 API parameters are not validated. If implementing strict
validation for v2 API,
incompatibility issues happen. That is why we are implementing input
validation for
v3 API. If staying v2 API forever, we should have this kind of problem forever.
v2 API is fragile now. So the interoperability should depend on v2
API, that seems
sandbox.. (I know that it is a little overstatement, but we have found
a lot of this kind
of problem already..)


Thanks
Ken'ichi Ohmichi









Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-02-25 Thread Sandhya Dasu (sadasu)
Hi,
As a follow-up from today's IRC: Irena, are you looking to write the
below-mentioned Base/Mixin class that inherits from the
AgentMechanismDriverBase class? When you mentioned port state, were you
referring to the validate_port_binding() method?

Please clarify.

Thanks,
Sandhya

On 2/6/14 7:57 AM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

Hi Bob and Irena,
   Thanks for the clarification. Irena, I am not opposed to a
SriovMechanismDriverBase/Mixin approach, but I want to first figure out
how much common functionality there is. Have you already looked at this?

Thanks,
Sandhya

On 2/5/14 1:58 AM, Irena Berezovsky ire...@mellanox.com wrote:

Please see inline my understanding

-Original Message-
From: Robert Kukura [mailto:rkuk...@redhat.com]
Sent: Tuesday, February 04, 2014 11:57 PM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
usage questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
 Hi,
  I have a couple of questions for ML2 experts regarding support of
 SR-IOV ports.

I'll try, but I think these questions might be more about how the various
SR-IOV implementations will work than about ML2 itself...

 1. The SR-IOV ports would not be managed by ovs or linuxbridge L2
 agents. So, how does a MD for SR-IOV ports bind/unbind its ports to
 the host? Will it just be a db update?

I think whether or not to use an L2 agent depends on the specific SR-IOV
implementation. Some (Mellanox?) might use an L2 agent, while others
(Cisco?) might put information in binding:vif_details that lets the nova
VIF driver take care of setting up the port without an L2 agent.
[IrenaB] Based on VIF_Type that MD defines, and going forward with other
binding:vif_details attributes, VIFDriver should do the VIF pluging part.
As for required networking configuration is required, it is usually done
either by L2 Agent or external Controller, depends on MD.

 
 2. Also, how do we handle the functionality in mech_agent.py, within
 the SR-IOV context?

My guess is that those SR-IOV MechanismDrivers that use an L2 agent would
inherit the AgentMechanismDriverBase class if it provides useful
functionality, but any MechanismDriver implementation is free to not use
this base class if its not applicable. I'm not sure if an
SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being
planned, and how that would relate to AgentMechanismDriverBase.

[IrenaB] Agree with Bob, and as I stated before I think there is a need
for SriovMechanismDriverBase/Mixin that provides all the generic
functionality and helper methods that are common to SRIOV ports.
-Bob

 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org, Irena Berezovsky
 ire...@mellanox.com, Robert Li (baoli)
 ba...@cisco.com, Robert Kukura
 rkuk...@redhat.com, Brian Bowen
 (brbowen) brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
 extra hr of discussion today
 
 Hi,
 Since, openstack-meeting-alt seems to be in use, baoli and myself
 are moving to openstack-meeting. Hopefully, Bob Kukura  Irena can
 join soon.
 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 1:26 PM
 To: Irena Berezovsky ire...@mellanox.com, Robert Li (baoli)
 ba...@cisco.com, Robert Kukura rkuk...@redhat.com,
 OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org, Brian Bowen (brbowen)
 brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
 extra hr of discussion today
 
 Hi all,
 Both openstack-meeting and openstack-meeting-alt are available
 today. Lets meet at UTC 2000 @ openstack-meeting-alt.
 
 Thanks,
 Sandhya
 
 From: Irena Berezovsky ire...@mellanox.com
 Date: Monday, February 3, 2014 12:52 AM
 To: Sandhya Dasu sad...@cisco.com, Robert
 Li (baoli) ba...@cisco.com, Robert Kukura
 rkuk...@redhat.com, OpenStack
 Development Mailing List (not for usage questions)
 

[openstack-dev] Devstack Error

2014-02-25 Thread trinath.soman...@freescale.com
Hi Stackers-

When I configured Jenkins to run the sandbox tempest testing, I saw the
following error while devstack was running:

ERROR: Invalid Openstack Nova credentials

and another error

ERROR: HTTPConnection Pool(host='127.0.0.1', port=8774): Max retries exceeded 
with url: /v2/91dd(caused by class 'socket.error': [Errno 111] Connection 
refused)

I understand that devstack automates setting up the OpenStack environment.

Kindly guide me in resolving the issue.

Thanks in advance.


--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048



Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Martinez, Christian
Yeah, that could be a good approach.
Just add some extra info at tenant creation. Then, in climate-nova, when a 
resource is created, check by the tenant id whether that tenant has a lease 
param (or something like that). If it does, then act accordingly.
Does that make sense? Dina, Sylvain?


-Original Message-
From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com] 
Sent: Thursday, February 20, 2014 4:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design

I agree with Bauza that the main purpose of Climate is to reserve resources, 
and in the case of keystone it should reserve tenants, users, domains, etc.

So, it could be possible that climate is not the module in which the tenant 
lease information should be saved. As stated in the use case, the only 
purpose of this BP is to allow the creation of tenants with start and end 
dates. Then when creating resources in that tenant (like VMs) climate could 
take lease information from the tenant itself and create actual leases for 
the VMs.
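A minimal sketch of the flow described above (the metadata keys and the LeaseManager API are invented for illustration; they are not Climate's actual interfaces):

```python
import datetime

# Sketch: tenant metadata carries lease start/end dates, and resource
# creation turns them into an actual lease for that resource.

class LeaseManager(object):
    def __init__(self):
        self.leases = []

    def create_lease(self, resource_id, start, end):
        self.leases.append({"resource": resource_id,
                            "start": start, "end": end})

def on_resource_created(resource_id, tenant_metadata, manager):
    """If the owning tenant has lease dates attached, lease the resource."""
    start = tenant_metadata.get("lease_start")
    end = tenant_metadata.get("lease_end")
    if start and end:
        manager.create_lease(resource_id, start, end)

mgr = LeaseManager()
meta = {"lease_start": datetime.datetime(2014, 3, 1),
        "lease_end": datetime.datetime(2014, 3, 8)}
on_resource_created("vm-1", meta, mgr)
assert len(mgr.leases) == 1 and mgr.leases[0]["resource"] == "vm-1"
```

In this reading, Climate stays the module that leases resources, and the tenant merely carries the default lease dates.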

Any thoughts of this?

From: Sylvain Bauza sylvain.ba...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, 20 February 2014 15:57
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design




2014-02-20 19:32 GMT+01:00 Dina Belova dbel...@mirantis.com:
Sylvain, as I understand the BP description, Christian is not exactly about 
reserving tenants themselves like we actually do with VMs/hosts - it's just 
naming for that. I think he is about two points:

1) mark some tenants as needed to be reserved - speaking about resources 
assigned to it
2) reserve these resources via Climate (VMs for first approximation)


Well, I understood your BP; it's Christian's message that was a bit 
confusing.
Speaking of marking a tenant as reserved would then mean that it has some 
kind of priority vs. another tenant. But again, as said, how could you ensure 
at marking time (i.e. at lease creation) that Climate can honor contracts for 
resources that haven't been explicitly defined?

I suppose Christian is now speaking about hacking the tenant creation process 
to mark them as needing to be reserved (1st step).


Again, a lease is mutually and exclusively linked with explicit resources. If 
you say 'create a lease' without saying for what resources, I don't see 
the interest in Climate, unless I missed something obvious.

-Sylvain
Christian, correct me if I'm wrong, please! Waiting for your comments


On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza 
sylvain.ba...@gmail.commailto:sylvain.ba...@gmail.com wrote:
Hi Christian,

2014-02-20 18:10 GMT+01:00 Martinez, Christian 
christian.marti...@intel.commailto:christian.marti...@intel.com:

Hello all,
I'm working in the following BP: 
https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept, in 
which the idea is to have the possibility to create special tenants that have 
a lease for all of its associated resources.

The BP is in discussing phase and we were having conversations on IRC about 
what approach should we follow.


Before speaking about implementation, I would definitely like to know the use 
cases you want to address.
What kind of resources do you want to provision using Climate? The basic thing 
is, what is the rationale for hooking tenant creation? Could you 
please be more explicit?

At tenant creation, Climate wouldn't have any information for calculating 
the resources asked for, because the resources wouldn't have been 
allocated yet. So generating a lease on top of this would be like a 
non-formal contract between Climate and the user, accounting for nothing.

The main reason behind Climate is to provide SLAs for either user requests or 
projects requests, meaning that's duty of Climate to guarantee that the desired 
associated resource with the lease will be created in the future.
Speaking of Keystone, the Keystone objects are tenants, users or domains. In 
that case, if Climate would be hooking Keystone, that would say that Climate 
ensures that the cloud will have enough capacity for creating these resources 
in the future.

IMHO, that's not worth implementing it.


First of all, we need to add some parameters or flags during the tenant 
creation so we can know that the associated resources need to have a lease. 
Does anyone know if Keystone has functionality similar to Nova's with regard 
to Hooks/API extensions (something like the stuff mentioned at 
http://docs.openstack.org/developer/nova/devref/hooks.html )? My first idea is 
to intercept the tenant creation call (as it's being done with climate-nova) 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Sean Dague
On 02/25/2014 08:17 AM, Ken'ichi Ohmichi wrote:
 2014-02-25 19:48 GMT+09:00 Thierry Carrez thie...@openstack.org:
 Sean Dague wrote:
 So, that begs a new approach. Because I think at this point even if we
 did put out Nova v3, there can never be a v4. It's too much, too big,
 and doesn't fit in the incremental nature of the project. So whatever
 gets decided about v3, the thing that's important to me is a sane way to
 be able to add backwards compatible changes (which we actually don't
 have today, and I don't think any other service in OpenStack does
 either), as well a mechanism for deprecating parts of the API. With some
 future decision about whether removing them makes sense.

 I agree with Sean. Whatever solution we pick, we need to make sure it's
 solid enough that it can handle further evolutions of the Nova API
 without repeating this dilemma tomorrow. V2 or V3, we would stick to it
 for the foreseeable future.

 Between the cleanup of the API, the drop of XML support, and including a
 sane mechanism for supporting further changes without major bumps of the
 API, we may have enough to technically justify v3 at this point. However
 from a user standpoint, given the surface of the API, it can't be
 deprecated fast -- so this ideal solution only works in a world with
 infinite maintenance resources.

 Keeping V2 forever is more like a trade-off, taking into account the
 available maintenance resources and the reality of Nova's API huge
 surface. It's less satisfying technically, especially if you're deeply
 aware of the API incoherent bits, and the prospect of living with some
 of this incoherence forever is not really appealing.
 
 What is the maintenance cost for keeping both APIs?
 I think Chris and his team have already paid most part of it, the
 works for porting
 the existing v2 APIs to v3 APIs is almost done.
 So I'd like to clarify the maintenance cost we are discussing.
 
 If the cost means that we should implement both API methods when creating a
 new API, how about implementing internal proxy from v2 to v3 API?
 When creating a new API, it is enough to implement API method for v3 API. and
 when receiving a v2 request, Nova translates it to v3 API.
 The request styles(url, body) of v2 and v3 are different and this idea makes 
 new
 v2 APIs v3 style. but now v2 API has already a lot of inconsistencies.
 so it does not seem so big problem.
 
 
 From the viewpoint of OpenStack interoperability also, I believe we
 need a new API.
 Many v2 API parameters are not validated. If implementing strict
 validation for v2 API,
 incompatibility issues happen. That is why we are implementing input
 validation for
 v3 API. If staying v2 API forever, we should have this kind of problem 
 forever.
 v2 API is fragile now. So the interoperability should depend on v2
 API, that seems
 sandbox.. (I know that it is a little overstatement, but we have found
 a lot of this kind
 of problem already..)

So I think this remains a good question about what keeping v2 forever
means. Because it does mean keeping the fact that we don't validate
input at the surface and depend on database specific errors to trickle
back up correctly. So if MySQL changes how it handles certain things,
you'll get different errors on the surface.

I'm going to go off on a non-sequitur for a minute, because I think it's
important to step back sometimes.

What I want out of Nova API at the end of the day:

1. a way to discover what the API is

because this massively simplifies writing clients, SDKs, tests, and
documentation. All those pipelines are terribly manual, and have errors
in them because of it. Like has been said before you actually need to
read the Nova source code to figure out how to use parts of the API.

I think this is a great example of that -
https://blog.heroku.com/archives/2014/1/8/json_schema_for_heroku_platform_api
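
To make that concrete, here is a toy sketch of what a machine-readable API
description buys you. The schema below is invented for illustration, and a
real implementation would use a full JSON Schema validator (e.g. the
jsonschema library) rather than this hand-rolled checker:

```python
# Toy machine-readable description of a (hypothetical) server-create request.
# A client, SDK generator, or doc pipeline could all consume this same dict.
SERVER_CREATE_SCHEMA = {
    "type": "object",
    "required": ["name", "flavorRef", "imageRef"],
    "properties": {
        "name": {"type": "string"},
        "flavorRef": {"type": "string"},
        "imageRef": {"type": "string"},
    },
}

_TYPES = {"object": dict, "string": str}

def validate(payload, schema):
    """Return a list of error strings (empty list == valid payload)."""
    errors = []
    if not isinstance(payload, _TYPES[schema["type"]]):
        return ["payload is not an object"]
    for key in schema.get("required", []):
        if key not in payload:
            errors.append("missing required property: %s" % key)
    for key, sub in schema.get("properties", {}).items():
        if key in payload and not isinstance(payload[key], _TYPES[sub["type"]]):
            errors.append("property %s is not a %s" % (key, sub["type"]))
    return errors

print(validate({"name": "vm1", "flavorRef": "1", "imageRef": "cirros"},
               SERVER_CREATE_SCHEMA))   # []  -> valid
print(validate({"name": 42}, SERVER_CREATE_SCHEMA))  # three errors
```

The point is that the schema itself is data: the same artifact drives input
validation on the server and client/doc generation, instead of the behaviour
living only in the source code.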

2. stop being optional

If we ever want interoperability between Nova implementations we need to
stop allowing the API to be optional. That means getting rid of
extensions. Content is either part of the Nova API, or it isn't in the
tree. Without this we'll never get an ecosystem around the API because
anything more complicated than basic server lifecycle is not guaranteed 
to exist in an OpenStack implementation.

Extensions thus far have largely just been used as a cheat to get around
API compatibility changes based on the theory that users could list
extensions to figure out what the API would look like. It's a bad
theory, and not even nova command line does this. So users will get
errors on nova cli with clouds because features aren't enabled, and the
user has no idea why their commands don't work. Because it's right there
in the nova help.

3. a solid validation surface

We really need to be far more defensive on our API validation surface.
Right 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Dan Smith
 I thought micro versioning was so we could make backwards compatible changes.
 If we make breaking changes we need to support the old and the new for
 a little while.

Adding a new field alongside an old one in a structure that we return is
not a breaking change to me, IMHO. We can clean up the datestamp formats
that we return in that way: return both.

For datestamps that the client passes (are there any of these?) we don't
have to honor both and do conflict resolution if they disagree, we just
honor the new one, clearly.

 For return values:
 * get new clients to send Accepts headers, to version the response
 * this amounts to the major version
 * for those request the new format, they get the new format
 * for those getting the old format, they get the old format

Yes, I think this is quite reasonable. I honestly think that in most
cases we can avoid *ever* breaking the original return format without
the code turning into a mess, and I think that's what we should shoot for.
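
A minimal sketch of the Accept-header idea above. The media type and field
names here are hypothetical (Nova ultimately went a different route); this
just shows the shape being discussed, where one handler serves both formats:

```python
import datetime

def serialize_server(server, accept):
    """Pick the response format from the Accept header string."""
    created = server["created"]
    if "vnd.openstack.compute+json;version=3" in accept:
        # New format: full ISO 8601 datestamp, microseconds included.
        body = {"name": server["name"],
                "created": created.isoformat() + "Z"}
    else:
        # Default: the old, v2-compatible datestamp (seconds precision).
        body = {"name": server["name"],
                "created": created.strftime("%Y-%m-%dT%H:%M:%SZ")}
    return body

srv = {"name": "vm1",
       "created": datetime.datetime(2014, 2, 25, 12, 0, 0, 500000)}
print(serialize_server(srv, "application/json"))
print(serialize_server(srv, "application/vnd.openstack.compute+json;version=3"))
```

Old clients keep getting the old format by default; only clients that opt in
via the header see the new one, so nothing breaks.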

 We could port the V2 classes over to the V3 code, to get the code benefits.

Yes.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Dan Smith
 I think we need to find an alternative way to support the new and old
 formats, like Accepts Headers, and retro-fitting a version to
 extensions so we can easily advertise new attributes, to those parsers
 that will break when they encounter those kinds of things.

Agreed.

 Now I am tempted to say we morph the V3 code to also produce the V2
 responses. And change the v3 API, so thats easier to do, and easier
 for clients to move (like don't change URLs unless we really have to).
 I know the risk for screwing that up is enormous, but maybe that makes
 the most sense?

It seems far easier to port the architectural awesomeness of v3 to v2 in
terms of code organization (which can be done without altering the
format), and then start extending v2 to support new formats that we
want. Trying to take a thing with a thousand small changes and add
support to optionally not send those small changes seems harder to me
than adding the important ones into v2. It will also help us revisit
what changes we want to make, and hopefully we would reconsider taking
on the versioning pain of a bunch of CamelCase changes :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Dina Belova
I guess that's simple, and that's why it's a nice solution for this problem.

So you propose to implement that feature in the following way:
1) mark the project as 'reservable' during its creation in the extra specs
2) add some more logic to reservable resource creation/reservation, like
adding one more check to the instance creation request. Currently we're
checking hints in the request; you propose to check the project's extra info,
and if the project is 'reservable', use something like the default_reservation
stuff for instances

Although it looks ok (because there are no changes to Keystone/Nova/etc. core
code), I have a question about this solution:
- info about a project should really be given only to admins. But all these
VMs will be booted by simple users, am I right? In that case you won't be
able to get the project info to do the check.

Do you have any ideas about how to solve this problem?

Dina



On Tue, Feb 25, 2014 at 6:22 PM, Martinez, Christian 
christian.marti...@intel.com wrote:

 Yeah, that could be a good approach.
 Just add some extra info at tenant creation. Then, in climate-nova,
 when a resource is created, check by the tenant id whether that tenant has
 a lease param (or something like that). If it does, act accordingly.
 Does that make sense? Dina, Sylvain?


 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: Thursday, February 20, 2014 4:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design

 I agree with Bauza that the main purpose of Climate is to reserve
 resources, and in the case of keystone it should reserve tenant, users,
 domains, etc.

 So, it could be possible that climate is not the module in which the
 tenant lease information should be saved. As stated in the use case, the
 only purpose of this BP is to allow the creation of tenants with start and
 end dates. Then when creating resources in that tenant (like VMs) climate
 could take lease information from the tenant itself and create actual
 leases for the VMs.

 Any thoughts on this?

 From: Sylvain Bauza sylvain.ba...@gmail.commailto:
 sylvain.ba...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 
 Date: jueves, 20 de febrero de 2014 15:57
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design




 2014-02-20 19:32 GMT+01:00 Dina Belova dbel...@mirantis.commailto:
 dbel...@mirantis.com:
 Sylvain, as I understand in BP description, Christian is about not exactly
 reserving tenants itself like we actually do with VMs/hosts - it's just
 naming for that. I think he is about two moments:

 1) mark some tenants as needed to be reserved - speaking about resources
 assigned to it
 2) reserve these resources via Climate (VMs for first approximation)


 Well, I understood your BP; it's Christian's message that was a bit
 confusing.
 Speaking of marking a tenant as reserved would then mean that it has some
 kind of priority vs. another tenant. But again, as said, how could you
 ensure at marking time (i.e. at lease creation) that Climate can honor
 contracts for resources that haven't been explicitly defined?

 I suppose Christian is speaking now about hacking tenants creation process
 to mark them as needed to be reserved (1st step).


 Again, a lease is mutually and exclusively linked with explicit resources.
 If you say 'create a lease' without saying for what resources, I don't
 see the interest in Climate, unless I missed something obvious.

 -Sylvain
 Christian, correct me if I'm wrong, please Waiting for your comments


 On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza sylvain.ba...@gmail.com
 mailto:sylvain.ba...@gmail.com wrote:
 Hi Christian,

 2014-02-20 18:10 GMT+01:00 Martinez, Christian 
 christian.marti...@intel.commailto:christian.marti...@intel.com:

 Hello all,
 I'm working in the following BP:
 https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept,
 in which the idea is to have the possibility to create special tenants
 that have a lease for all of its associated resources.

 The BP is in discussing phase and we were having conversations on IRC
 about what approach should we follow.


 Before speaking about implementation, I would definitely like to know the
 use cases you want to address.
 What kind of resources do you want to provision using Climate? The basic
 thing is, what is the rationale for hooking tenant creation?
 Could you please be more explicit?

 At tenant creation, Climate wouldn't have any information for calculating
 the resources asked for, because the resources wouldn't have been
 allocated yet. So, generating a lease on top of this would be like a
 

Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sylvain Bauza
2014-02-25 15:38 GMT+01:00 Dina Belova dbel...@mirantis.com:

 I guess that's simple and that's why nice solution for this problem.

 So you propose to implement that feature in following way:
 1) mark project as 'reservable' during its creation in extras specs
 2) add some more logic to reservable resources creation/reservation. Like
 adding one more checking in instance creation request. Currently we're
 checking hints in request, you propose to check project extra info and
 if project is 'reservable' you'll use smth like default_reservation stuff
 for instances

 Although it looks ok (because of no changes to Keystone/Nova/etc. core
 code), I have some question about this solution:
 - info about project should be given only to admins, really. But all these
 VMs will be booted by simple users, am I right? In this case you'll have no
 possibility to get info about project and to process checking.

 Do you have some ideas about how to solve this problem?

 Dina



Why should this require being part of Keystone to hook into Climate?
Provided we consider some projects as 'reservable', we could say this
should be a Climate API endpoint like CRUD /project/, and it's up to the
admin's responsibility to populate it.

If we say that new projects should automatically be 'reservable', that's
only a policy for Climate to whitelist these.

Provided a VM is booted by a single end-user, it would still be Climate's
responsibility to verify that the user's tenant has been previously granted.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Dina Belova

 Why should it require to be part of Keystone to hook up on Climate ?


Sorry, can't get your point.

Provided we consider some projects as 'reservable', we could say this
 should be a Climate API endpoint like CRUD /project/ and up to the admin
 responsability to populate it.
 If we say that new projects should automatically be 'reservable', that's
 only policy from Climate to whiteboard these.


So you propose to make some API requests to Climate (like for hosts) and
mark some already existing projects as reserved. But how will we automate
the reservation process for resources belonging to that tenant? Or do you
still propose to add some checks to, for example, the climate-nova
extensions?

Thanks


On Tue, Feb 25, 2014 at 6:48 PM, Sylvain Bauza sylvain.ba...@gmail.comwrote:




 2014-02-25 15:38 GMT+01:00 Dina Belova dbel...@mirantis.com:

 I guess that's simple and that's why nice solution for this problem.

 So you propose to implement that feature in following way:
 1) mark project as 'reservable' during its creation in extras specs
 2) add some more logic to reservable resources creation/reservation. Like
 adding one more checking in instance creation request. Currently we're
 checking hints in request, you propose to check project extra info and
 if project is 'reservable' you'll use smth like default_reservation stuff
 for instances

 Although it looks ok (because of no changes to Keystone/Nova/etc. core
 code), I have some question about this solution:
 - info about project should be given only to admins, really. But all
 these VMs will be booted by simple users, am I right? In this case you'll
 have no possibility to get info about project and to process checking.

 Do you have some ideas about how to solve this problem?

 Dina



 Why should it require to be part of Keystone to hook up on Climate ?
 Provided we consider some projects as 'reservable', we could say this
 should be a Climate API endpoint like CRUD /project/ and up to the admin
 responsability to populate it.

 If we say that new projects should automatically be 'reservable', that's
 only policy from Climate to whiteboard these.

 Provided a VM is booted by a single end-user, that would still be
 Climate's responsability to verify that the user's tenant has been
 previously granted.

 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-v meeting

2014-02-25 Thread Peter Pouliot
Hi everyone,

I'm traveling this week and we'll need to cancel the meeting for today.

P
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sanchez, Cristian A

Provided we consider some projects as 'reservable', we could say this should be 
a Climate API endpoint like CRUD /project/ and up to the admin responsability 
to populate it.
If we say that new projects should automatically be 'reservable', that's only 
policy from Climate to whiteboard these.

So you propose to make some API requests to Climate (like for hosts) and mark 
some already existing projects as reserved. But how we'll automate process of 
some resource reservation belonging to that tenant? Or do you propose still to 
add some checkings to, for example, climate-nova extensions to check this 
somehow there?

I guess that even with this approach, the reservation creation process will 
still check for the existence of ‘lease’ information in the project, and create 
the lease accordingly.


From: Dina Belova dbel...@mirantis.commailto:dbel...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: martes, 25 de febrero de 2014 12:25
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design

Why should it require to be part of Keystone to hook up on Climate ?

Sorry, can't get your point.

Provided we consider some projects as 'reservable', we could say this should be 
a Climate API endpoint like CRUD /project/ and up to the admin responsability 
to populate it.
If we say that new projects should automatically be 'reservable', that's only 
policy from Climate to whiteboard these.

So you propose to make some API requests to Climate (like for hosts) and mark 
some already existing projects as reserved. But how we'll automate process of 
some resource reservation belonging to that tenant? Or do you propose still to 
add some checkings to, for example, climate-nova extensions to check this 
somehow there?



Thanks


On Tue, Feb 25, 2014 at 6:48 PM, Sylvain Bauza 
sylvain.ba...@gmail.commailto:sylvain.ba...@gmail.com wrote:



2014-02-25 15:38 GMT+01:00 Dina Belova 
dbel...@mirantis.commailto:dbel...@mirantis.com:

I guess that's simple and that's why nice solution for this problem.

So you propose to implement that feature in following way:
1) mark project as 'reservable' during its creation in extras specs
2) add some more logic to reservable resources creation/reservation. Like 
adding one more checking in instance creation request. Currently we're checking 
hints in request, you propose to check project extra info and if project is 
'reservable' you'll use smth like default_reservation stuff for instances

Although it looks ok (because of no changes to Keystone/Nova/etc. core code), I 
have some question about this solution:
- info about project should be given only to admins, really. But all these VMs 
will be booted by simple users, am I right? In this case you'll have no 
possibility to get info about project and to process checking.

Do you have some ideas about how to solve this problem?

Dina



Why should it require to be part of Keystone to hook up on Climate ?
Provided we consider some projects as 'reservable', we could say this should be 
a Climate API endpoint like CRUD /project/ and up to the admin responsability 
to populate it.

If we say that new projects should automatically be 'reservable', that's only 
policy from Climate to whiteboard these.

Provided a VM is booted by a single end-user, that would still be Climate's 
responsability to verify that the user's tenant has been previously granted.

-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] why doesn't _rollback_live_migration() always call rollback_live_migration_at_destination()?

2014-02-25 Thread Chris Friesen

On 02/25/2014 05:15 AM, John Garbutt wrote:

On 24 February 2014 22:14, Chris Friesen chris.frie...@windriver.com wrote:




What happens if we have a shared-storage instance that we try to migrate and
fail and end up rolling back?  Are we going to end up with messed-up
networking on the destination host because we never actually cleaned it up?


I had some WIP code up to clean that up, as part of the move to
conductor; it's massively confusing right now.

Looks like a bug to me.

I suspect the real issue is that some parts of:
self.driver.rollback_live_migration_at_destination(context, instance,
 network_info, block_device_info)
Need more information about if there is shared storage being used or not.


What's the timeframe on the move to conductor?

I'm looking at fixing up the resource tracking over a live migration 
(currently we just rely on the audit fixing things up whenever it gets 
around to running) but to make that work properly I need to 
unconditionally run rollback code on the destination.
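
A hypothetical sketch of that: run the destination rollback unconditionally,
and let a shared-storage flag decide how much gets cleaned up. All names here
are illustrative, not the real nova driver API:

```python
def rollback_at_destination(instance, is_shared_storage):
    """Always-safe rollback on the destination host (illustrative only)."""
    actions = []
    # Network plumbing on the destination is host-local, so tearing it down
    # is safe whether or not the migration used shared storage.
    actions.append("teardown network for %s" % instance)
    if not is_shared_storage:
        # Only delete disks when they are local copies; on shared storage
        # the source host still owns the (one and only) disk images.
        actions.append("delete local disks for %s" % instance)
    return actions

print(rollback_at_destination("vm1", is_shared_storage=True))
print(rollback_at_destination("vm1", is_shared_storage=False))
```

The key point is that the shared-storage decision moves inside the rollback,
so the caller can invoke it unconditionally and resource tracking stays
consistent either way.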


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sylvain Bauza
2014-02-25 16:25 GMT+01:00 Dina Belova dbel...@mirantis.com:

 Why should it require to be part of Keystone to hook up on Climate ?


 Sorry, can't get your point.



I'm just asking why we should hack the Keystone workflow by adding a hook,
like we did for Nova. From my POV, that's not worth it.



 Provided we consider some projects as 'reservable', we could say this
 should be a Climate API endpoint like CRUD /project/ and up to the admin
 responsability to populate it.
 If we say that new projects should automatically be 'reservable', that's
 only policy from Climate to whiteboard these.


 So you propose to make some API requests to Climate (like for hosts) and
 mark some already existing projects as reserved. But how we'll automate
 process of some resource reservation belonging to that tenant? Or do you
 propose still to add some checkings to, for example, climate-nova
 extensions to check this somehow there?

 Thanks



I think it should be a Climate policy (be careful, the name is confusing): if
the admin wants to grant a new project for reservations, he should place
a call to Climate. It's then up to Climate-Nova (i.e. the Nova extension) to
query Climate in order to see whether the project has been granted or not.

Conceptually, this 'reservation' information is tied to Climate and should
not be present within the projects.
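
As an illustrative sketch of that flow (all names hypothetical), the
Climate-Nova extension would ask Climate whether the project was granted and
only then attach a default reservation to the boot request:

```python
# Stand-in for Climate's project registry; real code would hit a Climate API
# endpoint (e.g. something like GET /projects/<id>) instead of a local set.
RESERVABLE_PROJECTS = {"project-a"}

def climate_project_granted(project_id):
    """Would be an HTTP call to Climate in a real deployment."""
    return project_id in RESERVABLE_PROJECTS

def pre_boot_hook(project_id, boot_request):
    """Climate-Nova extension point: decorate the boot request if granted."""
    if climate_project_granted(project_id):
        hints = boot_request.setdefault("scheduler_hints", {})
        hints["reservation"] = "default-lease"
    return boot_request

print(pre_boot_hook("project-a", {}))  # gains a default reservation hint
print(pre_boot_hook("project-b", {}))  # passes through unchanged
```

This keeps the 'reservable' knowledge entirely inside Climate, matching the
point above that it should not live in the Keystone project itself.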

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread Sławek Kapłoński

Hello,

I have a question for you guys. Can someone explain to me (or send a link 
to such an explanation) how exactly the ML2 plugin running on the neutron 
server communicates with the openvswitch agents on the compute hosts? I 
suppose this works via rabbitmq queues, but I need to add my own function 
that will be called in the agent, and I don't know how to do that. It would 
be perfect if that were possible by writing, for example, a new mechanism 
driver for the ML2 plugin (but how?). 
Thanks in advance for any help :)
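
For what it's worth, the general shape is: the plugin side casts RPC messages
on fanout topics over rabbitmq, and each agent registers callback endpoints
that dispatch on the method name. A toy sketch of that dispatch pattern (no
real oslo.messaging or RabbitMQ here; all names are illustrative):

```python
class FakeFanoutBus:
    """Stands in for an oslo.messaging fanout topic on rabbitmq."""
    def __init__(self):
        self.endpoints = []          # one endpoint per agent in real life

    def register(self, endpoint):
        self.endpoints.append(endpoint)

    def cast(self, method, **kwargs):
        # Fanout: every registered agent receives and dispatches the message.
        for ep in self.endpoints:
            getattr(ep, method)(**kwargs)

class OVSAgentEndpoint:
    """Stands in for the openvswitch agent's RPC callback class."""
    def __init__(self):
        self.calls = []

    def my_custom_method(self, port_id):
        # e.g. reprogram flows for this port on the local br-int
        self.calls.append(port_id)

bus = FakeFanoutBus()
agent = OVSAgentEndpoint()
bus.register(agent)

# A custom mechanism driver would issue the cast from one of its postcommit
# hooks (e.g. update_port_postcommit) on the plugin side:
bus.cast("my_custom_method", port_id="port-1")
print(agent.calls)   # ['port-1']
```

So adding your own function means two halves: a method on the agent's RPC
callback class, and a cast to it from your mechanism driver on the server.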


--
Best regards
Slawek Kaplonski
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack Error

2014-02-25 Thread Ben Nemec
 

On 2014-02-25 08:19, trinath.soman...@freescale.com wrote: 

 Hi Stackers- 
 
 When I configured Jenkins to run the Sandbox tempest testing, While devstack 
 is running, 
 
 I have seen error 
 
 ERROR: Invalid Openstack Nova credentials 
 
 and another error 
 
 ERROR: HTTPConnection Pool(host='127.0.0.1', port=8774): Max retries 
 exceeded with url: /v2/91dd…. (caused by class 'socket.error': [Errno 111] 
 Connection refused) 
 
 I feel devstack automates the openstack environment. 
 
 Kindly guide me resolve the issue. 
 
 Thanks in advance. 
 
 -- 
 
 Trinath Somanchi - B39208 
 
 trinath.soman...@freescale.com | extn: 4048

Those are both symptoms of an underlying problem. It sounds like a
service didn't start or wasn't configured correctly, but it's impossible
to say for sure what went wrong based on this information. 
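
"Connection refused" on 127.0.0.1:8774 usually means nova-api never came up.
A quick diagnostic sketch to see which of the usual endpoints are actually
listening (the port numbers are the common devstack defaults; adjust as
needed for your configuration):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Typical devstack service ports (assumed defaults, not exhaustive).
for name, port in [("keystone", 5000), ("nova-api", 8774), ("glance", 9292)]:
    state = "listening" if port_open("127.0.0.1", port) else "NOT listening"
    print("%-10s :%d %s" % (name, port, state))
```

Whichever service is not listening is the one whose screen/log output to
read first.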

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] can't set rules in the common policy

2014-02-25 Thread Ben Nemec
 

On 2014-02-25 02:00, Tian, Shuangtai wrote: 

 Hi, Stackers 
 
 When I init a Enforcer class with rules, I find the rules will rewrite by 
 configure policy rules, because the policy file is modified , 
 
 the load rules always try to load the rules from the cache or configure file 
 when checks the policy in the enforce function, and 
 
 force to rewrite the rules always using the configure policy. 
 
 I think this problem also exists when we use the set_rules to set rules 
 before we use the enforce to load rules in the first time. 
 
 Anyone also meets this problem, or if the way I used is wrong? I proposed a 
 patch to this problem : https://review.openstack.org/#/c/72848/ [1] 
 
 Best regards, 
 
 Tian, Shuangtai

 I don't think you're doing anything wrong. You can see I worked around
the same issue in the test cases when I was working on the Oslo parallel
testing:
https://review.openstack.org/#/c/70483/1/tests/unit/test_policy.py

Your proposed change looks reasonable to me. I'd probably like to see it
used to remove some of the less pleasant parts of my change, but I'll
leave detailed feedback on the review.
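
For reference, here is a toy model of the clobbering behaviour under
discussion. This is not the real oslo code, just an illustration of why an
overwrite flag on rule loading helps:

```python
class ToyEnforcer:
    """Simplified model of a policy enforcer with a rule cache."""
    def __init__(self, rules=None, overwrite=True):
        self.rules = dict(rules or {})
        self.overwrite = overwrite

    def load_rules_from_file(self, file_rules):
        if self.overwrite:
            # Pre-fix behaviour: constructor/set_rules rules are lost
            # every time the policy file is (re)loaded.
            self.rules = dict(file_rules)
        else:
            # Post-fix behaviour: merge, keeping explicitly set rules.
            for key, value in file_rules.items():
                self.rules.setdefault(key, value)

    def enforce(self, rule, file_rules):
        # enforce() (re)loads from the policy file on every check, which is
        # exactly where explicitly set rules get clobbered.
        self.load_rules_from_file(file_rules)
        return self.rules.get(rule, False)

e = ToyEnforcer(rules={"start_instance": True}, overwrite=True)
print(e.enforce("start_instance", file_rules={}))   # False: rule clobbered

e = ToyEnforcer(rules={"start_instance": True}, overwrite=False)
print(e.enforce("start_instance", file_rules={}))   # True: rule survives
```
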

Thanks!

-Ben

 

Links:
--
[1] https://review.openstack.org/#/c/72848/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack Error

2014-02-25 Thread Denis Makogon
Hi, Trinath.

The ideal solution is to rebuild your dev environment.

But for future discussions and questions, please use the #openstack-dev IRC
channel to ask any questions about setting up a development environment
(devstack).

Best regards,
Denis Makogon.




On Tue, Feb 25, 2014 at 5:53 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-02-25 08:19, trinath.soman...@freescale.com wrote:

  Hi Stackers-

 When I configured Jenkins to run the Sandbox tempest testing, While
 devstack is running,

 I have seen error

 ERROR: Invalid Openstack Nova credentials

 and another error

 ERROR: HTTPConnection Pool(host='127.0.0.1', port=8774): Max retries
 exceeded with url: /v2/91dd (caused by class 'socket.error': [Errno 111]
 Connection refused)

 I feel devstack automates the openstack environment.

 Kindly guide me resolve the issue.

 Thanks in advance.

 --

 Trinath Somanchi - B39208

 trinath.soman...@freescale.com | extn: 4048

  Those are both symptoms of an underlying problem.  It sounds like a
 service didn't start or wasn't configured correctly, but it's impossible to
 say for sure what went wrong based on this information.

 -Ben






Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-25 Thread yunhong jiang
On Tue, 2014-02-25 at 03:05 +, Liuji (Jeremy) wrote:
 Now that USB devices are so widely used in private/hybrid clouds (e.g. as
 USB keys), and there are no technical issues in libvirt/qemu,
 I think this is a valuable feature in OpenStack.

USB key is an interesting scenario. I assume the USB key is just for
some specific VM; I wonder how the admin/user knows which USB disk goes
to which VM?

--jyh




Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-25 Thread yunhong jiang
On Tue, 2014-02-25 at 10:45 +, John Garbutt wrote:
 
 As a heads up, the overheads of DB calls turned out to dwarf any
 algorithmic improvements I managed. There will clearly be some RPC
 overhead, but it didn't stand out as much as the DB issue.
 
 The move to conductor work should certainly stop the scheduler making
 those pesky DB calls to update the nova instance. And then,
 improvements like no-db-scheduler and improvements to scheduling
 algorithms should shine through much more.
 
Although DB access is surely the key factor for performance, do we really
want to pursue a conductor-based scheduler?

--jyh




Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Dina Belova
Ok, so

 I'm just asking why we should hack the Keystone workflow by adding a hook,
like we did for Nova. From my POV, that's not worth it.

The idea was about some extra specs that would be processed by Climate anyway.
Keystone will know nothing about reservations or anything of the sort.

 I think it should be a Climate policy (be careful, the name is
confusing) : if admin wants to grant any new project for reservations, he
should place a call to Climate. That's up to Climate-Nova (ie. Nova
extension) to query Climate in order to see if project has been granted or
not.

Now I think that it'll be better, yes.
I see some workflow like:

1) Mark project as reservable in Climate
2) When a resource is created (like a Nova instance), it should be checked
(in the API extensions, for example) via Climate whether the project is
reservable. If it is, and no special reservation flags are passed, the
default_reservation settings should be used for this instance
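A minimal sketch of that two-step workflow (a climate-side registry plus a check in the create path); the function names and the in-memory set are hypothetical stand-ins for real Climate/Nova APIs, not actual code from either project:

```python
# Sketch of the two-step workflow above. RESERVABLE_PROJECTS stands in
# for state held by Climate; on_instance_create stands in for a Nova API
# extension asking Climate before booting. All names are illustrative.

RESERVABLE_PROJECTS = set()


def mark_reservable(project_id):
    # Step 1: admin call to Climate marking the project as reservable.
    RESERVABLE_PROJECTS.add(project_id)


def on_instance_create(project_id, reservation_flags=None):
    # Step 2: when a resource is created, check via Climate whether the
    # project is reservable; with no explicit flags, fall back to the
    # default_reservation behaviour.
    if project_id in RESERVABLE_PROJECTS and not reservation_flags:
        return {'reservation': 'default_reservation'}
    return {'reservation': reservation_flags}


mark_reservable('demo')
print(on_instance_create('demo'))  # -> {'reservation': 'default_reservation'}
```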

Sylvain, is that the idea you're talking about?

Dina



On Tue, Feb 25, 2014 at 7:53 PM, Sylvain Bauza sylvain.ba...@gmail.com wrote:




 2014-02-25 16:25 GMT+01:00 Dina Belova dbel...@mirantis.com:

  Why should it require to be part of Keystone to hook up on Climate ?


 Sorry, can't get your point.



  I'm just asking why we should hack the Keystone workflow by adding a hook,
 like we did for Nova. From my POV, that's not worth it.



 Provided we consider some projects as 'reservable', we could say this
  should be a Climate API endpoint like CRUD /project/ and up to the admin's
  responsibility to populate it.
  If we say that new projects should automatically be 'reservable', that's
  only a Climate policy decision to whitelist these.


 So you propose to make some API requests to Climate (like for hosts) and
 mark some already existing projects as reservable. But how will we automate
 the process of reserving resources belonging to that tenant? Or do you still
 propose to add some checks to, for example, the climate-nova extensions to
 verify this there?

 Thanks



 I think it should be a Climate policy (be careful, the name is
 confusing) : if admin wants to grant any new project for reservations, he
 should place a call to Climate. That's up to Climate-Nova (ie. Nova
 extension) to query Climate in order to see if project has been granted or
 not.

 Conceptually, this 'reservation' information is tied to Climate and should
 not be present within the projects.

 -Sylvain





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


[openstack-dev] GSoC 2014

2014-02-25 Thread Andrew Chul
Hi, guys! My name is Andrew Chul, and I'm from Russia. I graduated from the
National Research University Moscow Power Engineering Institute a few years
ago, and then began postgraduate studies at Smolensk University of
Humanities.

The application period for participating in projects is approaching, and I'm
looking forward to March 10th. I've seen your project in the list of
organizations which will take part in Google Summer of Code 2014, and I have
to say it immediately caught my interest. Why? I have dreamed about such a
project. I'm very interested in areas such as machine learning and
artificial intelligence. Primarily I'm a PHP developer, but I am actively
developing my Python skills.

So, March 10th is coming soon, and I will file an application to participate
in your project. I hope that I will be able to work side by side with you on
such an interesting and instructive project. Thank you for your attention.

-- 
Best regards, Andrew Chul.


Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread trinath.soman...@freescale.com
Hi

Hope this helps

http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron

___

Trinath Somanchi

_
From: Sławek Kapłoński [sla...@kaplonski.pl]
Sent: Tuesday, February 25, 2014 9:24 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Neutron ML2 and openvswitch agent

Hello,

I have a question for you guys. Can someone explain to me (or send a link
with such an explanation) how exactly the ML2 plugin running on the neutron
server communicates with the openvswitch agents on the compute hosts? I
suppose that this works via rabbitmq queues, but I need to add my own
function which will be called in this agent and I don't know how to do that.
It would be perfect if such a thing were possible by writing, for example, a
new mechanism driver for the ML2 plugin (but how?).
Thanks in advance for any help :)

--
Best regards
Slawek Kaplonski
sla...@kaplonski.pl






[openstack-dev] [Congress] IRC meeting moved

2014-02-25 Thread Tim Hinrichs
Hi all,

Last week we moved the Congress IRC meeting time to every other Tuesday at 
17:00 UTC in #openstack-meeting-3 (that's an hour earlier than it was 
previously).  But we neglected to mail out the new time, and it doesn't look 
like anyone remembered the time change.  So I'll hang around at both our old 
and new time slots this week.

Tim



Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread Assaf Muller


- Original Message -
 Hi
 
 Hope this helps
 
 http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
 
 ___
 
 Trinath Somanchi
 
 _
 From: Sławek Kapłoński [sla...@kaplonski.pl]
 Sent: Tuesday, February 25, 2014 9:24 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Neutron ML2 and openvswitch agent
 
 Hello,
 
 I have question to You guys. Can someone explain me (or send to link
 with such explanation) how exactly ML2 plugin which is working on
 neutron server is communicating with compute hosts with openvswitch
 agents?

Maybe this will set you on your way:
ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then uses
ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with the topic
stated in that file.

When the message is received by the OVS agent, it calls:
neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_update.
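To make that flow concrete, here is a self-contained sketch of the plugin-to-agent notification pattern. The class and topic names mirror the real ones for readability, but this is an in-memory stand-in (a dict of callbacks instead of RabbitMQ), not neutron code:

```python
# Sketch of the ML2 plugin -> agent notification pattern described above.
# The real code lives in ml2/rpc.py and the OVS agent; here an in-memory
# "bus" stands in for the AMQP broker, and all names are illustrative.

class FakeMessageBus:
    """Stands in for RabbitMQ: topic -> list of subscriber callbacks."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def fanout_cast(self, topic, method, **kwargs):
        # Deliver to every agent listening on the topic (fanout semantics).
        for cb in self._subscribers.get(topic, []):
            cb(method, kwargs)


class AgentNotifier:
    """Plugin side: publishes port updates, like AgentNotifierApi.port_update."""
    def __init__(self, bus, topic='q-agent-notifier-port-update'):
        self.bus = bus
        self.topic = topic

    def port_update(self, port):
        self.bus.fanout_cast(self.topic, 'port_update', port=port)


class FakeOVSAgent:
    """Agent side: receives the cast, like OVSNeutronAgent.port_update."""
    def __init__(self, bus, topic='q-agent-notifier-port-update'):
        self.updated_ports = []
        bus.subscribe(topic, self._dispatch)

    def _dispatch(self, method, kwargs):
        getattr(self, method)(**kwargs)

    def port_update(self, port):
        self.updated_ports.append(port['id'])


bus = FakeMessageBus()
agent = FakeOVSAgent(bus)
AgentNotifier(bus).port_update({'id': 'port-1', 'admin_state_up': True})
print(agent.updated_ports)  # -> ['port-1']
```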

 I suppose that this is working with rabbitmq queues but I need
 to add own function which will be called in this agent and I don't know
 how to do that. It would be perfect if such think will be possible with
 writing for example new mechanical driver in ML2 plugin (but how?).
 Thanks in advance for any help from You :)
 
 --
 Best regards
 Slawek Kaplonski
 sla...@kaplonski.pl
 
 
 
 




[openstack-dev] [third-party-ci] Proposing a regular workshop/meeting to help folks set up CI environments

2014-02-25 Thread Jay Pipes
Hi Stackers,

I've been contacted by a number of folks with questions about setting up
a third-party CI system, and while I'm very happy to help anyone who
contacts me, I figured it would be a good idea to have a regular meeting
on Google Hangouts that would be used as a Q&A session or workshop for
folks struggling to set up their own environments.

I think Google Hangouts are ideal because we can share our screens (yes,
even on Linux systems) and get real-time feedback to the folks who have
questions.

I propose we have the first weekly meeting this coming Monday, March
3rd, at 10:00 EST (07:00 PST, 15:00 UTC).

I created a Google Hangout event here:

http://bit.ly/1cLVnkv

Feel free to sign up for the event by selecting "Yes" in the "Are you
going?" dropdown.

If Google Hangouts works well for this first week, we'll use it again.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Can somebody describe the all the rolls about networks' admin_state_up

2014-02-25 Thread Édouard Thuleau
A thread [1] was also initiated on the ML by Sylvain, but there have been no
answers/comments so far.

[1] http://openstack.markmail.org/thread/qy6ikldtq2o4imzl

Édouard.


On Mon, Feb 24, 2014 at 9:35 AM, 黎林果 lilinguo8...@gmail.com wrote:

 Thank you very much.

 IMHO when admin_state_up is false that entity should be down, meaning
 network should be down.
 otherwise what is the usage of admin_state_up? The same is true for port
 admin_state_up.

 Is it like a switch's power button?

 2014-02-24 16:03 GMT+08:00 Assaf Muller amul...@redhat.com:
 
 
  - Original Message -
  Hi,
 
  I want to know the admin_state_up attribute about networks but I
  have not found any describes.
 
  Can you help me to understand it? Thank you very much.
 
 
  There's a discussion about this in this bug [1].
  From what I gather, nobody knows what admin_state_up is actually supposed
  to do with respect to networks.
 
  [1] https://bugs.launchpad.net/neutron/+bug/1237807
 
 
  Regards,
 
  Lee Li
 
 
 




Re: [openstack-dev] [gantt] scheduler sub-group meeting tomorrow (2/25)

2014-02-25 Thread Dugger, Donald D
Sylvain-

Good point and, since you drove the discussion, we did talk about it.  For 
those that weren't there on IRC the log is at:

http://eavesdrop.openstack.org/meetings/gantt/2014/gantt.2014-02-25-15.00.log.html

and the etherpad where we are collecting the BPs (don't be daunted by the size 
of the etherpad, the good stuff is at the bottom) is at:

https://etherpad.openstack.org/p/icehouse-external-scheduler

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Tuesday, February 25, 2014 6:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [gantt] scheduler sub-group meeting tomorrow (2/25)

Hi Don,

Maybe it would be worth discussing how we could share the blueprints with
people willing to help?

-Sylvain

2014-02-24 18:08 GMT+01:00 Dugger, Donald D donald.d.dug...@intel.com:
All-

I'm tempted to cancel the gantt meeting for tomorrow.  The only topics I have 
are the no-db scheduler update (we can probably do that via email) and the 
gantt code forklift (I've been out with the flu and there's no progress on 
that).

I'm willing to chair but I'd like to have some specific topics to talk about.

Suggestions anyone?

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786





Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sanchez, Cristian A
+1 to Dina on the workflow

From: Dina Belova dbel...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, 25 February 2014 13:42
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design

Ok, so

 I'm just asking why we should hack the Keystone workflow by adding a hook,
 like we did for Nova. From my POV, that's not worth it.

The idea was about some extra specs that would be processed by Climate anyway.
Keystone will know nothing about reservations or anything of the sort.

 I think it should be a Climate policy (be careful, the name is confusing) 
 : if admin wants to grant any new project for reservations, he should place 
 a call to Climate. That's up to Climate-Nova (ie. Nova extension) to query 
 Climate in order to see if project has been granted or not.

Now I think that it'll be better, yes.
I see some workflow like:

1) Mark project as reservable in Climate
2) When a resource is created (like a Nova instance), it should be checked (in
the API extensions, for example) via Climate whether the project is reservable.
If it is, and no special reservation flags are passed, the default_reservation
settings should be used for this instance

Sylvain, is that the idea you're talking about?

Dina



On Tue, Feb 25, 2014 at 7:53 PM, Sylvain Bauza sylvain.ba...@gmail.com wrote:



2014-02-25 16:25 GMT+01:00 Dina Belova dbel...@mirantis.com:

Why should it require to be part of Keystone to hook up on Climate ?

Sorry, can't get your point.



I'm just asking why we should hack the Keystone workflow by adding a hook, like we
did for Nova. From my POV, that's not worth it.


Provided we consider some projects as 'reservable', we could say this should be 
a Climate API endpoint like CRUD /project/ and up to the admin's responsibility
to populate it.
If we say that new projects should automatically be 'reservable', that's only 
policy from Climate to whiteboard these.

So you propose to make some API requests to Climate (like for hosts) and mark
some already existing projects as reservable. But how will we automate the
process of reserving resources belonging to that tenant? Or do you still
propose to add some checks to, for example, the climate-nova extensions to
verify this there?

Thanks



I think it should be a Climate policy (be careful, the name is confusing) : 
if admin wants to grant any new project for reservations, he should place a 
call to Climate. That's up to Climate-Nova (ie. Nova extension) to query 
Climate in order to see if project has been granted or not.

Conceptually, this 'reservation' information is tied to Climate and should not 
be present within the projects.

-Sylvain





--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.



Re: [openstack-dev] GSoC 2014

2014-02-25 Thread Davanum Srinivas
Andrew,

Please see some details regarding projects/ideas/mentors etc in our
wiki - https://wiki.openstack.org/wiki/GSoC2014 You can also talk to
some of us on #openstack-gsoc irc channel.

-- dims

On Tue, Feb 25, 2014 at 11:55 AM, Andrew Chul andymitr...@gmail.com wrote:
 Hi, guys! My name is Andrew Chul, I'm from Russia. I had graduated National
 Research University Moscow Power Engineering Institute a few years ago.
 And then I've started to getting post-graduated education in Smolensk
 University of Humanities.


 The time for filing of an application for participating in projects is
 coming and I'm looking forward of 10th March. I've seen your project in the
 list of organizations which will take part in Google Summer of Code 2014.
 And I need to say that my eyes exploded interest to your project. Why? I
 dreamed about such project. I'm very interesting in such areas, as machine
 learning, artificial intelligence. And, primarily, I'm developer on php, but
 active develop myself in Python.


 So, 10th March will coming soon and I will fill an application to
 participating in your project. I hope that I will be able to work side by
 side with you in such interesting and cognitive project. Thank you for
 attention.


 --
 Best regards, Andrew Chul.





-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Doug Hellmann
OK, that's not how that feature is meant to be used.

The idea is that on application startup plugins or extensions will be
loaded that configure the extra attributes for the class. That happens one
time, and the configuration does not depend on data that appears in the
request itself.

Doug
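To illustrate, the startup-time registration pattern could be sketched like this. This does not use WSME itself; a tiny stand-in type (all names invented) shows extensions registering extra attributes once, before any request is handled:

```python
# Sketch of "configure dynamic attributes once at startup". This is NOT
# WSME's API — DynamicType and Host are invented stand-ins; only the
# pattern (extensions register attributes before requests arrive) is
# the point.

class DynamicType:
    _extra_attrs = {}          # name -> type; shared registry for the sketch

    @classmethod
    def add_attributes(cls, **attrs):
        # Called once, at application startup, by plugins/extensions.
        cls._extra_attrs.update(attrs)

    def __init__(self, **kwargs):
        # Only attributes registered at startup are accepted/coerced.
        for name, typ in self._extra_attrs.items():
            if name in kwargs:
                setattr(self, name, typ(kwargs[name]))


class Host(DynamicType):
    pass


# Startup: an extension declares the extra attributes it needs...
Host.add_attributes(vcpus=int, region=str)

# ...so request handling later can rely on them being known already.
h = Host(vcpus='4', region='east')
print(h.vcpus, h.region)  # -> 4 east
```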


On Tue, Feb 25, 2014 at 9:07 AM, Sylvain Bauza sylvain.ba...@gmail.com wrote:

 Let me give you a bit of code then, that's currently WIP with heavy
 rewrites planned on the Controller side thanks to Pecan hooks [1]

 So, L102 (GET request) the convert() method is passing the result dict as
 kwargs, where the Host.__init__() method is adding dynamic attributes.
 That does work :-)

  L108, I'm specifying that my body string is basically a Host object. I can
  provide extra keys that I expect to become extra attributes, and WSME will
  then convert the body into a Host [2]; unfortunately, as the Host class
  doesn't yet know which extra attributes are allowed, none of my extra keys
  are taken.
  As a result, the 'host' (instance of Host) argument of the post() method
  does not contain the extra attributes, which are thus not passed for
  creation to my Manager.

 As said, I can still get the request body using Pecan directly within the
 post() method, but I then would have to manage the mimetype, and do the
 adding of the extra attributes there. That's pretty ugly IMHO.

 Thanks,
 -Sylvain

 [1] http://paste.openstack.org/show/69418/

 [2] https://github.com/stackforge/wsme/blob/master/wsmeext/pecan.py#L71


 2014-02-25 14:39 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:




  On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza sylvain.ba...@gmail.com wrote:

 Hi,

 Thanks to WSME 0.6, there is now possibility to add extra attributes to
 a Dynamic basetype.
 I successfully ended up showing my extra attributes from a dict to a
 DynamicType using add_attributes() but I'm now stuck with POST requests
 having dynamic body data.

 Although I'm declaring in wsexpose() my DynamicType, I can't say to WSME
 to map the pecan.request.body dict with my wsattrs and create new
 attributes if none matched.

 Any idea on how to do this ? I looked at WSME and the type is registered
 at API startup, not when being called, so the get_arg() method fails to
 fill in the gaps.

 I can possibly do a workaround within my post function, where I could
 introspect pecan.request.body and add extra attributes, so it sounds a bit
 crappy as I have to handle the mimetype already managed by WSME.


 I'm not sure I understand the question. Are you saying that the dynamic
 type feature works for GET arguments but not POST body content?

 Doug





 Thanks,
 -Sylvain











[openstack-dev] [Murano] Community meeting minutes - 02/25/2014

2014-02-25 Thread Alexander Tivelkov
Hi,

Thanks for joining murano weekly meeting.
Here are the meeting minutes and the logs:

http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-02-25-17.00.html
http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-02-25-17.00.log.html

See you next week!

--
Regards,
Alexander Tivelkov



Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Stephen Balukoff
Hi Eugene!

Responses inline:

On Tue, Feb 25, 2014 at 3:33 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 I'm really not sure what Mark McClain and some other folks see as
 implementation details. To me the 'instance' concept is as logical as the
 others (vips/pools/etc). But anyway, it looks like the majority of those
 who are discussing it see it as a redundant concept.


Maybe we should have a discussion around what qualifies as a 'logical
concept' or 'logical construct,' and why the 'loadbalancer' concept you've
been championing either does or does not qualify, so we're all (closer to
being) on the same page before we discuss model changes?



 Agreed; however, actual hardware is beyond the logical LBaaS API but could
 be a part of an admin LBaaS API.


Aah yes--  In my opinion, users should almost never be exposed to anything
that represents a specific piece of hardware, but cloud administrators must
be. The logical constructs the user is exposed to can come close to what
an actual piece of hardware is, but again, we should be abstract enough
that a cloud admin can swap out one piece of hardware for another without
affecting the user's workflow, application configuration, (hopefully)
availability, etc.

I recall you said previously that the concept of having an 'admin API' had
been discussed earlier, but I forget the resolution behind this (if there
was one). Maybe we should revisit this discussion?

I tend to think that if we acknowledge the need for an admin API, as well
as some of the core features it's going to need, and contrast this with the
user API (which I think is mostly what Jay and Mark McClain are rightly
concerned about), it'll start to become obvious which features belong
where, and what kind of data model will emerge which supports both APIs.


Thanks,
Stephen



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-25 Thread Ben Nemec

On 2014-02-24 14:51, Sean Dague wrote:

On 02/24/2014 03:10 PM, Ben Nemec wrote:

On 2014-02-21 17:09, Sean Dague wrote:

On 02/21/2014 05:28 PM, Clark Boylan wrote:

On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec openst...@nemebean.com
wrote:

On 2014-02-21 13:01, Mike Spreitzer wrote:

https://bugs.launchpad.net/devstack/+bug/1203680 is literally about
Glance but Nova has the same problem.  There is a fix released, but
just merging that fix accomplishes nothing --- we need people who run
DevStack to set the new variable (INSTALL_TESTONLY_PACKAGES).  This is
something that needs to be documented (in
http://devstack.org/configuration.html and all the places that tell
people how to do unit testing, for examples), so that people know to
do it, right?
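For readers looking for the concrete setting, enabling the variable named above is a one-line addition to devstack's configuration. The variable name is taken from this thread; the placement shown (localrc) is the conventional one, and should be double-checked against the devstack docs for your version:

```shell
# localrc (or the [[local|localrc]] section of local.conf in newer
# devstack): install the test-only packages discussed in this thread.
INSTALL_TESTONLY_PACKAGES=True
```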



IMHO, that should be enabled by default.  Every developer using
devstack is going to want to run unit tests at some point (or should
anyway...), and if the gate doesn't want the extra install time for
something like tempest that probably doesn't need these packages, then
it's much simpler to disable it in that one config instead of every
separate config used by every developer.

-Ben



I would be wary of relying on devstack to configure your unittest
environments. Just like it takes over the node you run it on, devstack
takes full ownership of the repos it clones and will do potentially
lossy things like `git reset --hard` when you don't expect it to. +1
to documenting the requirements for unittesting, not sure I would
include devstack in that documentation.


Agreed, I never run unit tests in the devstack tree. I run them on my
laptop or other non dedicated computers. That's why we do unit tests
in virtual envs, they don't need a full environment.

Also many of the unit tests can't be run when openstack services are
actually running, because they try to bind to ports that openstack
services use.

It's one of the reasons I've never considered that path a priority in
devstack.

-Sean



What is the point of devstack if we can't use it for development?


It builds you a consistent cloud.


Are we really telling people that they shouldn't be altering the code
in /opt/stack because it's owned by devstack, and devstack reserves the
right to blow it away any time it feels the urge?


Actually, I tell people that all that time. Most of them don't listen
to me. :)

Devstack defaults to RECLONE=False, but that tends to break people in
other ways (like having month old trees they are building against). But
the reality is I've watched tons of people have their work reset on
them because they were developing in /opt/stack, so I tell people don't
do that (and if they do it anyway, at least they realize it's dangerous).


How would you feel about doing a git stash before doing reclones?  
Granted, that still requires people to know that the changes were 
stashed, but at least if someone reclones, loses their changes, and 
freaks out on #openstack-dev or something we can tell them how to get 
the changes back. :-)





And if that's not what we're saying, aren't they going to want to run
unit tests before they push their changes from /opt/stack?  I don't
think it's reasonable to tell them that they have to copy their code
to another system to run unit tests on it.


Devstack can clone from alternate sources, and that's my approach on
anything long running. For instance, keeping trees in ~/code/ and
adjusting localrc to use those trees/branches that I'm using (with the
added benefit of being able to easily reclone the rest of the tree).

Lots of people use devstack + vagrant, and do basically the same thing
with their laptop repos being mounted up into the guest.


So is there some git magic that also keeps the repos in sync, or do you 
have to commit/pull/restart service every time you make changes?  I ask 
because experience tells me I would inevitably forget one of those steps 
at some point and be stymied by old code still running in my devstack.  
Heck, I occasionally forget just the restart service step. ;-)




And some people do it the way you are suggesting above.

The point is, for better or worse, what we have is a set of tools from
which you can assemble a workflow that suits your needs. We don't have
a prescribed "this is the one way to develop" approach. There is some
assumption that you'll pull together something from the tools provided.

-Sean




Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-25 Thread Clark Boylan
On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for parallel
 running of tests on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, the openstack_citest user can only create tables in the
 created databases, but cannot perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database, becomes the owner of
 the database (and can do everything within this database), and in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think, we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320


The problem with the prefix approach is it doesn't scale. At some
point we will decide we need a new prefix then a third and so on
(which is basically what happened at the schema level). That said we
recently switched to using single use slaves for all unittesting so I
think we can safely GRANT ALL on *.* to openstack_citest@localhost and
call that good enough. This should work fine for upstream testing but
may not be super friendly to others using the puppet manifests on
permanent slaves. We can wrap the GRANT in a condition in puppet that
is set only on single use slaves if this is a problem.
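As an illustration of the prefix scheme Roman proposes, here is a sketch of how a test process might derive database names covered by a single wildcard GRANT. The prefix and naming convention here are assumptions for the example, not what infra actually deploys:

```python
# Sketch of the prefix scheme: each test process derives a unique
# database name under one predefined prefix, so a single wildcard GRANT
# (e.g. GRANT ALL ON `openstack_citest\_%`.* TO ...) covers all of
# them. Prefix and naming are illustrative only.
import os
import uuid

PREFIX = 'openstack_citest'


def test_database_name():
    # e.g. openstack_citest_7f3a9c1e_4242 — matches `openstack_citest\_%`
    return '%s_%s_%d' % (PREFIX, uuid.uuid4().hex[:8], os.getpid())


name = test_database_name()
print(name.startswith(PREFIX + '_'))  # -> True
```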

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-25 Thread W Chan
Thanks.  I will do that today and follow up with a description of the
proposal.


On Mon, Feb 24, 2014 at 10:21 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 In process is fine to me.

 Winson, please register a blueprint for this change and put the link in
 here so that everyone can see what it all means exactly. My feeling is that
 we can approve and get it done pretty soon.

 Renat Akhmerov
 @ Mirantis Inc.



 On 25 Feb 2014, at 12:40, Dmitri Zimine d...@stackstorm.com wrote:

  I agree with Winson's points. Inline.
 
  On Feb 24, 2014, at 8:31 PM, Renat Akhmerov rakhme...@mirantis.com
 wrote:
 
 
  On 25 Feb 2014, at 07:12, W Chan m4d.co...@gmail.com wrote:
 
  As I understand, the local engine runs the task immediately whereas
 the scalable engine sends it over the message queue to one or more
 executors.
 
  Correct.
 
  Note: local is confusing here; in process would reflect what it
 is doing better.
 
 
  In what circumstances would we see a Mistral user using a local engine
 (other than testing) instead of the scalable engine?
 
  Yes, mostly testing, but it could also be used for demonstration purposes
 or in environments where installing RabbitMQ is not desirable.
 
  If we are keeping the local engine, can we move the abstraction to the
 executor instead, having drivers for a local executor and remote executor?
  The message flow from the engine to the executor would be consistent, it's
 just where the request will be processed.
 
  I think I get the idea and it sounds good to me. We could really have
 executor in both cases but the transport from engine to executor can be
 different. Is that what you're suggesting? And what do you call driver here?
 
  +1 to abstraction to the executor, indeed the local and remote engines
 today differ only by how they invoke executor, e.g. transport / driver.
 
 
  And since we are porting to oslo.messaging, there's already a fake
 driver that allows for an in process Queue for local execution.  The local
 executor can be a derivative of that fake driver for non-testing purposes.
  And if we don't want to use an in process queue here to avoid the
 complexity, we can have the client side module of the executor determine
 whether to dispatch to a local executor vs. RPC call to a remote executor.
 
  Yes, that sounds interesting. Could you please write up some etherpad
 with details explaining your idea?
 
 
 
 
 





Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-25 Thread W Chan
Sure.  Let me give this some thoughts and work with you separately.  Before
we speak up, we should have a proposal for discussion.


On Mon, Feb 24, 2014 at 9:53 PM, Dmitri Zimine d...@stackstorm.com wrote:

 Winson,

 While you're looking into this and working on the design, may be also
 think through other executor/engine communications.

 We talked about executor communicating to engine over 3 channels (DB,
 REST, RabbitMQ) which I wasn't happy about ;) and put it off for some time.
 May be it can be rationalized as part of your design.

 DZ.

 On Feb 24, 2014, at 11:21 AM, W Chan m4d.co...@gmail.com wrote:

 Renat,

 Regarding your comments on change https://review.openstack.org/#/c/75609/,
 I don't think the port to oslo.messaging is just a swap from pika to
oslo.messaging.  OpenStack services, as I understand, are usually implemented
as RPC client/server pairs over a messaging transport.  Sync vs async calls
 are done via the RPC client call and cast respectively.  The messaging
 transport is abstracted and concrete implementation is done via
 drivers/plugins.  So the architecture of the executor if ported to
 oslo.messaging needs to include a client, a server, and a transport.  The
 consumer (in this case the mistral engine) instantiates an instance of the
 client for the executor, makes the method call to handle task, the client
 then sends the request over the transport to the server.  The server picks
 up the request from the exchange and processes the request.  If cast
 (async), the client side returns immediately.  If call (sync), the client
 side waits for a response from the server over a reply_q (a unique queue
 for the session in the transport).  Also, oslo.messaging allows versioning
 in the message. Major version change indicates API contract changes.  Minor
 version indicates backend changes but with API compatibility.
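The call/cast split described above can be sketched with an in-process stand-in (stdlib only; the class and method names here are invented for illustration -- this is not the oslo.messaging API):

```python
import queue
import threading

class InProcessRpc:
    """Toy RPC transport mimicking call/cast semantics.

    A 'call' blocks on a per-request reply queue (like reply_q above);
    a 'cast' enqueues the request and returns immediately.
    """

    def __init__(self, endpoint):
        self._requests = queue.Queue()
        self._endpoint = endpoint  # object whose methods handle requests
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # server side: pick requests off the transport and process them
        while True:
            method, kwargs, reply_q = self._requests.get()
            result = getattr(self._endpoint, method)(**kwargs)
            if reply_q is not None:  # only 'call' expects a reply
                reply_q.put(result)

    def cast(self, method, **kwargs):
        """Async: enqueue and return immediately, no reply expected."""
        self._requests.put((method, kwargs, None))

    def call(self, method, **kwargs):
        """Sync: enqueue and block on a unique reply queue."""
        reply_q = queue.Queue()
        self._requests.put((method, kwargs, reply_q))
        return reply_q.get(timeout=5)

class Executor:
    def handle_task(self, task):
        return "done:%s" % task

rpc = InProcessRpc(Executor())
rpc.cast("handle_task", task="t1")         # returns immediately
print(rpc.call("handle_task", task="t2"))  # blocks -> done:t2
```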

 So, where I'm headed with this change...  I'm implementing the basic
 structure/scaffolding for the new executor service using oslo.messaging
 (default transport with rabbit).  Since the whole change will take a few
 rounds, I don't want to disrupt any changes that the team is making at the
 moment and so I'm building the structure separately.  I'm also adding
 versioning (v1) in the module structure to anticipate any versioning
 changes in the future.   I expect the change request will lead to some
 discussion as we are doing here.  I will migrate the core operations of the
 executor (handle_task, handle_task_error, do_task_action) to the server
 component when we agree on the architecture and switch the consumer
 (engine) to use the new RPC client for the executor instead of sending the
 message to the queue over pika.  Also, the launcher for
 ./mistral/cmd/task_executor.py will change as well in subsequent round.  An
 example launcher is here
 https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine.
  The interceptor project here is what I use to research how oslo.messaging
 works.  I hope this is clear. The blueprint only changes how the request
 and response are being transported.  It shouldn't change how the executor
 currently works.

 Finally, can you clarify the difference between local vs scalable engine?
 I personally prefer not to explicitly name the engine scalable because
 this requirement should be in the engine by default and we do not need to
 explicitly state/separate that.  But if this is a roadblock for the change,
 I can put the scalable structure back in the change to move this forward.

 Thanks.
 Winson








Re: [openstack-dev] [Glance][Artifacts] Artifact dependencies: Strict vs Soft

2014-02-25 Thread Arnaud Legendre
Hi Alexander, Thank you for your input. 

I think we need to clearly define what a version means for an artifact. 
First, I would like to come back to the definition of an artifact, since the broader 
audience might not be aware of this concept. 
As of today, my understanding is the following: 
an artifact is a set of metadata without any pre-defined structure. The only 
contract is that these artifacts will reference one or many blocks of bits 
(potentially images) stored in the Glance storage backends. 
With that in mind, I can see two types of versions: the metadata version and the 
version of the actual bits. 
I think the version you are talking about is a mix of the two versions I 
mention above. Could you confirm? 

Now, I have another question: you mention that you can have several versions of 
an artifact accessible in the system: does that mean that the previous versions 
are still available (i.e. both metadata and actual blocks of data are 
available)? Can I roll back and use version #1 if the latest version of my 
artifact is version #2? Based on your question, I think the answer is yes, in 
which case this comes with a lot of other issues: we are dealing with blocks of 
data that can be large, so you need to give the user the ability to say: 
I want to store only the last 2 versions and not the full history. So, to 
answer your question, I would like to see an API which provides all the 
versions available (accessible) for a given artifact. Then, it's up to the 
consumer of the artifact to decide which one it should import. 

Thanks, 
Arnaud 



- Original Message -

From: Alexander Tivelkov ativel...@mirantis.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Tuesday, February 25, 2014 3:57:41 AM 
Subject: [openstack-dev] [Glance][Artifacts] Artifact dependencies: Strict vs 
Soft 

Hi folks, 

While I am still working on designing the artifact-related APIs (sorry, the task is 
taking me longer than expected due to a heavy load in Murano related to the 
preparation of the incubation request), I've got a topic I wanted to discuss with 
the broader audience. 

It seems like we have agreed on the idea that the artifact storage should 
support dependencies between artifacts: the ability for any given artifact to 
reference some other artifacts as its dependencies, and an API call which will 
allow retrieving the whole dependency graph of a given artifact (i.e. its 
direct and transitive dependencies). 

Another idea which was always kept in mind when we were designing the artifact 
concept was artifact versioning: the system should allow storing different 
artifacts having the same name but different versions, and the API should 
be able to return the latest (based on some notation) version of an artifact. 
Being able to construct such queries actually gives the ability to define a kind 
of alias, so that a URL like /v2/artifacts?type=image&name=ubuntu&version=latest 
will always return the latest version of the given artifact (the ubuntu image in 
this case). The need to be able to define such aliases was expressed in [1], 
and the ability to satisfy this need with the artifact API was mentioned in [2]. 

But combining these two ideas brings up an interesting question: how should 
artifacts define their dependencies? Should this be an explicit strict 
reference (i.e. referencing a specific artifact by its id), or should it be 
an implicit soft reference, similar to the alias described above (i.e. 
specifying the dependency as A requires the latest version of B, or even A 
requires 0.2 <= B < 0.3)? 
The latter seems familiar: it is similar to pip's dependency specification, right? 
This approach obviously may be very useful (at least I clearly see its 
benefits for Murano's application packages), but it implies lazy evaluation, 
which may dramatically impact performance. 
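A soft constraint like "A requires 0.2 <= B < 0.3" could be resolved along these lines (a sketch assuming simple numeric version strings; function names are invented):

```python
def parse_version(s):
    """'0.2.1' -> (0, 2, 1); simplistic, assumes purely numeric parts."""
    return tuple(int(p) for p in s.split("."))

def resolve_soft_dependency(available, lower=None, upper=None):
    """Pick the latest available version v with lower <= v < upper.

    `available` is a list of version strings for one artifact name;
    returns None when no version satisfies the constraint.
    """
    lo = parse_version(lower) if lower else None
    hi = parse_version(upper) if upper else None
    candidates = [
        v for v in available
        if (lo is None or parse_version(v) >= lo)
        and (hi is None or parse_version(v) < hi)
    ]
    return max(candidates, key=parse_version, default=None)

# "A requires 0.2 <= B < 0.3" against the B versions in the store:
print(resolve_soft_dependency(["0.1", "0.2", "0.2.5", "0.3"],
                              lower="0.2", upper="0.3"))  # -> 0.2.5
```

The lazy-evaluation cost mentioned above shows up here: this lookup has to run at retrieval time, for every soft edge in the graph.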
By contrast, the former approach - with explicit references - requires much 
less computation. Even more, if we decide that artifact dependencies are 
immutable, this will allow us to denormalize the storage of the dependency 
graph and store all the transitive dependencies of a given artifact in a flat 
table, so the dependency graph may be returned by a single SQL query, without 
the recursive calls which are otherwise unavoidable in a normalized 
database storing such hierarchical structures. 

Meanwhile, the mutability of dependencies is also unclear to me: the ability to 
modify them seems to have its own pros and cons, so this is another topic to 
discuss. 

I'd like to hear your opinion on all of these. Any feedback is welcome, and we 
may come back to this topic on the Thursday's meeting. 


Thanks! 


[1] https://blueprints.launchpad.net/glance/+spec/glance-image-aliases 
[2] https://blueprints.launchpad.net/glance/+spec/artifact-repository-api 


-- 
Regards, 
Alexander Tivelkov 


[openstack-dev] [third-party-testing] CHANGED TIME and VENUE: Workshop/QA session on third party testing will be on IRC now!

2014-02-25 Thread Jay Pipes
Hi again Stackers,

After discussions with folks on the infrastructure team, I'm making some
changes to the proposed workshop venue and time. As was rightly pointed
out by Jim B and others, we want to encourage folks that are setting up
their CI systems to use the standard communication tools to interact
with the OpenStack community. That standard tool is IRC, with meetings
on Freenode. In addition, Google Hangout is not a free/libre piece of
software, and we want to encourage free and open source contribution and
participation.

Alright, with that said, we will conduct the first 3rd party OpenStack
CI workshop/QA session on Freenode IRC, #openstack-meeting on Monday,
March 3rd, at 13:00 EST (18:00 UTC):

https://wiki.openstack.org/wiki/Meetings#Third_Party_OpenStack_CI_Workshop_and_Q.26A_Meetings

Unlike regular OpenStack team meetings on IRC, there will not be a set
agenda. Instead, the IRC channel will be reserved for folks eager to get
questions about their CI installation answered and are looking for some
debugging assistance with Jenkins, Zuul, Nodepool et al.

I look forward to seeing you there!

Best,
-jay




[openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-25 Thread Morgan Fainberg
For purposes of supporting multiple backends for Identity (multiple LDAP, mix 
of LDAP and SQL, federation, etc) Keystone is planning to increase the maximum 
size of the USER_ID field from an upper limit of 64 to an upper limit of 255. 
This change would not impact any currently assigned USER_IDs (they would remain 
in the old simple UUID format); however, new USER_IDs would be extended to 
include the IDP identifier (e.g. USER_ID@@IDP_IDENTIFIER). 

There is the obvious concern that projects are utilizing (and storing) the 
user_id in a field that cannot accommodate the increased upper limit. Before 
this change is merged in, it is important for the Keystone team to understand 
if there are any places that would be overflowed by the increased size.

The review that would implement this change in size is 
https://review.openstack.org/#/c/74214 and is actively being worked on/reviewed.

I have already spoken with the Nova team, and a single instance has been 
identified that would require a migration (that will have a fix proposed for 
the I3 timeline). 

If there are any other known locations that would have issues with an increased 
USER_ID size, or any concerns with this change to USER_ID format, please 
respond so that the issues/concerns can be addressed.  Again, the plan is not 
to change current USER_IDs but that new ones could be up to 255 characters in 
length.
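A rough sketch of the proposed format and its length check (the "@@" separator follows the example above; the helper names and constant are invented, not Keystone code):

```python
MAX_USER_ID_LEN = 255  # proposed new upper limit (was 64)

def compose_user_id(local_id, idp=None):
    """Build a user id; federated ids embed the IdP after '@@'.

    Illustrative only -- this mirrors the USER_ID@@IDP_IDENTIFIER
    format quoted above, not a finalized Keystone implementation.
    """
    user_id = local_id if idp is None else "%s@@%s" % (local_id, idp)
    if len(user_id) > MAX_USER_ID_LEN:
        raise ValueError("user id exceeds %d chars" % MAX_USER_ID_LEN)
    return user_id

def split_user_id(user_id):
    """Return (local_id, idp); idp is None for legacy UUID-style ids."""
    local_id, sep, idp = user_id.partition("@@")
    return local_id, (idp if sep else None)

uid = compose_user_id("3b7f1a2c", idp="corp-ldap")
print(split_user_id(uid))  # -> ('3b7f1a2c', 'corp-ldap')
```

This also shows the overflow concern concretely: any project storing the id in a 64-char column would truncate the composed value.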

Cheers,
Morgan Fainberg
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Ed Hall

On Feb 25, 2014, at 10:10 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:
 On Feb 25, 2014 at 3:39 AM, enikano...@mirantis.com wrote:
Agreed; however, actual hardware is beyond the logical LBaaS API, though it could 
be part of an admin LBaaS API.

Aah yes--  In my opinion, users should almost never be exposed to anything that 
represents a specific piece of hardware, but cloud administrators must be. The 
logical constructs the user is exposed to can come close to what an actual 
piece of hardware is, but again, we should be abstract enough that a cloud 
admin can swap out one piece of hardware for another without affecting the 
user's workflow, application configuration, (hopefully) availability, etc.

I recall you said previously that the concept of having an 'admin API' had been 
discussed earlier, but I forget the resolution behind this (if there was one). 
Maybe we should revisit this discussion?

I tend to think that if we acknowledge the need for an admin API, as well as 
some of the core features it's going to need, and contrast this with the user 
API (which I think is mostly what Jay and Mark McClain are rightly concerned 
about), it'll start to become obvious which features belong where, and what 
kind of data model will emerge which supports both APIs.

[I’m new to this discussion; my role at my employer has been shifted from an 
internal to a community focus and I’m madly
attempting to come up to speed. I’m a software developer with an operations 
focus; I’ve worked with OpenStack since Diablo
as Yahoo’s team lead for network integration.]

Two levels (user and admin) would be the minimum. But our experience over time 
is that even administrators occasionally
need to be saved from themselves. This suggests that, rather than two or more 
separate APIs, a single API with multiple
roles is needed. Certain operations and attributes would only be accessible to 
someone acting in an appropriate role.

This might seem over-elaborate at first glance, but there are other dividends: 
a single API is more likely to be consistent,
and maintained consistently as it evolves. By taking a role-wise view the 
hierarchy of concerns is clarified. If you focus on
the data model first you are more likely to produce an arrangement that mirrors 
the hardware but presents difficulties in
representing and implementing user and operator intent.

Just some general insights/opinions — take for what they’re worth.

 -Ed



Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Jay Pipes
On Mon, 2014-02-24 at 18:07 -0800, Stephen Balukoff wrote:
 Hi y'all,
 
 Jay, in the L7 example you give, it looks like you're setting SSL
 parameters for a given load balancer front-end. 

Correct. The example comes straight out of the same example in the ELB
API documentation. The only difference is that in my CLI commands there's
no mention of a listener, whereas in the ELB examples there is (since
the ELB API can only configure this on the load balancer by adding or
removing listener objects to/from the load balancer object).

 Do you have an example you can share where where certain traffic is
 sent to one set of back-end nodes, and other traffic is sent to a
 different set of back-end nodes based on the URL in the client
 request? (I'm trying to understand how this can work without the
 concept of 'pools'.)  

Great example. This is quite a common scenario -- consider serving
requests for static images or content from one set of nginx servers and
non-static content from another set of, say, Apache servers running
Tomcat or similar.

OK, I'll try to work through my ongoing CLI suggestions for the
following scenario:

* User has 3 Nova instances running nginx and serving static files.
These instances all have private IP addresses in subnet 192.168.1.0/24.
* User has 3 Nova instances running Apache and tomcat and serving
dynamic content. These instances all have private IP addresses in subnet
192.168.2.0/24
* User wants any traffic coming in to the balancer's front-end IP with a
URI beginning with static.example.com to get directed to any of the
nginx nodes
* User wants any other traffic coming in to the balancer's front-end IP
to get directed to any of the Apache nodes
* User wants sticky session handling enabled ONLY for traffic going to
the Apache nodes

Here is what some proposed CLI commands might look like in my
user-centric flow of things:

# Assume we've created a load balancer with ID $BALANCER_ID using
# something like what I showed in my original response:
 
neutron balancer-create --type=advanced --front=ip \
 --back=list_of_ips --algorithm=least-connections \
 --topology=active-standby

Note that in the above call, list_of_ips includes **all of the Nova
instances that would be balanced across**, including all of the nginx
and all of the Apache instances.

Now, let's set up our static balancing. First, we'd create a new L7
policy, just like the SSL negotiation one in the previous example:

neutron l7-policy-create --type=uri-regex-matching \
 --attr=URIRegex=static\.example\.com.*

Presume above returns an ID for the policy $L7_POLICY_ID. We could then
assign that policy to operate on the front-end of the load balancer and
spreading load to the nginx nodes by doing:

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
 --subnet-cidr=192.168.1.0/24

We could then indicate to the balancer that all other traffic should be
sent to only the Apache nodes:

neutron l7-policy-create --type=uri-regex-matching \
 --attr=URIRegex=static\.example\.com.* \
 --attr=RegexMatchReverse=true

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
 --subnet-cidr=192.168.2.0/24

 Also, what if the first group of nodes needs a different health check
 run against it than the second group of nodes?

neutron balancer-apply-healthcheck $BALANCER_ID $HEALTHCHECK_ID \
 --subnet-cidr=192.168.1.0/24

where $HEALTHCHECK_ID would be the ID of a simple healthcheck object.
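For illustration, the dispatch the commands above configure could be modeled like this (a pure-Python sketch with invented names; a real balancer does this in the proxy layer):

```python
import ipaddress
import re

# Sketch of the L7 dispatch configured above: each policy pairs a URI
# regex with a backend subnet; first match wins, and balancer members
# are filtered by that subnet.  Names/addresses are illustrative.
members = ["192.168.1.10", "192.168.1.11", "192.168.1.12",   # nginx
           "192.168.2.10", "192.168.2.11", "192.168.2.12"]   # apache

policies = [
    (re.compile(r"static\.example\.com.*"), "192.168.1.0/24"),
    (re.compile(r".*"),                     "192.168.2.0/24"),  # default
]

def backends_for(host_and_uri):
    for regex, cidr in policies:
        if regex.match(host_and_uri):
            net = ipaddress.ip_network(cidr)
            return [m for m in members if ipaddress.ip_address(m) in net]
    return []

print(backends_for("static.example.com/img/logo.png")[0])  # 192.168.1.10
print(backends_for("www.example.com/app")[0])              # 192.168.2.10
```

Note how the subnet CIDR alone is enough to select the backend group, which is the point of the no-pools argument.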

The biggest advantage to this proposed API and CLI is that we are not
introducing any terminology into the Neutron LBaaS API that is not
necessary when existing terms in the main Neutron API already exist to
describe such things. You will note that I do not use the term pool
above, since the concept of a subnet (and its associated CIDR) are
already well-established objects in the Neutron API and can serve the
exact same purpose for Neutron LBaaS API.

 As far as hiding implementation details from the user:  To a certain
 degree I agree with this, and to a certain degree I do not: OpenStack
 is a cloud OS fulfilling the needs of supplying IaaS. It is not a
 PaaS. As such, the objects that users deal with largely are analogous
 to physical pieces of hardware that make up a cluster, albeit these
 are virtualized or conceptualized. Users can then use these conceptual
 components of a cluster to build the (virtual) infrastructure they
 need to support whatever application they want. These objects have
 attributes and are expected to act in a certain way, which again, are
 usually analogous to actual hardware.

I disagree. A cloud API should strive to shield users of the cloud from
having to understand underlying hardware APIs or object models.

 If we were building a PaaS, the story would be a lot different--  but
 what we are building is a cloud OS that provides Infrastructure (as a
 service).

I still think we need to simplify the APIs as much as we can, and remove
the underlying implementation (which includes the database schema and

Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread Sławek Kapłoński
Hello,

Trinath, I had seen this presentation before you sent it to me. There is a nice 
explanation of which methods are (and should be) in a type driver and a mech driver, 
but I needed exactly the information that Assaf sent me. Thanks to both of you for 
your help :)

--
Best regards
Sławek Kapłoński
On Tuesday, 25 February 2014 at 12:18:50, Assaf Muller wrote:

 - Original Message -
 
  Hi
  
  Hope this helps
  
  http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
  
  ___
  
  Trinath Somanchi
  
  _
  From: Sławek Kapłoński [sla...@kaplonski.pl]
  Sent: Tuesday, February 25, 2014 9:24 PM
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] Neutron ML2 and openvswitch agent
  
  Hello,
  
   I have a question for you guys. Can someone explain to me (or send a link
   with such an explanation) how exactly the ML2 plugin running on the
   neutron server communicates with the openvswitch agents on the compute
   hosts?
 
 Maybe this will set you on your way:
 ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then
 uses ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with
 the topic stated in that file.
 
 When the message is received by the OVS agent, it calls:
 neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_
 update.
   I suppose that this works with rabbitmq queues, but I need
   to add my own function which will be called in this agent, and I don't know
   how to do that. It would be perfect if such a thing were possible by
   writing, for example, a new mechanism driver in the ML2 plugin (but how?).
   Thanks in advance for any help :)
  
  --
  Best regards
  Slawek Kaplonski
  sla...@kaplonski.pl
  
  
  
  
 

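The plugin-to-agent path Assaf describes boils down to a topic-based publish/subscribe hop. A toy in-process version of that hop (names are invented; the real path goes over oslo RPC topics and rabbitmq):

```python
from collections import defaultdict

# Toy topic-based notifier mimicking the plugin -> agent path described
# above: the ML2 plugin casts on a topic, and every agent subscribed to
# that topic has its handler invoked.  Topic/handler names are made up.
_subscribers = defaultdict(list)

def subscribe(topic, callback):
    _subscribers[topic].append(callback)

def cast(topic, **payload):
    for callback in _subscribers[topic]:
        callback(**payload)

updates = []
# the OVS agent registers its port_update handler on the update topic
subscribe("port-update-topic",
          lambda port: updates.append(port["id"]))

# the ML2 plugin notifies after update_port completes
cast("port-update-topic", port={"id": "port-1", "status": "ACTIVE"})
print(updates)  # -> ['port-1']
```

This is why adding a custom function to the agent usually means hooking a new RPC method/topic, rather than a new mechanism driver on the server side alone.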


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Stephen Balukoff
Hi Ed,

That sounds good to me, actually:  As long as 'cloud admin' API functions
are represented as well as 'simple user workflows', then I'm all for a
unified API that simply exposes more depending on permissions.

Stephen


On Tue, Feb 25, 2014 at 12:15 PM, Ed Hall edh...@yahoo-inc.com wrote:


  On Feb 25, 2014, at 10:10 AM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

On Feb 25, 2014 at 3:39 AM, enikano...@mirantis.com wrote:

 Agreed; however, actual hardware is beyond the logical LBaaS API, though it
 could be part of an admin LBaaS API.


  Aah yes--  In my opinion, users should almost never be exposed to
 anything that represents a specific piece of hardware, but cloud
 administrators must be. The logical constructs the user is exposed to can
 come close to what an actual piece of hardware is, but again, we should
 be abstract enough that a cloud admin can swap out one piece of hardware
 for another without affecting the user's workflow, application
 configuration, (hopefully) availability, etc.

  I recall you said previously that the concept of having an 'admin API'
 had been discussed earlier, but I forget the resolution behind this (if
 there was one). Maybe we should revisit this discussion?

  I tend to think that if we acknowledge the need for an admin API, as
 well as some of the core features it's going to need, and contrast this
 with the user API (which I think is mostly what Jay and Mark McClain are
 rightly concerned about), it'll start to become obvious which features
 belong where, and what kind of data model will emerge which supports both
 APIs.


  [I’m new to this discussion; my role at my employer has been shifted from
 an internal to a community focus and I’m madly
 attempting to come up to speed. I’m a software developer with an
 operations focus; I’ve worked with OpenStack since Diablo
 as Yahoo’s team lead for network integration.]

 Two levels (user and admin) would be the minimum. But our experience over
 time is that even administrators occasionally
 need to be saved from themselves. This suggests that, rather than two or
 more separate APIs, a single API with multiple
 roles is needed. Certain operations and attributes would only be
 accessible to someone acting in an appropriate role.

  This might seem over-elaborate at first glance, but there are other
 dividends: a single API is more likely to be consistent,
 and maintained consistently as it evolves. By taking a role-wise view the
 hierarchy of concerns is clarified. If you focus on
 the data model first you are more likely to produce an arrangement that
 mirrors the hardware but presents difficulties in
 representing and implementing user and operator intent.

  Just some general insights/opinions — take for what they’re worth.

   -Ed






-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


[openstack-dev] Update on new project creation

2014-02-25 Thread Monty Taylor

Hey all!

You may or may not have noticed that there's been a backlog with new 
project creation in infra. There are some specific issues with our 
automation that are causing this, and we've got a plan in place to fix 
them. Until we do, which is targeted to be done by the end of March, 
we're expecting there to continue to be delays in project creation.


To help mitigate the pain around that, we've decided two new things:

** Topic name: new-project **

First of all, if you are submitting a change to create a new project, 
we're going to require that you set the topic name to new-project. This 
will allow us to easily batch-review and process the requests.


** New Project Fridays **

We're going to have to manually run some scripts for new projects now 
until the automation is fixed. To keep the crazy down, we will be having 
New Project Fridays. Which means we'll get down and dirty with approving 
and running the scripts for new projects on Fridays.


Sorry for the inconvenience. There are a bunch of moving pieces to 
adding a new project, and we've kinda hit the limit of the current 
automation, but hope to have it all fixed up soon.


Thanks!
Monty



[openstack-dev] [Congress] status and feedback

2014-02-25 Thread Tim Hinrichs
Hi all,

I wanted to send out a quick status update on Congress and start a discussion 
about short-term goals.

1) Logistics

IRC Meeting time
Tuesday 1700 UTC in openstack-meeting-3
Every other week starting Feb 25
(Note this is a new meeting time/location.)

2) We have two design docs, which we would like feedback on.

Toplevel design doc:
https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit

Data integration design doc:
https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit

3) Short term goals.

I think it's useful to get an end-to-end system up and running to make it 
easier to communicate what we're driving at.  To that end, I'm suggesting we 
take the following policy and build up enough of Congress to make it work.

Every network connected to a VM must either be public or owned by someone in 
the same group as the VM owner

This example is compelling because it combines information from Neutron, Nova, 
and Keystone (or another group-management data-source such as ActiveDirectory). 
 To that end, I suggest focusing on the following tasks in the short-term.

- Data integration framework, including read-only support for 
Neutron/Nova/Keystone
- Exposing policy engine via API
- Policy engine error handling
- Simple scaling tests
- Basic user docs
- Execution framework: it would be nice if we could actually execute some of 
the actions we tell Congress about, but this is lowest priority on the list, 
for me.
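As a rough illustration, the example policy above could be evaluated procedurally over hand-built tables (the data shapes are invented; Congress itself would express this declaratively):

```python
# Sketch: evaluate the example policy over hand-assembled data that
# would come from Neutron (networks), Nova (VM connections and owners)
# and Keystone (groups).  All names and shapes are illustrative.
networks = {"net1": {"public": False, "owner": "alice"},
            "net2": {"public": True,  "owner": "ops"}}
vm_networks = {"vm1": ["net1", "net2"]}                # Nova
vm_owner = {"vm1": "bob"}                              # Nova
groups = {"devs": {"alice", "bob"}, "ops": {"carol"}}  # Keystone

def same_group(a, b):
    return any(a in members and b in members for members in groups.values())

def violations():
    """Yield (vm, net) pairs breaking the example policy."""
    for vm, nets in vm_networks.items():
        for net in nets:
            n = networks[net]
            if not n["public"] and not same_group(n["owner"], vm_owner[vm]):
                yield vm, net

print(list(violations()))  # -> [] since alice and bob share 'devs'
```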

Thoughts?
Tim



Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-25 Thread Jay Pipes
On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
 For purposes of supporting multiple backends for Identity (multiple
 LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
 increase the maximum size of the USER_ID field from an upper limit of
 64 to an upper limit of 255. This change would not impact any
 currently assigned USER_IDs (they would remain in the old simple UUID
 format), however, new USER_IDs would be increased to include the IDP
 identifier (e.g. USER_ID@@IDP_IDENTIFIER).

-1

I think a better solution would be to have a simple translation table
only in Keystone that would store this longer identifier (for folks
using federation and/or LDAP) along with the Keystone user UUID that is
used in foreign key relations and other mapping tables through Keystone
and other projects.

The only identifiers that would ever be communicated to any non-Keystone
OpenStack endpoint would be the UUID user and tenant IDs.
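A minimal in-memory sketch of the translation table being suggested (illustrative only; the real thing would be a Keystone SQL table, and the class and method names here are invented):

```python
import uuid


class IdMapper:
    """Map arbitrarily long external identifiers (e.g. federated
    user@@idp strings) to fixed-size internal UUIDs, and back."""

    def __init__(self):
        self._to_internal = {}
        self._to_external = {}

    def internal_id(self, external_id):
        # Idempotent: the same external id always yields the same UUID.
        if external_id not in self._to_internal:
            internal = uuid.uuid4().hex  # fixed 32-char hex
            self._to_internal[external_id] = internal
            self._to_external[internal] = external_id
        return self._to_internal[external_id]

    def external_id(self, internal_id):
        return self._to_external[internal_id]
```

Only the fixed-width internal id would ever cross a non-Keystone API boundary or land in a foreign-key column.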

 There is the obvious concern that projects are utilizing (and storing)
 the user_id in a field that cannot accommodate the increased upper
 limit. Before this change is merged in, it is important for the
 Keystone team to understand if there are any places that would be
 overflowed by the increased size.

I would go so far as to say the user_id and tenant_id fields should be
*reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
performance reasons. Lengthening commonly-used and frequently-joined
identifier fields is not a good option, IMO.

Best,
-jay

 The review that would implement this change in size
 is https://review.openstack.org/#/c/74214 and is actively being worked
 on/reviewed.
 
 
 I have already spoken with the Nova team, and a single instance has
 been identified that would require a migration (that will have a fix
 proposed for the I3 timeline). 
 
 
 If there are any other known locations that would have issues with an
 increased USER_ID size, or any concerns with this change to USER_ID
 format, please respond so that the issues/concerns can be addressed.
  Again, the plan is not to change current USER_IDs but that new ones
 could be up to 255 characters in length.
 
 
 Cheers,
 Morgan Fainberg
 —
 Morgan Fainberg
 Principal Software Engineer
 Core Developer, Keystone
 m...@metacloud.com
 
 


Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Sylvain Bauza
Well, I agree that I'm bending the intended use of this feature to match my
needs, but then let me ask you a quick question: how can WSME handle a
variable request body?

At first glance it seems that WSME requires a static API in terms of
inputs; can you confirm that?
Do you have any idea how I could reach my goal, i.e. having static inputs
plus some extra variable inputs? I was also thinking about playing with
__getattr__ and __setattr__, but I'm not sure the Registry could handle that.
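For what it's worth, the merging itself is simple in plain Python; the hard part is WSME's static type registry. A rough illustration (not actual WSME API; class and field names invented) of folding unknown body keys into an object after the declared fields are set:

```python
class DynamicHost(object):
    """Toy stand-in for a WSME type: known fields first, leftovers dynamic."""

    known = ("name", "capacity")

    def __init__(self, **body):
        # Declared fields default to None when absent from the body.
        for key in self.known:
            setattr(self, key, body.pop(key, None))
        # Anything left over becomes a dynamic attribute.
        for key, value in body.items():
            setattr(self, key, value)
```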

One last important point: this API endpoint (Host) is admin-only, in case
you are concerned about the potential security issues it could lead to.

Thanks for your help,
-Sylvain


2014-02-25 18:55 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:

 OK, that's not how that feature is meant to be used.

 The idea is that on application startup plugins or extensions will be
 loaded that configure the extra attributes for the class. That happens one
 time, and the configuration does not depend on data that appears in the
 request itself.

 Doug


 On Tue, Feb 25, 2014 at 9:07 AM, Sylvain Bauza sylvain.ba...@gmail.comwrote:

 Let me give you a bit of code then, that's currently WIP with heavy
 rewrites planned on the Controller side thanks to Pecan hooks [1]

 So, L102 (GET request) the convert() method is passing the result dict as
 kwargs, where the Host.__init__() method is adding dynamic attributes.
 That does work :-)

 L108, I'm specifying that my body string is basically an Host object.
 Unfortunately, I can provide extra keys to that where I expect to be extra
 attributes. WSME will then convert the body into an Host [2], but as the
 Host class doesn't yet know which extra attributes are allowed, none of my
 extra keys are taken.
 As a result, the 'host' (instance of Host) argument of the post() method
 is not containing the extra attributes and thus, not passed for creation to
 my Manager.

 As said, I can still get the request body using Pecan directly within the
 post() method, but I then would have to manage the mimetype, and do the
 adding of the extra attributes there. That's pretty ugly IMHO.

 Thanks,
 -Sylvain

 [1] http://paste.openstack.org/show/69418/

 [2] https://github.com/stackforge/wsme/blob/master/wsmeext/pecan.py#L71


 2014-02-25 14:39 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:




 On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza 
 sylvain.ba...@gmail.comwrote:

 Hi,

 Thanks to WSME 0.6, there is now possibility to add extra attributes to
 a Dynamic basetype.
 I successfully ended up showing my extra attributes from a dict to a
 DynamicType using add_attributes() but I'm now stuck with POST requests
 having dynamic body data.

 Although I'm declaring in wsexpose() my DynamicType, I can't say to
 WSME to map the pecan.request.body dict with my wsattrs and create new
 attributes if none matched.

 Any idea on how to do this ? I looked at WSME and the type is
 registered at API startup, not when being called, so the get_arg() method
 fails to fill in the gaps.

 I can possibly do a workaround within my post function, where I could
 introspect pecan.request.body and add extra attributes, so it sounds a bit
 crappy as I have to handle the mimetype already managed by WSME.


 I'm not sure I understand the question. Are you saying that the dynamic
 type feature works for GET arguments but not POST body content?

 Doug





 Thanks,
 -Sylvain



Re: [openstack-dev] [savanna] client 0.5.0 release

2014-02-25 Thread Sergey Lukjanov
Hi folks,

I'm glad to announce that python-savannaclient v0.5.0 released!

pypi: https://pypi.python.org/pypi/python-savannaclient/0.5.0
tarball: 
http://tarballs.openstack.org/python-savannaclient/python-savannaclient-0.5.0.tar.gz
launchpad: https://launchpad.net/python-savannaclient/0.5.x/0.5.0

Notes:

* it's the first release whose CLI covers nearly all features;
* dev docs moved to the client from the main repo;
* support for all new Savanna features introduced in the Icehouse release cycle;
* single common entry point, currently savannaclient.client.Client('1.1');
* auth improvements;
* base resource class improvements;
* 93 commits since the previous release.

Thanks.

On Thu, Feb 20, 2014 at 3:53 AM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Additionally, it contains support for the latest EDP features.


 On Thu, Feb 20, 2014 at 3:52 AM, Sergey Lukjanov slukja...@mirantis.com
 wrote:

 Hi folks,

 I'd like to make a 0.5.0 release of savanna client soon, please, share
 your thoughts about stuff that should be included to it.

 Currently we have the following major changes/fixes:

 * mostly implemented CLI;
 * unified entry point for python bindings like other OpenStack clients;
 * auth improvements;
 * base resource class improvements.

 Full diff:
 https://github.com/openstack/python-savannaclient/compare/0.4.1...master

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sylvain Bauza
2014-02-25 17:42 GMT+01:00 Dina Belova dbel...@mirantis.com:


  I think it should be a Climate policy (be careful, the name is
 confusing) : if admin wants to grant any new project for reservations, he
 should place a call to Climate. That's up to Climate-Nova (ie. Nova
 extension) to query Climate in order to see if project has been granted or
 not.

 Now I think that it'll be better, yes.
 I see some workflow like:

 1) Mark project as reservable in Climate
 2) When some resource is created (like a Nova instance) it should be checked
 (in API extensions, for example) via Climate whether the project is reservable.
 If it is, and no special reservation flags were passed, the default_reservation
 stuff should be used for this instance

 Sylvain, is that the idea you're talking about?


tl;dr : Yes, let's define/create a new endpoint for the need.

That's exactly what I'm thinking, Climate should manage reservations on its
own (including any new model) and projects using it for reserving resources
should place a call to it in order to get some information.
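As a toy illustration of that split (invented names; not actual Climate or Nova code), the check an API extension would make could look like:

```python
class ClimateStub(object):
    """Minimal stand-in for the Climate endpoint described above."""

    def __init__(self):
        self._reservable = set()

    def mark_reservable(self, project_id):
        self._reservable.add(project_id)

    def is_reservable(self, project_id):
        return project_id in self._reservable


def reservation_for_new_instance(climate, project_id, requested=None):
    """What a Nova API extension might ask Climate before booting:
    fall back to the default reservation for reservable projects."""
    if requested is None and climate.is_reservable(project_id):
        return "default_reservation"
    return requested
```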

-Sylvain


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-25 Thread Steven Hardy
On Tue, Feb 25, 2014 at 11:47:43AM -0800, Morgan Fainberg wrote:
 For purposes of supporting multiple backends for Identity (multiple LDAP, mix 
 of LDAP and SQL, federation, etc) Keystone is planning to increase the 
 maximum size of the USER_ID field from an upper limit of 64 to an upper limit 
 of 255. This change would not impact any currently assigned USER_IDs (they 
 would remain in the old simple UUID format), however, new USER_IDs would be 
 increased to include the IDP identifier (e.g. USER_ID@@IDP_IDENTIFIER). 

Hmm, my immediate reaction is there must be a better way than mangling both
bits of data into the ID field, considering pretty much everything
everywhere else in openstack uses uuids for user-visible object identifiers.
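For reference, the proposed composite format splits back apart easily; it is the length that breaks existing fixed-width columns. A quick sketch (the "@@" format is from the proposal; the helper names are invented):

```python
def compose_user_id(local_id, idp):
    """Build the proposed USER_ID@@IDP_IDENTIFIER composite."""
    return "%s@@%s" % (local_id, idp)


def split_user_id(user_id):
    """Return (local_id, idp); idp is None for legacy plain ids."""
    if "@@" in user_id:
        local, _, idp = user_id.rpartition("@@")
        return local, idp
    return user_id, None
```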

 There is the obvious concern that projects are utilizing (and storing) the 
 user_id in a field that cannot accommodate the increased upper limit. Before 
 this change is merged in, it is important for the Keystone team to understand 
 if there are any places that would be overflowed by the increased size.
 
 The review that would implement this change in size is 
 https://review.openstack.org/#/c/74214 and is actively being worked 
 on/reviewed.
 
 I have already spoken with the Nova team, and a single instance has been 
 identified that would require a migration (that will have a fix proposed for 
 the I3 timeline). 
 
 If there are any other known locations that would have issues with an 
 increased USER_ID size, or any concerns with this change to USER_ID format, 
 please respond so that the issues/concerns can be addressed.  Again, the plan 
 is not to change current USER_IDs but that new ones could be up to 255 
 characters in length.

Yes, this will affect Heat in at least one place - we store the trustor
user ID when we create a trust between the stack owner and the heat service
user:

https://github.com/openstack/heat/blob/master/heat/db/sqlalchemy/migrate_repo/versions/027_user_creds_trusts.py#L23

IMHO this is coming pretty late in the cycle considering the potential
impact, but if this is definitely happening we can go ahead and update our
schema.

Steve



[openstack-dev] Libvirt Resize/Cold Migrations and SSH

2014-02-25 Thread Solly Ross
Hi All,
I've been working on/thinking about a bug filed a while ago related to libvirt 
resize/cold migrations.  The bug ended up being roughly as follows:

On a Packstack install, cold migrations and resizes fail under the default 
setup with an error about not being able to do an SSH `mkdir` operation.
The cause ended up being that Nova was failing to do the resize because the 
individual compute nodes didn't have passwordless (key-based) SSH access
into the other compute nodes.

The proposed temporary fix was to manually give the compute nodes SSH 
permissions into each other, with the moderate-term
fix being to have Packstack distribute SSH keys among the compute nodes and set 
up permissions.

While these fixes work, they left me with a certain dirty taste in my mouth, 
since it doesn't seem quite elegant to have Nova SSH-ing around
between compute nodes, and the upstream community seemed to agree with this 
(there was a thread a while ago, but I got sidetracked with other
work).  Upon further investigation, I found four points at which the Nova 
libvirt driver uses SSH, all of which revolve around the method
`migrate_disk_and_power_off` (the main part of the resize/cold migration code):

1. to detect shared storage
2. to create the directory for the instance on the destination system
3. to copy the disk image from the source to the destination system (uses 
either rysnc over ssh or scp)
4. to remove the directory created in (2) in case of an error during the process

Number 1 can be trivially eliminated by using the existing 
'_is_instance_storage_shared' method in the RPCAPI from the compute manager, 
and passing that value to the driver (with the other drivers
most likely ignoring it) instead of checking from within the driver code.  
Numbers 2 and 4 can be eliminated by using a pre_x, x, cleanup_x flow, 
similarly to how live migrations are handled (with
pre_x and cleanup_x being run on the destination machines via the RPCAPI).  
That only leaves number 3.  Note that these are only used when we are going 
between machines without shared storage.
Shared storage eliminates cases 2-4.
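For context on point 1, the check behind '_is_instance_storage_shared' is conceptually just a marker file: drop it on one host, look for it from the other. Below is a single-process sketch of that idea (the real code exchanges the marker between two hosts over RPC; these function names are invented):

```python
import os
import tempfile


def make_marker(instance_dir):
    """On the source host: drop a uniquely named marker file."""
    fd, path = tempfile.mkstemp(dir=instance_dir, prefix="shared-check-")
    os.close(fd)
    return os.path.basename(path)


def storage_is_shared(instance_dir, marker):
    """On the destination host: storage is shared iff the marker is visible."""
    return os.path.exists(os.path.join(instance_dir, marker))
```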

So here's my question: can number 3 be eliminated, so to speak?  Having to 
give full SSH permissions for a file copy seems a bit overkill (we could, for 
example, run an rsync daemon, in which case
rsync would connect via the daemon and not ssh).  Is it worth it?  
Additionally, if we do not eliminate number 3, is it worth it to refactor the 
code to eliminate numbers 2 and 4 (I already have code
to eliminate number 1 -- see https://gist.github.com/DirectXMan12/9217699).

Best Regards,
Solly Ross



[openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-25 Thread Varadhan, Sowmini
Folks,

I'm trying to set up a simple single-node devstack + ml2 + vxlan
combination, and though this ought to be a simple RTFM exercise,
I'm having some trouble setting this up. Perhaps I'm doing something
wrong- clues would be welcome.

I made sure to use ovs_version 1.10.2, and followed
the instructions in https://wiki.openstack.org/wiki/Neutron/ML2
(and then some, based on various and sundry blogs that google found)

Can someone share (all) the contents of their localrc,
and if possible, a description of their VM (virtualbox?  qemu-kvm?)
setup so that I can compare against my env?

FWIW, I tried the attached configs.
localrc.all - sets up
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
Resulting VM boots, but no vxlan interfaces show up (see ovs-ctl.out.all)

localrc.vxlan.only - disallow anything other than vxlan and gre.
VM does not boot- I get a binding_failed error. See ovs-ctl.out.vxlan.only

Thanks in advance,
Sowmini
OFFLINE=False
RECLONE=yes

HOST_IP=192.168.122.198
PUBLIC_INTERFACE=eth1
SERVICE_HOST=$HOST_IP

MULTI_HOST=1
LOGFILE=$HOME/logs/devstack.log
LOGDAYS=7
SCREEN_LOGDIR=$HOME/logs/screen
LOG_COLOR=False

DATABASE_USER=root
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password

ADMIN_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)


disable_service n-net
disable_service tempest
disable_service horizon
disable_service cinder
disable_service heat

enable_service  neutron
enable_service  q-agt
enable_service  q-svc
enable_service  q-l3
enable_service  q-dhcp


SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

# disable_all_services
#
# enable_service   g-api
# enable_service   glance
# enable_service   keystone
# enable_service   nova
# enable_service   quantum
# enable_service   rabbit

disable_service  n-net

CINDER_BRANCH=master
GLANCE_BRANCH=master
HEAT_BRANCH=master
HORIZON_BRANCH=master
KEYSTONE_BRANCH=master
NOVA_BRANCH=master
QUANTUM_BRANCH=master
SWIFT_BRANCH=master
TEMPEST_BRANCH=master



#FLOATING_RANGE=10.10.37.0/24
FLOATING_RANGE=10.10.30.0/24
Q_FLOATING_ALLOCATION_POOL=start=10.10.30.64,end=10.10.30.127
FIXED_NETWORK_SIZE=256
SWIFT_HASH=password


OFFLINE=False
RECLONE=yes

HOST_IP=192.168.122.198
# PUBLIC_INTERFACE=eth1
SERVICE_HOST=$HOST_IP

#MULTI_HOST=1
LOGFILE=$HOME/logs/devstack.log
LOGDAYS=7
SCREEN_LOGDIR=$HOME/logs/screen
LOG_COLOR=False

DATABASE_USER=root
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password

ADMIN_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
#
# fails with vif_type=binding_failed for the router interface?
#
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPES=vxlan
Q_ML2_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan,gre
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_types=vxlan)


disable_service n-net
disable_service tempest
disable_service horizon
disable_service cinder
disable_service heat
disable_service swift

enable_service  neutron
enable_service  q-agt
enable_service  q-svc
enable_service  q-l3
enable_service  q-dhcp
enable_service  q-meta


SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

# disable_all_services
#
# enable_service   g-api
# enable_service   glance
# enable_service   keystone
# enable_service   nova
# enable_service   quantum
# enable_service   rabbit

disable_service  n-net

CINDER_BRANCH=master
GLANCE_BRANCH=master
HEAT_BRANCH=master
HORIZON_BRANCH=master
KEYSTONE_BRANCH=master
NOVA_BRANCH=master
QUANTUM_BRANCH=/home/sowmini/devstack/neutron
SWIFT_BRANCH=master
TEMPEST_BRANCH=master


FLAT_INTERFACE=eth1
OVS_PHYSICAL_BRIDGE=br-int
Q_USE_SECGROUP=True

#FLOATING_RANGE=10.10.37.0/24
FLOATING_RANGE=10.10.30.0/24
Q_FLOATING_ALLOCATION_POOL=start=10.10.30.64,end=10.10.30.127
FIXED_NETWORK_SIZE=256
SWIFT_HASH=password


sowmini@sowmini-virtual-machine:~/devstack/devstack$ sudo ovs-vsctl show
0352c6e8-cced-4f21-8cff-36550186b4b8
Bridge br-int
Port qr-c4d5a7c3-69
tag: 1
Interface qr-c4d5a7c3-69
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-ex
Port qg-f70ef8ee-65
Interface qg-f70ef8ee-65
type: internal
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Port br-tun
Interface br-tun

[openstack-dev] [Fuel] Weekly meetings

2014-02-25 Thread Vladimir Kozhukalov
Hi folks,

Fuel team is glad to announce that we scheduled weekly IRC meeting. It is
supposed to be held on Thursdays at 19:00 UTC in #openstack-meeting. Our
first meeting is scheduled on 2/27/2014.

We are trying to become even more open. Please feel free to add topics to
meeting agenda https://wiki.openstack.org/wiki/Meetings/Fuel#Agenda.

Vladimir Kozhukalov


Re: [openstack-dev] [infra] Meeting Tuesday February 25th at 19:00 UTC

2014-02-25 Thread Elizabeth Krumbach Joseph
On Mon, Feb 24, 2014 at 11:21 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday February 25th, at 19:00 UTC in
 #openstack-meeting

Thanks to everyone who was able to make it to our meeting, minutes and log here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-25-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-25-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-25-19.02.log.htm

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Matt Riedemann



On 2/25/2014 6:00 AM, Christopher Yeoh wrote:

On Tue, 25 Feb 2014 10:31:42 +
John Garbutt j...@johngarbutt.com wrote:


On 25 February 2014 06:11, Christopher Yeoh cbky...@gmail.com wrote:

On Mon, 24 Feb 2014 17:37:04 -0800
Dan Smith d...@danplanet.com wrote:


onSharedStorage = True
on_shared_storage = False


This is a good example. I'm not sure it's worth breaking users _or_
introducing a new microversion for something like this. This is
definitely what I would call a purity concern as opposed to
usability.


I thought micro versioning was so we could make backwards compatible
changes. If we make breaking changes we need to support the old and
the new for a little while.


Isn't the period that we have to support the old and the new for these
sorts of breaking the changes exactly the same period of time that we'd
have to keep V2 around if we released V3? Either way we're forcing
people off the old behaviour.


I am tempted to say the breaking changes just create a new extension,
but there are other ways...


Oh, please no :-) Essentially that is no different to creating a new
extension in the v3 namespace except it makes the v2 namespace even
more confusing?


For return values:
* get new clients to send Accepts headers, to version the response
* this amounts to the major version
* for those request the new format, they get the new format
* for those getting the old format, they get the old format

For this case, on requests:
* we can accept both formats, or maybe that also depends on the
Accepts headers (with is a bit funky, granted).
* only document the new one
* maybe in two years remove the old format? maybe never?



So the idea of accept headers seems to me like just an alternative to
using a different namespace except a new namespace is much cleaner.


Same for URLs, we could have the old a new names, with the new URL
always returning the new format (think instace_actions -
server_actions).

If the code only differers in presentation, that implies much less
double testing that two full versions of the API. It seems like we
could make some of these clean ups in, and keep the old version, with
relatively few changes.


As I've said before the API layer is very thin. Essentially most of it
is just about parsing the input, calling something, then formatting the
output. But we still do double testing even though the difference
between them most of the time is just presentation.  Theoretically if
the unittests were good enough in terms of checking the API we'd only
have to tempest test a single API but I think experience has shown that
we're not that good at doing exhaustive unittests. So we use the
fallback of throwing tempest at both APIs


Not even Tempest in a lot of cases, like the host_maintenance_mode virt 
driver APIs that are only implemented in VMware and XenAPI virt drivers, 
we have no Tempest coverage there because the libvirt driver doesn't 
implement that API.





We could port the V2 classes over to the V3 code, to get the code
benefits.


I'm not exactly sure what you mean here. If you mean backporting say
the V3 infrastructure so V2 can use it, I don't want people
underestimating the difficulty of that. When we developed the new
architecture we had the benefit of being able to bootstrap it without
it having to work for a while. Eg. getting core bits like servers and
images up and running without having to have the additional parts which
depend on it working with it yet. With V2 we can't do that, so
operating on a active system is going to be more difficult. The CD
people will not be happy with breakage :-)

But even then it took a considerable amount of effort - both coding and
review to get the changes merged, and that was back in Havana when it
was easier to review bandwidth. And we also discovered that especially
with that sort of infrastructure work its very difficult to get many
people working parallel - or even one person working on too many things
at one time. Because you end up in merge conflict/rebase hell. I've been
there a lot in Havana and Icehouse.


+1 to not backporting all of the internal improvements from V3 to V2. 
That'd be a ton of duplicate code and review time and if we aren't 
making backwards incompatible changes to V2 I don't see the point, we're 
just kicking the can down the road on when we do need to make backwards 
incompatible changes and require a new API major version bump for 
<insert shiny new thing here>.





Return codes are a bit harder, it seems odd to change those based on
Accepts headers, but maybe I could live with that.


Maybe this is the code mess we were trying to avoid, but I feel we
should at least see how bad this kind of approach would look?


So to me this approach really doesn't look a whole lot different to
just having a separate v2/v3 codebase in terms of maintenance. LOC
would be lower, but testing load is similar if we make the same sorts
of changes. Some things like input validation are a bit harder to
implement (because 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Dan Smith
 +1, seems we could explore for another cycle just to find out that
 backporting everything to V2 isn't going to be what we want, and now
 we've just wasted more time.

 If we say it's just deprecated and frozen against new features, then
 it's maintenance is just limited to bug fixes right?

No, any time we make a change to how the api communicates with compute,
conductor, or the scheduler, both copies of the API code have to be
changed. If we never get rid of v2 (and I don't think we have a good
reason to right now) then we're doing that *forever*. I do not want to
sign up for that.

I'm really curious what deployers like RAX, HP Cloud, etc think about
freezing V2 to features and having to deploying V3 to get them. Does RAX
expose V3 right now? Also curious if RAX/HP/etc see the V3 value
statement when compared to what it will mean for their users.

--Dan



Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-25 Thread Joe Gordon
On Mon, Feb 24, 2014 at 4:56 PM, Ziad Sawalha
ziad.sawa...@rackspace.com wrote:
 Seeking some clarification on the OpenStack hacking guidelines for
 multi-string docstrings.

 Q: In OpenStack projects, is a blank line before the triple closing quotes
 recommended (and therefore optional - this is what PEP-257 seems to
 suggest), required, or explicitly rejected (which could be one way to
 interpret the hacking guidelines since they omit the blank line).

 This came up in a commit review, and here are some references on the topic:

Link?

Style should never ever be enforced by a human; if the code passed
the pep8 job, then it's acceptable.


 Quoting PEP-257: The BDFL [3] recommends inserting a blank line between the
 last paragraph in a multi-line docstring and its closing quotes, placing the
 closing quotes on a line by themselves. This way, Emacs' fill-paragraph
 command can be used on it.

 Sample from pep257 (with extra blank line):

 def complex(real=0.0, imag=0.0):
     """Form a complex number.

     Keyword arguments:
     real -- the real part (default 0.0)
     imag -- the imaginary part (default 0.0)

     """
     if imag == 0.0 and real == 0.0: return complex_zero
     ...


 The multi-line docstring example in
 http://docs.openstack.org/developer/hacking/ has no extra blank line before
 the ending triple-quotes:

 """A multi line docstring has a one-line summary, less than 80 characters.

 Then a new paragraph after a newline that explains in more detail any
 general information about the function, class or method. Example usages
 are also great to have here if it is a complex class for function.

 When writing the docstring for a class, an extra line should be placed
 after the closing quotations. For more in-depth explanations for these
 decisions see http://www.python.org/dev/peps/pep-0257/

 If you are going to describe parameters and return values, use Sphinx, the
 appropriate syntax is as follows.

 :param foo: the foo parameter
 :param bar: the bar parameter
 :returns: return_type -- description of the return value
 :returns: description of the return value
 :raises: AttributeError, KeyError
 """

 Regards,

 Ziad



[openstack-dev] [Neutron][third-party-testing] Third Party Test setup and details

2014-02-25 Thread Sukhdev Kapur
Fellow developers,

I just put together a wiki describing the Arista Third Party Setup.
In the attached document we provide a link to the modified Gerrit Plugin to
handle the regex matching for the "Comment Added" event so that
"recheck/reverify no bug" comments can be handled.

https://wiki.openstack.org/wiki/Arista-third-party-testing
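For anyone wiring up a similar trigger, the kind of pattern involved looks roughly like this. This is an illustrative guess at the shape of such a regex, not the actual pattern from the plugin or from upstream:

```python
import re

# Match Gerrit "Comment Added" bodies like "recheck no bug",
# "recheck bug 1234", or "reverify no bug".
RECHECK_RE = re.compile(r'^(recheck|reverify)(\s+(bug\s+\d+|no\s+bug))?\s*$')


def is_retrigger(comment):
    return bool(RECHECK_RE.match(comment.strip()))
```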

Have a look. Your feedback/comments will be appreciated.

regards..
-Sukhdev


Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Doug Hellmann
On Tue, Feb 25, 2014 at 3:47 PM, Sylvain Bauza sylvain.ba...@gmail.comwrote:

 Well, I agreed with the fact I switched some way the use of this feature
 to match my needs, but then let me ask you a quick question : how can
 handle WSME variable request body ?

 The first glance I have is that WSME is requiring a static API in terms of
 inputs, could you then confirm ?


Yes, that's correct.



 Do you have any idea on how I could get my goal, ie. having a static input
 plus some extra variable inputs ? I was also thinking about playing with
 __getattr__ and __setattr__ but I'm not sure the Registry could handle that.


Why don't you know what the data is going to look like before you receive
it?

One last important point, this API endpoint (Host) is admin-only in case of
 you mention the potential security issues it could lead.


My issue with this sort of API isn't security, it's that describing how to
use it for an end user is more difficult than having a clearly defined
static set of inputs and outputs.

Doug




 Thanks for your help,
 -Sylvain


 2014-02-25 18:55 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:

 OK, that's not how that feature is meant to be used.

 The idea is that on application startup plugins or extensions will be
 loaded that configure the extra attributes for the class. That happens one
 time, and the configuration does not depend on data that appears in the
 request itself.

 Doug


 On Tue, Feb 25, 2014 at 9:07 AM, Sylvain Bauza 
 sylvain.ba...@gmail.comwrote:

 Let me give you a bit of code then, that's currently WIP with heavy
 rewrites planned on the Controller side thanks to Pecan hooks [1]

 So, at L102 (GET request) the convert() method passes the result dict
 as kwargs, and the Host.__init__() method adds the dynamic attributes.
 That does work :-)

 At L108, I specify that my body string is basically a Host object.
 Unfortunately, while I can put extra keys in the body where I expect
 extra attributes to go, WSME converts the body into a Host [2], and
 since the Host class doesn't yet know which extra attributes are
 allowed, none of my extra keys are kept.
 As a result, the 'host' (instance of Host) argument of the post() method
 does not contain the extra attributes, and they are therefore not passed
 on to my Manager for creation.

 As said, I can still read the request body with Pecan directly within
 the post() method, but I would then have to manage the mimetype and add
 the extra attributes there myself. That's pretty ugly IMHO.

 Thanks,
 -Sylvain

 [1] http://paste.openstack.org/show/69418/

 [2] https://github.com/stackforge/wsme/blob/master/wsmeext/pecan.py#L71


 2014-02-25 14:39 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:




 On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza sylvain.ba...@gmail.com
  wrote:

 Hi,

 Thanks to WSME 0.6, it is now possible to add extra attributes
 to a Dynamic basetype.
 I successfully ended up exposing my extra attributes from a dict on a
 DynamicType using add_attributes(), but I'm now stuck with POST requests
 carrying dynamic body data.

 Although I declare my DynamicType in wsexpose(), I can't tell WSME to
 map the pecan.request.body dict onto my wsattrs and create new
 attributes for the keys that don't match.

 Any idea on how to do this? I looked at WSME, and the type is
 registered at API startup, not when being called, so the get_arg() method
 fails to fill in the gaps.

 I could work around this within my post function by introspecting
 pecan.request.body and adding the extra attributes there, but that
 sounds a bit crappy as I would have to handle the mimetype already
 managed by WSME.


 I'm not sure I understand the question. Are you saying that the dynamic
 type feature works for GET arguments but not POST body content?

 Doug





 Thanks,
 -Sylvain



Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-25 Thread Sukhdev Kapur
Folks,

I just sent out another email.

Here is the link to the wiki which has details about this patch.

https://wiki.openstack.org/wiki/Arista-third-party-testing

Hope this helps.

-Sukhdev




On Fri, Feb 14, 2014 at 6:01 PM, Sukhdev Kapur
sukh...@aristanetworks.comwrote:




 On Thu, Feb 13, 2014 at 12:39 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-02-13 at 12:34 -0800, Sukhdev Kapur wrote:
  Jay,
 
  Just an FYI. We have modified the Gerrit plugin to accept/match a regex
  in order to generate notifications for recheck no bug/bug ### comments. It
  turned out to be a very simple fix, and we (Arista Testing) are now
  triggering on recheck comments as well.

 Thanks for the update, Sukhdev! Is this updated Gerrit plugin somewhere
 where other folks can use it?



 Yes, the patch is ready.  I am documenting it as part of an overall
 description of the Arista Testing Setup and will release it soon as part of
 the document that I am writing.
 Hopefully next week.

 regards..
 -Sukhdev







Re: [openstack-dev] [savanna] client 0.5.0 release

2014-02-25 Thread Sergey Lukjanov
Will be available in global requirements after merging
https://review.openstack.org/76357.

On Wed, Feb 26, 2014 at 12:50 AM, Sergey Lukjanov
slukja...@mirantis.com wrote:
 Hi folks,

 I'm glad to announce that python-savannaclient v0.5.0 has been released!

 pypi: https://pypi.python.org/pypi/python-savannaclient/0.5.0
 tarball: 
 http://tarballs.openstack.org/python-savannaclient/python-savannaclient-0.5.0.tar.gz
 launchpad: https://launchpad.net/python-savannaclient/0.5.x/0.5.0

 Notes:

 * it's the first release with a CLI covering nearly all features;
 * dev docs moved to client from the main repo;
 * support for all new Savanna features introduced in Icehouse release cycle;
 * single common entrypoint, currently savannaclient.client.Client('1.1');
 * auth improvements;
 * base resource class improvements;
 * 93 commits from the prev. release.

 Thanks.

 On Thu, Feb 20, 2014 at 3:53 AM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 Additionally, it contains support for the latest EDP features.


 On Thu, Feb 20, 2014 at 3:52 AM, Sergey Lukjanov slukja...@mirantis.com
 wrote:

 Hi folks,

 I'd like to make a 0.5.0 release of the savanna client soon; please share
 your thoughts about what should be included in it.

 Currently we have the following major changes/fixes:

 * mostly implemented CLI;
 * unified entry point for python bindings like other OpenStack clients;
 * auth improvements;
 * base resource class improvements.

 Full diff:
 https://github.com/openstack/python-savannaclient/compare/0.4.1...master

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.



 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
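The "single common entrypoint" noted in the release above follows the versioned-factory pattern common to OpenStack clients. A minimal pure-Python sketch of that pattern (illustrative only, not savannaclient's actual code) would be:

```python
# Versioned single-entrypoint client factory, in the spirit of
# savannaclient.client.Client('1.1'). Illustrative sketch only;
# class and parameter names here are hypothetical.
class HTTPClientV1_1:
    version = '1.1'

    def __init__(self, endpoint=None):
        self.endpoint = endpoint


_VERSION_MAP = {'1.1': HTTPClientV1_1}


def Client(version, **kwargs):
    """Return a client instance for the requested API version."""
    try:
        client_class = _VERSION_MAP[version]
    except KeyError:
        raise ValueError('Unsupported client version: %s' % version)
    return client_class(**kwargs)
```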



Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-25 Thread Mark McClain

On Feb 25, 2014, at 1:06 AM, Rabi Mishra ramis...@redhat.com wrote:

 Hi All,
 
 The 'subnet_id' attribute of the LBaaS Pool resource has been documented as
 "The network that pool members belong to".
 
 However, with the 'HAProxy' driver, it is possible to add members belonging
 to different subnets/networks to an LBaaS Pool.
 
Rabi-

The documentation is a bit misleading here.  The subnet_id in the pool is used 
to create the port that the load balancer instance uses to connect with the 
members.

mark




[openstack-dev] [Tripleo] tripleo-cd-admins team update / contact info question

2014-02-25 Thread Robert Collins
In the tripleo meeting today we re-affirmed that the tripleo-cd-admins
team is aimed at delivering production-availability clouds - that's how
we know the tripleo program is succeeding (or not!).

So if you're a member of that team, you're on the hook - effectively
on call, where production issues will take precedence over development
/ bug fixing etc.

We have the following clouds today:
cd-undercloud (baremetal, one per region)
cd-overcloud (KVM in the HP region, not sure yet for the RH region) -
multi region.
ci-overcloud (same as cd-overcloud, and will go away when cd-overcloud
is robust enough).

And we have two users:
 - TripleO ATCs, all of whom are eligible for accounts on *-overcloud
 - TripleO reviewers, indirectly via openstack-infra who provide 99%
of the load on the clouds

Right now when there is a problem, there's no clearly defined 'get
hold of someone' mechanism other than IRC in #tripleo.

And that's pretty good, since most of the admins are on IRC most of the time.

But.

There are two holes - a) what if it's Sunday evening :) and b) what if
someone (for instance Derek) has been troubleshooting a problem, but
needs to go do personal stuff, or, you know, sleep. There's no reliable,
defined handoff mechanism.

So - I think we need to define two things:
  - a stock way for $randoms to ask for support w/ these clouds that
will be fairly low latency and reliable.
  - a way for us to escalate to each other *even if folk happen to be
away from the keyboard at the time*.
And possibly a third:
  - a way for openstack-infra admins to escalate to us in the event of
OMG things happening. Like, we send 1000 VMs all at once at their git
mirrors or something.

And with that lets open the door for ideas!

-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] need advice on how to supply automated testing with bugfix patch

2014-02-25 Thread Chris Friesen


I'm in the process of putting together a bug report and a patch for 
properly handling resource tracking on live migration.


The change involves code that will run on the destination compute node 
in order to properly account for the resources that the instance to be 
migrated will consume.


Testing it manually is really simple: start with an instance on one 
compute node, check the hypervisor stats on the destination node, 
trigger a live migration, and immediately check the hypervisor stats 
again.  With the current code the stats don't update until the 
audit runs; with the patch they update right away.


I can see how to do a tempest testcase for this, but I don't have a good 
handle on how to set this up as a unit test.  I *think* it should be 
possible to modify _test_check_can_live_migrate_destination(), but it 
would mean setting up fake resource tracking and adding fake resources 
(cpu/memory/disk) to the fake instance being fake-migrated, and I don't 
have any experience with that.
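
To make the idea concrete, the kind of fake setup I imagine would look 
roughly like this (all names here are hypothetical stand-ins, not Nova's 
real classes or methods):

```python
import unittest


class FakeResourceTracker:
    """Toy stand-in for a destination node's resource tracker."""

    def __init__(self, free_ram_mb, free_disk_gb):
        self.free_ram_mb = free_ram_mb
        self.free_disk_gb = free_disk_gb

    def claim(self, instance):
        # Account for the incoming instance immediately,
        # without waiting for the periodic audit.
        self.free_ram_mb -= instance['memory_mb']
        self.free_disk_gb -= instance['root_gb']


def check_can_live_migrate_destination(tracker, instance):
    """Hypothetical destination-side check that claims resources."""
    tracker.claim(instance)
    return tracker.free_ram_mb >= 0 and tracker.free_disk_gb >= 0


class TestLiveMigrateDestinationClaim(unittest.TestCase):
    def test_stats_updated_before_audit(self):
        tracker = FakeResourceTracker(free_ram_mb=2048, free_disk_gb=40)
        instance = {'memory_mb': 512, 'root_gb': 10}
        self.assertTrue(
            check_can_live_migrate_destination(tracker, instance))
        # The claim is visible right away, before any audit runs.
        self.assertEqual(tracker.free_ram_mb, 1536)
        self.assertEqual(tracker.free_disk_gb, 30)
```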


Anyone have any suggestions?

Chris


