[openstack-dev] [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC

2014-03-18 Thread Isaku Yamahata
Hello. This is a reminder for servicevm framework IRC meeting.
date: March 18 (Tuesday) 23:00 UTC
channel: #openstack-meeting

The following is the proposed agenda.
Meeting wiki: https://wiki.openstack.org/wiki/Meetings/ServiceVM

* the current status summary
* decide the time/day/frequency

Thanks,
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Icehouse dependency freeze

2014-03-18 Thread Thomas Goirand
Hi,

We're now 1 month away from the scheduled release date. It is my strong
opinion (as the main Debian OpenStack package maintainer) that for the
last Havana release, the dependency freeze happened far too late,
creating issues that were hard to deal with on the packaging side. I
believe it would also be hard to deal with for Ubuntu people (with the
next LTS releasing soon).

I'd be in favor of freezing the dependencies for Icehouse *right now*
(including version updates which aren't packaged yet in Debian).
Otherwise, it may be very hard for me to get things past the FTP
masters' NEW queue in time for new packages.

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [nova] need help with unit test framework, trying to fix bug 1292963

2014-03-18 Thread Chris Friesen

On 03/17/2014 04:28 PM, Chris Friesen wrote:


The second one filters out all of the objects and returns nothing.

(Pdb) query_prefix.filter(models.Instance.vm_state != 
vm_states.SOFT_DELETED).all()
[]


I think I've found another problem.  (The rabbit hole continues...)

It appears that, by design, the comparison operators in SQL and 
sqlalchemy do not match NULL values, so the above filter will not return 
objects whose vm_state is NULL. This seems extra confusing in 
sqlalchemy since you can in fact use the comparison operators to 
explicitly test against None.


The problem is that in the Instance object the vm_state field is 
declared as nullable.  In many cases vm_state will in fact have a 
value, but in get_test_instance() in test/utils.py the value of 
vm_state is not specified.


Given the above, it seems that either we need to configure 
models.Instance.vm_state as not nullable (and deal with the fallout), 
or else we need to update instance_get_all_by_filters() to explicitly 
check for None--something like this perhaps:


if not filters.pop('soft_deleted', False):
    query_prefix = query_prefix.\
        filter(or_(models.Instance.vm_state != vm_states.SOFT_DELETED,
                   models.Instance.vm_state == None))
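The three-valued NULL semantics can be demonstrated with plain sqlite3
from the standard library (a minimal sketch; the table, the
'soft-delete' marker and the values are illustrative, not Nova's actual
schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER, vm_state TEXT)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [(1, "active"), (2, "soft-delete"), (3, None)])

# NULL != 'soft-delete' evaluates to NULL (not true), so row 3 is
# silently excluded from the result.
rows = conn.execute("SELECT id FROM instances "
                    "WHERE vm_state != 'soft-delete' "
                    "ORDER BY id").fetchall()

# An explicit IS NULL branch (the or_() approach) brings row 3 back.
rows_fixed = conn.execute("SELECT id FROM instances "
                          "WHERE vm_state != 'soft-delete' "
                          "OR vm_state IS NULL ORDER BY id").fetchall()
```

Here rows is [(1,)] while rows_fixed is [(1,), (3,)], which is the same
surprise the filter above runs into.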


I could do the latter--should I open another bug and make the fix for 
bug 1292963 dependent on the other fix going in first?


I wonder if we have other similar tests that might behave unexpectedly?

Chris



Re: [openstack-dev] [Solum] Proposed Core Reviewer Changes

2014-03-18 Thread Noorul Islam Kamal Malmiyoda
On Tue, Mar 18, 2014 at 10:43 AM, Adrian Otto adrian.o...@rackspace.com wrote:
 Solum Cores,

 I propose the following changes to the Solum core reviewer team:

 +gokrokve
 +julienvey
 +devdatta-kulkarni
 -kgriffs (inactivity)
 -russelb (inactivity)


+1 :)

Regards,
Noorul



Re: [openstack-dev] [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC

2014-03-18 Thread balaj...@freescale.com
Hi Isaku Yamahata,

Is it possible to have a convenient slot between 4:00 and 6:30 PM UTC,
so that folks from Asia can also join the meetings?

Regards,
Balaji.P

 -Original Message-
 From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com]
 Sent: Tuesday, March 18, 2014 11:35 AM
 To: OpenStack Development Mailing List
 Cc: isaku.yamah...@gmail.com
 Subject: [openstack-dev] [Neutron] advanced servicevm framework IRC
 meeting March 18(Tuesday) 23:00 UTC
 
 Hello. This is a reminder for servicevm framework IRC meeting.
 date: March 18 (Tuesday) 23:00 UTC
 channel: #openstack-meeting
 
 The following is the proposed agenda.
 Meeting wiki: https://wiki.openstack.org/wiki/Meetings/ServiceVM
 
 * the current status summary
 * decide the time/day/frequency
 
 Thanks,
 --
 Isaku Yamahata isaku.yamah...@gmail.com
 
 




[openstack-dev] [Neutron][OVS Agent]

2014-03-18 Thread Nader Lahouti
Hi All,

In a multi-node setup, I'm using Ml2Plugin (as the core plugin) and OVS
(OVSNeutronAgent) as the agent on compute nodes. From the controller I
need to call a *new method* on the agent (on all compute nodes, using
RPC) to perform a task (i.e. to communicate with an external process).
As I need to use OVSNeutronAgent, I am considering the following as
potential solutions for adding the new method to the agent:
1. Create new plugin based on existing OVS agent - That means cloning
OVSNeutronAgent and add the new method to that.
2. Create new plugin, which inherits OVSNeutronPlugin - the new plugin
defines the new method, setup_rpc,...
3. Add the new method to the existing OVSNeutronAgent
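As a rough illustration of option 3, the new endpoint could take the
shape sketched below. All names here (the stand-in class,
notify_external_process, the fanout wiring) are invented for
illustration; the real OVSNeutronAgent registers its endpoints through
setup_rpc()/oslo messaging.

```python
# Hypothetical sketch of option 3: a new RPC endpoint method on the agent.

class FakeOVSAgent(object):
    """Stand-in for the OVS agent's RPC endpoint object."""

    def notify_external_process(self, context, task_info):
        # On the real agent this is where you would talk to the external
        # process. The plugin reaches this method via a fanout cast on
        # the agent topic, so every compute node's agent runs the task.
        return "handled: %s" % task_info


# Plugin side (conceptual): a fanout cast delivers the call to all agents.
agents = [FakeOVSAgent(), FakeOVSAgent()]
results = [a.notify_external_process(context=None, task_info="sync")
           for a in agents]
```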

Please let me know your thoughts and comments.

Regards,
Nader.


Re: [openstack-dev] how to extend port capability using binding:profile

2014-03-18 Thread Irena Berezovsky
Hi Li Ma,
ML2 binding:profile is accessible for admin user only.
Currently it can be set via port-create/port-update CLI following this syntax:
'neutron port-create netX --binding:profile type=dict keyX=valX'
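For reference, `type=dict keyX=valX` is just a key/value list that ends
up as a JSON object in the request body. A minimal sketch of that
mapping (illustrative only, not the actual neutronclient parsing code;
the field names in the body are assumptions):

```python
def parse_dict_arg(arg):
    # "keyX=valX,keyY=valY" -> {"keyX": "valX", "keyY": "valY"}
    return dict(kv.split("=", 1) for kv in arg.split(","))

# Roughly the body the CLI call above would send to the API:
body = {"port": {"network_id": "netX",
                 "binding:profile": parse_dict_arg("keyX=valX")}}
```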

BR,
Irena

-Original Message-
From: Li Ma [mailto:m...@awcloud.com] 
Sent: Sunday, March 16, 2014 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] how to extend port capability using binding:profile

Hi all,

I'd like to extend port capability using ML2 binding:profile. I checked the 
official docs and it seems that there's no guide for it.

Is there any CLI support for port binding:profile?
Or is there any development guide on how to set up profile?

--
---
cheers,
Li Ma






Re: [openstack-dev] Icehouse dependency freeze

2014-03-18 Thread Andreas Jaeger
On 03/18/2014 07:14 AM, Thomas Goirand wrote:
 Hi,
 
 We're now 1 month away from the scheduled release date. It is my strong
 opinion (as the main Debian OpenStack package maintainer) that for the
 last Havana release, the dependency freeze happened far too late,
 creating issues that were hard to deal with on the packaging side. I
 believe it would also be hard to deal with for Ubuntu people (with the
 next LTS releasing soon).
 
 I'd be in favor of freezing the dependencies for Icehouse *right now*
 (including version updates which aren't packaged yet in Debian).
 Otherwise, it may be very hard for me to get things past the FTP
 masters' NEW queue in time for new packages.

I expect that a couple of python-PROJECTclient packages need to be
released first. I've seen changes to them that are not yet released and
that might be important for Icehouse to support new features.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [openstack-dev] [Mistral] Actions design BP

2014-03-18 Thread Renat Akhmerov
On 18 Mar 2014, at 01:32, Joshua Harlow harlo...@yahoo-inc.com wrote:
 To further this, let's continue working on 
 https://etherpad.openstack.org/p/taskflow-mistral and see if we can 
 align somehow

Sure.

 (I hope it's not too late to do this,

Never late IMO.

 seeing that there appears to be a lot of resistance to change from the 
 mistral community.

Could you please let us know how you reached this conclusion? I'm 
frankly surprised. It's not just curiosity; I want to keep improving at 
what I'm doing.


Generally, just to clarify the situation let me provide our vision of what 
we’re doing at the very high level.

As mentioned many times, we're still building a PoC. Yes, it turned out 
to take longer, which is totally fine since we've done a lot of research, 
lots of coding exercises, talks, and discussions with our customers. 
We've involved several new contributors from two different companies, 
and they have their requirements and use cases too. We've gathered a lot 
of specific requirements for what a workflow engine should be. This was 
the exact intention of that phase of the project: understand better what 
we should build. If you look at Mistral's list of blueprints you'll see 
around 40 of them, where 80-90% come from the real needs of real 
projects. And not everything is captured in BPs yet because some things 
are still not shaped well enough in our minds.

Thought #1: In the PoC we've been concentrating on use cases and 
requirements. Implementation has been secondary.

TaskFlow or anything else just hasn't mattered a lot so far. But, at the 
same time, I want to remind you that in December we tried to use TaskFlow 
to implement the very basic functionality in Mistral (only the 
dependency-based model). Honestly, we failed to produce a result we were 
satisfied with, since TaskFlow lacked, for example, the ability to run 
tasks in an asynchronous manner. This was not a problem at all; this is 
the real world. So I created a BP to address that problem in TaskFlow 
([0]), and we decided to proceed with an intent to rejoin later.
And maybe the most important reason not to use TaskFlow was that we 
wanted a clean research phase. We found it less productive to try to 
build a project around an existing library than to concentrate on use 
cases and high-level requirements. From my experience, it never works 
well, since your thinking always sticks to the limitations of that lib 
and the assumptions made in it.

So we actually talked to people a lot (including Josh) and provided this 
reasoning when the question was raised again. The reaction was nearly 
always positive and made a lot of sense to customers and developers.

Thought #2: A library shouldn't drive a project where it’s used.

Project needs should define the requirements for a library. Even when 
adopted in lots of places (which is apparently not true for TaskFlow 
right now, fully understandable for a pretty young project), a library 
is just one of many tools used within a project, not vice versa.

Thought #3. We 100% admit we’re not ready for incubation right now.

We have already prepared an incubation request with most of the formal 
requirements met by the project. And even though the interest in Mistral 
is serious (at least 5-6 projects intend to use it; one has already 
started playing and prototyping with Mistral), we want to be honest with 
the community: we're not really ready for incubation, since even our own 
solid understanding of the core requirements and API/DSL is only on its 
way. This is just one more argument that spending significant time on 
thinking about how to fit TaskFlow into our unstable Mistral vision is 
not affordable for us at the moment. Despite that, last week we started 
spending that time after Josh started bombing us with his concerns :).


If that’s all harsh, sorry. We’re interested in keeping in touch with Josh and 
others.

[0] https://blueprints.launchpad.net/taskflow/+spec/async-tasks

Renat Akhmerov
@ Mirantis Inc.



Re: [openstack-dev] [nova][baremetal] Status of nova baremetal and ironic

2014-03-18 Thread Zhongyue Luo
Hi,

If I were to implement a new BM driver, should I propose a BP to Ironic
rather than Nova? We are currently writing a driver internally using
nova-baremetal. My understanding is that nova-baremetal will only merge
critical bug fixes and new features will go into Ironic, correct? Thanks.



On Sat, Feb 8, 2014 at 1:37 AM, Chris K nobody...@gmail.com wrote:

 Hi Taurus,

 You are correct, Ironic is under rapid development; there are several
 patches in flight right now that are critical to Ironic. There have been
 successful tests using Ironic to boot both VMs and real hardware, but at
 this time I would say we are just about out of alpha and entering the
 beta stage. Ironic's goal is to graduate from incubation in the Icehouse
 cycle and start integration in the J cycle. We are working on a
 migration path from Nova-bm to Ironic as part of this graduation.
 Without knowing in what environment you are using Nova-bm it is very
 difficult to say when you should look at switching, but off the top of
 my head I would say: look at evaluating after the Icehouse release. I
 would add: check back often, things are changing quickly.


 Chris Krelle
 NobodyCam


 On Fri, Feb 7, 2014 at 1:58 AM, Taurus Cheung 
 taurus.che...@harmonicinc.com wrote:

 I am working on deploying images to bare-metal machines using nova
 bare-metal. So far working well.



 I know Ironic is under rapid development. Could I know the current status
 of Ironic and the suitable time to shift from nova baremetal to Ironic?



 Regards,

 Taurus








-- 
*Intel SSG/STO/DCST/CIT*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500


Re: [openstack-dev] MuranoPL questions?

2014-03-18 Thread Serg Melikyan
"Who is going to train people on MuranoPL, write language books and
tutorials, when the same amount of work has already been done for 10+
years for other languages?"
In any language, most of the time is spent not on learning language
constructs but on learning the base classes/functions provided by the
language. No generic Lua tutorial can help you leverage a
domain-specific language based on Lua.


On Mon, Mar 17, 2014 at 10:40 PM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  So I guess this is similar to the other thread.

  http://lists.openstack.org/pipermail/openstack-dev/2014-March/030185.html

  I know that the way YQL has done it could be a good example: the core
 DSL (the select queries and such) is augmented by the addition and usage
 of JS, for example
 http://developer.yahoo.com/yql/guide/yql-execute-examples.html#yql-execute-example-helloworld
 (ignore that it's XML, haha). Such usage already provides rate-limits
 and execution-limits (
 http://developer.yahoo.com/yql/guide/yql-execute-intro-ratelimits.html)
 and AFAIK if you do something like what YQL is doing, you don't need to
 recreate similar features in your DSL (and then you also don't need to
 teach people a new language and syntax and ...)

  Just an idea (I believe Lua offers similar controls/limits, although
 it's not as popular as JS, of course).

   From: Stan Lagun sla...@mirantis.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, March 17, 2014 at 3:59 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] MuranoPL questions?

 Joshua,

  Completely agree with you. We wouldn't be writing another language if
 we knew how any existing language could be used for this particular
 purpose. If anyone suggests such a language and shows how it can be used
 to solve the issues the DSL was designed to solve, we will consider
 dropping MuranoPL, np.

  Surely the DSL hasn't stood the test of time. It just hasn't had a
 chance yet. 100% of successful programming languages were in that
 position once.

  Anyway, now is the best time to come with your suggestions. If you
 know exactly how the DSL can be replaced or improved, we would like you
 to share it.


 On Wed, Mar 12, 2014 at 2:05 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  I guess I might be a bit biased toward programming, so maybe I'm not
 the target audience.

  I'm not exactly against DSLs; I just think that DSLs need to be really,
 really proven to become useful (in general this applies to any language
 that 'joe' comp-sci student can create). It's not that hard to just make
 one, but the real hard part is making one that people actually like and
 use and that survives the test of time. That's why I think it's just
 nicer to use languages that have already stood the test of time (if we
 can); creating a new DSL (MuranoPL seems to be slightly more than a DSL
 imho) means creating a new language that has not stood the test of time
 (in terms of lifetime, battle testing, support over years), so that's
 just the concern I have.

  Of course we have to accept innovation and I hope that the DSL/s makes
 it easier/simpler, I just tend to be a bit more pragmatic maybe in this
 area.

  Here's hoping for the best! :-)

  -Josh

   From: Renat Akhmerov rakhme...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 at 8:36 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] MuranoPL questions?

   Although a little bit verbose, it makes a lot of sense to me.

  @Joshua,

  Even assuming Python could be sandboxed, and whatever else is needed
 to use it as a DSL (for something like Mistral, Murano or Heat) is done,
 why do you think Python would be a better alternative for people who
 know neither these new DSLs nor Python itself? Especially given the
 fact that Python has A LOT of things they'd never use. I know many
 people who have been programming in Python for a while, and they admit
 they don't know all the nuances of Python and actually use 30-40% of
 its capabilities, even outside domain-specific development. So narrowing
 the feature set that a language provides and limiting it to a certain
 domain vocabulary is what helps people solve tasks of that specific
 domain much more easily and in the most expressive, natural way, without
 having to learn the tons and tons of details that a general-purpose
 language (GPL, hah :) ) provides (btw, the reason for thick books).

  I agree with Stan: if you begin to use a technology you'll have to
 learn something anyway, be it the TaskFlow API and principles or a DSL.
 A well-designed DSL just encapsulates the essential principles of the
 system it is used for. By learning the DSL you're learning the system
 itself, 

Re: [openstack-dev] [heatclient] Jenkins Fail (gate-python-heatclient-pypy FAILURE)

2014-03-18 Thread James Polley
I believe this is the issue being tracked in
https://bugs.launchpad.net/openstack-ci/+bug/1290562

The latest update to that issue has a suggested workaround.


On Tue, Mar 18, 2014 at 6:32 PM, 黎林果 lilinguo8...@gmail.com wrote:

 Hi,

    Anybody know about this Jenkins failure? I've encountered it in two
 patches.

 
  gate-python-heatclient-pypy
  http://logs.openstack.org/58/73558/7/check/gate-python-heatclient-pypy/5157eed
   FAILURE in 1m 28s

 error: option --single-version-externally-managed not recognized

 ERROR: could not install deps 
 [-r/home/jenkins/workspace/gate-python-heatclient-pypy/requirements.txt, 
 -r/home/jenkins/workspace/gate-python-heatclient-pypy/test-requirements.txt]


 http://logs.openstack.org/58/73558/7/check/gate-python-heatclient-pypy/5157eed/console.html

 https://review.openstack.org/#/c/73558/

 Thanks!



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Miguel Angel Ajo

Hi Joe, thank you very much for the positive feedback.

   I plan to spend a day this week on shedskin compatibility for
rootwrap (I'll branch it and tune/cut it down as necessary) to make it
compile under shedskin [1]; nothing is done yet.

   It's a short-term alternative until we can have a rootwrap agent,
together with its integration in neutron (for Juno). As for the
compiled rootwrap: if it works, and if it looks good security-wise,
we'd have a solution for Icehouse/Havana.

Help on [1] is really welcome ;-) I'm available in #openstack-neutron
as 'ajo'.

   Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap

On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
mangel...@redhat.com mailto:mangel...@redhat.com wrote:


I have included on the etherpad the option to write a sudo
plugin (or several) specific to openstack.


And this is a test with shedskin; I suppose that in more complicated
dependency scenarios it should perform better.

[majopela@redcylon tmp]$ cat <<EOF > test.py
import sys
print "hello world"
sys.exit(0)
EOF

[majopela@redcylon tmp]$ time python test.py
hello world

real0m0.016s
user0m0.015s
sys 0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support
https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
   * no logging
   * no six
   * no subprocess

* no *args support
   * https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

that being said I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh foo bar > /dev/null

real0m0.009s
user0m0.003s
sys 0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

print sys.argv
print os.system("ip a")

$ time ./tmp.py foo bar > /dev/null

min:
real0m0.016s
user0m0.004s
sys 0m0.012s

max:
real0m0.038s
user0m0.016s
sys 0m0.020s



shedskin tmp.py && make


$ time ./tmp foo bar > /dev/null

real0m0.010s
user0m0.007s
sys 0m0.002s



Based on these results I think a deeper dive into making rootwrap
support shedskin is worthwhile.





[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
100%
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make
g++  -O2 -march=native -Wno-deprecated  -I.
-I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc
-lpcre  -o test
[majopela@redcylon tmp]$ time ./test
hello world

real0m0.003s
user0m0.000s
sys 0m0.002s


- Original Message -
  We had this same issue with the dhcp-agent. Code was added that
paralleled
  the initial sync here: https://review.openstack.org/#/c/28914/
that made
  things a good bit faster if I remember correctly. Might be worth
doing
  something similar for the l3-agent.
 
  Best,
 
  Aaron
 
 
  On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon 
joe.gord...@gmail.com mailto:joe.gord...@gmail.com  wrote:
 
 
 
 
 
 
  On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon 
joe.gord...@gmail.com mailto:joe.gord...@gmail.com  wrote:
 
 
 
  I looked into the python to C options and haven't found anything
promising
  yet.
 
 
  I tried Cython and RPython on a trivial hello world app, but got
similar startup times to standard python.
 
  The one thing that did work was adding a '-S' when starting python.
 
  -S Disable the import of the module site and the site-dependent
manipulations
  of sys.path that it entails.
 
  Using 'python -S' didn't appear to help in devstack
 
  #!/usr/bin/python -S
  # PBR Generated from u'console_scripts'
 
  import sys
  import site
  site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
 
 
 
 
 
 
  I am not sure if we can do that for rootwrap.
 
 
  jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
  hello world
 
  real 0m0.021s
  user 0m0.000s
  sys 0m0.020s
  jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
  hello world
 
  real 0m0.021s
  user 0m0.000s
  sys 0m0.020s
  jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
  hello world
 
  real 0m0.010s
  user 0m0.000s
  sys 0m0.008s
 
  jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
  hello world
 
  

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Thierry Carrez
Joe Gordon wrote:
 And this is a test with shedskin, I suppose that in more complicated
 dependency scenarios it should perform better.
 
 [majopela@redcylon tmp]$ cat <<EOF > test.py
 import sys
 print "hello world"
 sys.exit(0)
 EOF
 
 [majopela@redcylon tmp]$ time python test.py
 hello world
 
 real0m0.016s
 user0m0.015s
 sys 0m0.001s
 
 This looks very promising!
 
 A few gotchas:
 
 * Very limited library support
 https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
   * no logging
   * no six
   * no subprocess
 
 * no *args support 
   * https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

This certainly looks promising enough to do a more complete
proof-of-concept around it. This adds packaging complexity and we are
likely to have only a subset of features available, but it may still be
worth it.

I filed the following session so that we can discuss it at the summit:
http://summit.openstack.org/cfp/details/97

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Automatic version creation in PBR

2014-03-18 Thread Thierry Carrez
Robert Collins wrote:
 If you set 'version' in setup.cfg, pbr's behaviour will not change at all.
 
 If you do not set 'version' in setup.cfg then:
  - for tagged commits, pbr's behaviour will not change at all.
  - for untagged commits, pbr will change from
 '$last_tag_version.$commit_count.g$sha' to
 '$next_highest_pre_release.dev$commit_count.g$sha'

That sounds sane to me. IIUC it shouldn't impact the release team. The
version number ends up being quite ugly, but in that precise case
prettiness is not a primary goal.
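For illustration, the two schemes produce strings shaped roughly like
this (a sketch of the formats quoted above; the example version numbers
are made up, and how pbr picks the next pre-release is glossed over):

```python
# Illustrative only: the rough shape of the two version formats.

def old_style(last_tag, commit_count, sha):
    # '$last_tag_version.$commit_count.g$sha'
    return "%s.%d.g%s" % (last_tag, commit_count, sha)

def new_style(next_pre_release, commit_count, sha):
    # '$next_highest_pre_release.dev$commit_count.g$sha'
    return "%s.dev%d.g%s" % (next_pre_release, commit_count, sha)

old = old_style("2014.1", 42, "abc1234")
new = new_style("2014.2.0", 42, "abc1234")
```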

It may impact packagers in some corner cases, so I'd engage with them
to check (#openstack-packaging?) *and*, as Doug recommends, wait until
after the icehouse release to make the change.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-18 Thread Huang Zhiteng
On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
zhangleiqi...@huawei.com wrote:
 From: Huang Zhiteng [mailto:winsto...@gmail.com]
 Sent: Tuesday, March 18, 2014 10:32 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
 Mapping

 On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
 zhangleiqi...@huawei.com wrote:
  Hi, stackers:
 
  With RDM, the storage logical unit number (LUN) can be directly
 connected to an instance from the storage area network (SAN).
 
  For most data center applications, including database, CRM and ERP
 applications, RDM can be used for configurations involving clustering
 between instances, between physical hosts and instances, or where
 SAN-aware applications are running inside an instance.
If 'clustering' here refers to things like a cluster file system, it 
requires LUNs to be connected to multiple instances at the same time. 
And since you mentioned Cinder, I suppose the LUNs (volumes) are managed 
by Cinder, so you have an extra dependency on the multi-attach
feature: https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.

 Yes. Clustering includes Oracle RAC, MSCS, etc. If they are to work in 
 an instance-based cloud environment, RDM and multi-attached volumes are 
 both needed.

 But RDM is not only used for clustering, and it has no dependency on 
 multi-attach-volume.

Set clustering use case and performance improvement aside, what other
benefits/use cases can RDM bring/be useful for?

  RDM, which permits the use of existing SAN commands, is
 generally used to improve performance in I/O-intensive applications and block
 locking. Physical mode provides access to most hardware functions of the
 storage system that is mapped.
 It seems to me that the performance benefit comes mostly from
 virtio-scsi, which is just a virtual disk interface and thus should
 benefit all virtual disk use cases, not just raw device mapping.
 
  For the libvirt driver, the RDM feature can be enabled through a lun
 device connected to a virtio-scsi controller:

  <disk type='block' device='lun'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/mapper/360022a11ecba5db427db0023'/>
    <target dev='sdb' bus='scsi'/>
    <address type='drive' controller='0' bus='0'/>
  </disk>

  <controller type='scsi' index='0' model='virtio-scsi'/>
 
  Currently, the related works in OpenStack are as follows:
  1. The block-device-mapping-v2 extension already supports the lun
 device with the scsi bus type listed above, but cannot make the disk use
 a virtio-scsi controller instead of the default lsi scsi controller.
  2. The libvirt-virtio-scsi-driver BP ([1]), whose milestone target is
 icehouse-3, aims to support generating a virtio-scsi controller when
 using an image with the virtio-scsi property, but it seems not to take
 boot-from-volume and attach-rdm-volume into account.
 
  I think it would be meaningful to provide full support for the RDM
 feature in OpenStack.
 
  Any thoughts? Welcome any advices.
 
 
  [1]
  https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-driver
  --
  zhangleiqiang (Trump)
 
  Best Regards
 



 --
 Regards
 Huang Zhiteng




-- 
Regards
Huang Zhiteng



Re: [openstack-dev] Constructive Conversations

2014-03-18 Thread Renat Akhmerov
100% support that.

Renat Akhmerov
@ Mirantis Inc.



On 18 Mar 2014, at 02:00, Adrian Otto adrian.o...@rackspace.com wrote:

 Kurt,
 
 I think that a set of community values for OpenStack would be a terrific 
 asset. I refer to values constantly as a way to align my efforts with the 
 needs of my company. I'd love to have the same tools for my contributions to 
 community efforts as well.
 
 Adrian
 
 On Mar 7, 2014, at 11:56 AM, Kurt Griffiths kurt.griffi...@rackspace.com 
 wrote:
 
 Folks,
 
 I’m sure that I’m not the first person to bring this up, but I’d like to get 
 everyone’s thoughts on what concrete actions we, as a community, can take to 
 improve the status quo.
 
 There have been a variety of instances where community members have 
 expressed their ideas and concerns via email or at a summit, or simply 
 submitted a patch that perhaps challenges someone’s opinion of The Right Way 
 to Do It, and responses to that person have been far less constructive than 
 they could have been[1]. In an open community, I don’t expect every person 
 who comments on a ML post or a patch to be congenial, but I do expect 
 community leaders to lead by example when it comes to creating an 
 environment where every person’s voice is valued and respected.
 
 What if every time someone shared an idea, they could do so without fear of 
 backlash and bullying? What if people could raise their concerns without 
 being summarily dismissed? What if “seeking first to understand”[2] were a 
 core value in our culture? It would not only accelerate our pace of 
 innovation, but also help us better understand the needs of our cloud users, 
 helping ensure we aren’t just building OpenStack in the right way, but also 
 building the right OpenStack.
 
 We need open minds to build an open cloud.
 
 Many times, we do have wonderful, constructive discussions, but the times we 
 don’t cause wounds in the community that take a long time to heal. 
 Psychologists tell us that it takes a lot of good experiences to make up for 
 one bad one. I will be the first to admit I’m not perfect. Communication is 
 hard. But I’m convinced we can do better. We must do better.
 
 How can we build on what is already working, and make the bad experiences as 
 rare as possible?
 
 A few ideas to seed the discussion:
 Identify a set of core values that the community already embraces for the 
 most part, and put them down “on paper.”[3] Leaders can keep these values 
 fresh in everyone’s minds by (1) leading by example, and (2) referring to 
 them regularly in conversations and talks.
 PTLs can add mentoring skills and a mindset of “seeking first to understand” 
 to their list of criteria for evaluating proposals to add a community member 
 to a core team.
 Get people together in person, early and often. Mid-cycle meetups and 
 mini-summits provide much higher-resolution communication channels than 
 email and IRC, and are great ways to clear up misunderstandings, build 
 relationships of trust, and generally get everyone pulling in the same 
 direction.
 What else can we do?
 
 Kurt
 
 [1] There are plenty of examples, going back years. Anyone who has been in 
 the community very long will be able to recall some to mind. Recent ones I 
 thought of include Barbican’s initial request for incubation on the ML, 
 dismissive and disrespectful exchanges in some of the design sessions in 
 Hong Kong (bordering on personal attacks), and the occasional “WTF?! This is 
 the dumbest idea ever!” patch comment.
 [2] https://www.stephencovey.com/7habits/7habits-habit5.php
 [3] We already have a code of conduct but I think a list of core values 
 would be easier to remember and allude to in day-to-day discussions. I’m 
 trying to think of ways to make this idea practical. We need to stand up for 
 our values, not just say we have them.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC

2014-03-18 Thread Isaku Yamahata
Hi Balaji.

Let's discuss and decide on the time at the meeting, as it is listed on the agenda.
Sorry for the inconvenience this first time.
Do you have any feedback other than the meeting time?

thanks,

On Tue, Mar 18, 2014 at 06:18:01AM +,
balaj...@freescale.com balaj...@freescale.com wrote:

 Hi Isaku Yamahata,
 
 Is it possible to have any convenient slot between 4.00 - 6.30 PM - UTC.
 
 So, that folks from asia can also join the meetings.
 
 Regards,
 Balaji.P
 
  -Original Message-
  From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com]
  Sent: Tuesday, March 18, 2014 11:35 AM
  To: OpenStack Development Mailing List
  Cc: isaku.yamah...@gmail.com
  Subject: [openstack-dev] [Neutron] advanced servicevm framework IRC
  meeting March 18(Tuesday) 23:00 UTC
  
  Hello. This is a reminder for servicevm framework IRC meeting.
  date: March 18 (Tuesday) 23:00 UTC
  channel: #openstack-meeting
  
  the followings are proposed as agenda.
  Meeting wiki: https://wiki.openstack.org/wiki/Meetings/ServiceVM
  
  * the current status summary
  * decide the time/day/frequency
  
  Thanks,
  --
  Isaku Yamahata isaku.yamah...@gmail.com
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-18 Thread Zhangleiqiang (Trump)
 From: Huang Zhiteng [mailto:winsto...@gmail.com]
 Sent: Tuesday, March 18, 2014 4:40 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
 Mapping
 
 On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
 zhangleiqi...@huawei.com wrote:
  From: Huang Zhiteng [mailto:winsto...@gmail.com]
  Sent: Tuesday, March 18, 2014 10:32 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
  Mapping
 
  On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
  zhangleiqi...@huawei.com wrote:
   Hi, stackers:
  
   With RDM, the storage logical unit number (LUN) can be
   directly
  connected to a instance from the storage area network (SAN).
  
   For most data center applications, including Databases, CRM
   and
  ERP applications, RDM can be used for configurations involving
  clustering between instances, between physical hosts and instances or
  where SAN-aware applications are running inside a instance.
  If 'clustering' here refers to things like cluster file system, which
  requires LUNs to be connected to multiple instances at the same time.
  And since you mentioned Cinder, I suppose the LUNs (volumes) are
  managed by Cinder, then you have an extra dependency for multi-attach
  feature:
 https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
 
  Yes.  Clustering include Oracle RAC, MSCS, etc. If they want to work in
 instance-based cloud environment, RDM and multi-attached-volumes are both
 needed.
 
  But RDM is not only used for clustering, and haven't dependency for
 multi-attach-volume.
 
 Set clustering use case and performance improvement aside, what other
 benefits/use cases can RDM bring/be useful for?

Thanks for your reply.

The advantages of raw device mapping all stem from its ability to pass SCSI 
commands through to the device, and the most common use cases are clustering 
and performance improvement, as mentioned above.

And besides these two scenarios, there is another use case: running SAN-aware 
applications inside instances, such as:
1. SAN management apps
2. Apps which can offload device-related work, such as snapshot, backup, 
etc., to the SAN. 


 
   RDM, which permits the use of existing SAN commands, is
  generally used to improve performance in I/O-intensive applications
  and block locking. Physical mode provides access to most hardware
  functions of the storage system that is mapped.
  It seems to me that the performance benefit mostly from virtio-scsi,
  which is just an virtual disk interface, thus should also benefit all
  virtual disk use cases not just raw device mapping.
  
   For libvirt driver, RDM feature can be enabled through the lun
  device connected to a virtio-scsi controller:
  
   <disk type='block' device='lun'>
     <driver name='qemu' type='raw' cache='none'/>
     <source dev='/dev/mapper/360022a11ecba5db427db0023'/>
     <target dev='sdb' bus='scsi'/>
     <address type='drive' controller='0' bus='0'/>
   </disk>
  
   <controller type='scsi' index='0' model='virtio-scsi'/>
  
   Currently,the related works in OpenStack as follows:
   1. block-device-mapping-v2 extension has already support
   the
  lun device with scsi bus type listed above, but cannot make the
  disk use virtio-scsi controller instead of default lsi scsi controller.
   2. libvirt-virtio-scsi-driver BP ([1]) whose milestone
   target is
  icehouse-3 is aim to support generate a virtio-scsi controller when
  using an image with virtio-scsi property, but it seems not to take
  boot-from-volume and attach-rdm-volume into account.
  
   I think it is meaningful if we provide the whole support
   for RDM
  feature in OpenStack.
  
   Any thoughts? Welcome any advices.
  
  
   [1]
   https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-dri
   ver
   --
   zhangleiqiang (Trump)
  
   Best Regards
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Regards
  Huang Zhiteng
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
 Regards
 Huang Zhiteng
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list

[openstack-dev] [qa] API schema location

2014-03-18 Thread Koderer, Marc
Hi Chris, hi all,

I just noticed that we have very similar interface definitions in
tempest/api_schema and etc/schemas:

https://github.com/openstack/tempest/tree/master/etc/schemas
https://github.com/openstack/tempest/tree/master/tempest/api_schema

Any objections if I move them to a single location? I'd prefer to use json as
the file format instead of .py. As a final goal we should find a way to merge
them completely, but I feel like this is something for the design summit ;)

Regards,
Marc


DEUTSCHE TELEKOM AG
Digital Business Unit, Cloud Services (PI)
Marc Koderer
Cloud Technology Software Developer
T-Online-Allee 1, 64211 Darmstadt
E-Mail: m.kode...@telekom.de
www.telekom.com   

LIFE IS FOR SHARING. 

DEUTSCHE TELEKOM AG
Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman)
Board of Management: René Obermann (Chairman),
Reinhard Clemens, Niek Jan van Damme, Timotheus Höttges,
Dr. Thomas Kremer, Claudia Nemat, Prof. Dr. Marion Schick
Commercial register: Amtsgericht Bonn HRB 6794
Registered office: Bonn

BIG CHANGES START SMALL – CONSERVE RESOURCES BY NOT PRINTING EVERY E-MAIL.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] python-saharaclient 0.6.0 release [savanna]

2014-03-18 Thread Sergey Lukjanov
Hi folks,

the first version of python-saharaclient has been released.

The main change is renaming all stuff from savanna to sahara. This
release contains backward compatibility for using it as savanna
client.

https://launchpad.net/python-saharaclient/0.6.x/0.6.0
https://pypi.python.org/pypi/python-saharaclient/0.6.0
http://tarballs.openstack.org/python-saharaclient/python-saharaclient-0.6.0.tar.gz

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] python-saharaclient 0.6.0 release [savanna]

2014-03-18 Thread Sergey Lukjanov
python-saharaclient addition change request is under review -
https://review.openstack.org/#/c/81083/

On Tue, Mar 18, 2014 at 1:45 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hi folks,

 the first version of python-saharaclient has been released.

 The main change is renaming all stuff from savanna to sahara. This
 release contains backward compatibility for using it as savanna
 client.

 https://launchpad.net/python-saharaclient/0.6.x/0.6.0
 https://pypi.python.org/pypi/python-saharaclient/0.6.0
 http://tarballs.openstack.org/python-saharaclient/python-saharaclient-0.6.0.tar.gz

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Icehouse dependency freeze

2014-03-18 Thread Thierry Carrez
Thomas Goirand wrote:
 We're now 1 month away from the scheduled release date. It is my strong
 opinion (as the main Debian OpenStack package maintainer) that for the
 last Havana release, the freeze of dependency happened really too late,
 creating issues hard to deal with on the packaging side. I believe it
 would be also hard to deal with for Ubuntu people (with the next LTS
 releasing soon).
 
 I'd be in the favor to freeze the dependencies for Icehouse *right now*
 (including version updates which aren't packaged yet in Debian).
 Otherwise, it may be very hard for me to get things pass the FTP masters
 NEW queue in time for new packages.

I'm all for it. In my view, dependency freeze should be a consequence of
feature freeze -- we should count any change that requires the addition
of a new dependency as a feature.

That said, the devil is in the details... There are bugs best fixed by
adding a library dep, there are version bumps, there are Oslo
libraries... I've added this topic for discussion at the Project/release
meeting today (21:00 UTC) so that we can hash out the details.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-18 Thread Sylvain Bauza
Hi Chris,


2014-03-18 0:36 GMT+01:00 Chris Friesen chris.frie...@windriver.com:

 On 03/17/2014 05:01 PM, Sylvain Bauza wrote:


 There are 2 distinct cases :
 1. there are multiple schedulers involved in the decision
 2. there is one single scheduler but there is a race condition on it



  About 1., I agree we need to see how the scheduler (and later on Gantt)
 could address decision-making based on distributed engines. At least, I
 consider the no-db scheduler blueprint responsible for using memcache
 instead of a relational DB could help some of these issues, as memcached
 can be distributed efficiently.


 With a central database we could do a single atomic transaction that looks
 something like select the first host A from list of hosts L that is not in
 the list of hosts used by servers in group G and then set the host field
 for server S to A.  In that context simultaneous updates can't happen
 because they're serialized by the central database.

 How would one handle the above for simultaneous scheduling operations
 without a centralized data store?  (I've never played with memcached, so
 I'm not really familiar with what it can do.)


See the rationale here for memcached-based scheduler :
https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
The idea is to leverage the capabilities of distributed memcached servers
with synchronization so that the decision would be scalable. As said in the
blueprint, another way would be to make use of RPC fanouts, but that's
something Openstack in general tries to avoid.




  About 2., that's a concurrency issue which can be addressed thanks to
 common practices for synchronizing actions. IMHO, a local lock can be
 enough for ensuring isolation


 It's not that simple though.  Currently the scheduler makes a decision,
 but the results of that decision aren't actually kept in the scheduler or
 written back to the db until much later when the instance is actually
 spawned on the compute node.  So when the next scheduler request comes in
 we violate the scheduling policy.  Local locking wouldn't help this.



Uh, you're right, I missed that crucial point. That said, we should consider
this as a classical problem of placement with deferred action. One
possibility would be to consider that the host is locked to this group at
the scheduling decision time, even if the first instance hasn't yet booted.
Consider it as a cache with TTL if you wish. Thus, that implies the
scheduler would need to have a feedback value from the compute node saying
that the instance really booted. If no ACK comes from the compute node,
once the TTL vanishes, the lock is freed.
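For the central-database variant Chris describes, the whole claim can be done as one serialized transaction. Below is a minimal sketch of that idea only; the table and column names are invented for illustration (this is not Nova's actual schema), and SQLite stands in for the real DB layer:

```python
import sqlite3

# Illustrative schema only -- not Nova's actual tables.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE hosts (name TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE servers (id TEXT PRIMARY KEY, grp TEXT, host TEXT)")
conn.executemany("INSERT INTO hosts VALUES (?)", [("h1",), ("h2",), ("h3",)])

def claim_host(conn, server_id, group):
    """Pick a host no other member of `group` uses and record the choice,
    all inside one write transaction."""
    conn.execute("BEGIN IMMEDIATE")   # serialize concurrent schedulers
    row = conn.execute(
        "SELECT name FROM hosts WHERE name NOT IN "
        "(SELECT host FROM servers WHERE grp = ?) "
        "ORDER BY name LIMIT 1", (group,)).fetchone()
    if row is None:
        conn.execute("ROLLBACK")
        raise RuntimeError("anti-affinity exhausted for group %s" % group)
    conn.execute("INSERT INTO servers VALUES (?, ?, ?)",
                 (server_id, group, row[0]))
    conn.execute("COMMIT")
    return row[0]

placed = [claim_host(conn, "s%d" % i, "g1") for i in range(3)]
print(placed)   # ['h1', 'h2', 'h3'] -- every server on a distinct host
```

Because the claim is recorded in the same transaction that makes the decision, a second scheduler blocks on the write lock and can never pick the same host for the group; the open question in the thread is how to get the equivalent guarantee from distributed memcached.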

-Sylvain



 Chris




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][OVS Agent]

2014-03-18 Thread Mathieu Rohon
Hi nader,

The easiest way would be to register a new RPC callback in the current
ovs agent. This is what we have done for the l2-pop MD, with fdb_add
and fdb_remove callbacks.
However, it could become a mess if every MD adds its own callbacks
directly into the code of the agent. The L2 agent should be able to load
drivers, which might register new callbacks.
This could potentially be something to do while refactoring the agent
: https://review.openstack.org/#/c/57627/
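To make the driver-loading idea concrete, it could look something like the sketch below. The class and method names here are hypothetical, not actual Neutron code; only the fdb_add callback name comes from the l2-pop example above:

```python
# Hypothetical sketch of "drivers register their own RPC callbacks"
# instead of patching the agent source directly.
class AgentRpcDispatcher(object):
    def __init__(self):
        self._callbacks = {}

    def register(self, topic, func):
        # Several drivers may register for the same RPC topic.
        self._callbacks.setdefault(topic, []).append(func)

    def dispatch(self, topic, **kwargs):
        for func in self._callbacks.get(topic, []):
            func(**kwargs)

class L2PopDriver(object):
    """A driver plugs its RPC entry points into the agent at load time."""
    def initialize(self, dispatcher):
        dispatcher.register("fdb_add", self.fdb_add)

    def fdb_add(self, fdb_entries):
        self.last = fdb_entries   # stand-in for real FDB programming

dispatcher = AgentRpcDispatcher()
driver = L2PopDriver()
driver.initialize(dispatcher)
dispatcher.dispatch("fdb_add", fdb_entries={"net1": ["00:11:22:33:44:55"]})
print(driver.last)   # {'net1': ['00:11:22:33:44:55']}
```

With this shape, Nader's new method would just be another driver registering another topic, rather than option 1 or 3 (cloning or patching OVSNeutronAgent).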

On Tue, Mar 18, 2014 at 7:42 AM, Nader Lahouti nader.laho...@gmail.com wrote:
 Hi All,

 In a multi-node setup, I'm using Ml2Plugin (as core plugin) and OVS
 (OVSNeutronAgent) as an agent on compute nodes. From the controller I need to
 call a *new method* on the agent (on all compute nodes, using RPC) to
 perform a task (i.e. to communicate with an external process). As I need to
 use OVSNeutronAgent, I am thinking the following as potential solution for
 adding the new method to the agent:
 1. Create a new plugin based on the existing OVS agent - that means cloning
 OVSNeutronAgent and adding the new method to it.
 2. Create new plugin, which inherits OVSNeutronPlugin - the new plugin
 defines the new method, setup_rpc,...
 3. Add the new method to the existing OVSNeutronAgent

 Please let me know your thought and comments.

 Regards,
 Nader.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] API schema location

2014-03-18 Thread Christopher Yeoh
On Tue, 18 Mar 2014 10:39:19 +0100
Koderer, Marc m.kode...@telekom.de wrote:
 
 I just recognized that we have very similar interface definitions in
 tempest/api_schema and etc/schema:
 
 https://github.com/openstack/tempest/tree/master/etc/schemas
 https://github.com/openstack/tempest/tree/master/tempest/api_schema
 
 Any objections if I move them to a single location? I'd prefer to use
 json as file format instead of .py. As final goal we should find a
 way how to merge them competently but I feel like this is something
 for the design summit ;)
 

Heh we just moved them but I'm open to other suggestions - they are
specific to API testing though aren't they? Long term the idea is that
they should be generated by Nova rather than tempest.  I think to
prevent unintentional changes we'd probably cache a copy in tempest
though rather than dynamically query them.

My feeling at the moment is that they should be .py files.
Because I think there's cases where we will want to have some schema
definitions based on others or share common parts and use bits of
python code to achieve this. For example, availability zone list and
detailed listing have a lot in common (detailed listing just has
more parameters). I think there'll be similar cases for v2 and v3
versions as well.  While we're still manually generating them and
keeping them up to date I think it's worth sharing as much as we can.
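As an illustration of the kind of sharing that .py files make easy (the schema content below is invented for the example, not copied from Tempest), a detailed-listing schema can be derived from the list schema instead of being duplicated:

```python
import copy

# Illustrative base schema -- made-up content, not Tempest's actual files.
list_availability_zones = {
    "status_code": [200],
    "response_body": {
        "type": "object",
        "properties": {
            "availabilityZoneInfo": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "zoneName": {"type": "string"},
                        "zoneState": {"type": "object"},
                    },
                    "required": ["zoneName", "zoneState"],
                },
            }
        },
        "required": ["availabilityZoneInfo"],
    },
}

# Detailed listing = deep copy of the base schema plus one extra
# per-zone property; a static .json file would have to repeat it all.
detail_availability_zones = copy.deepcopy(list_availability_zones)
zone_item = detail_availability_zones["response_body"]["properties"][
    "availabilityZoneInfo"]["items"]
zone_item["properties"]["hosts"] = {"type": ["object", "null"]}
zone_item["required"].append("hosts")
```

The deep copy keeps the two definitions independent, so tightening the base later automatically propagates to the detailed variant at import time.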

I agree there's a lot of commonality and we should long term just have
one schema definition. There's quite a bit to discuss (eg level of
strictness is currently different) in this area and a summit session
about it would be very useful.

Regards,

Chris

 Regards,
 Marc
 
 
 DEUTSCHE TELEKOM AG
 Digital Business Unit, Cloud Services (PI)
 Marc Koderer
 Cloud Technology Software Developer
 T-Online-Allee 1, 64211 Darmstadt
 E-Mail: m.kode...@telekom.de
 www.telekom.com   
 
 LIFE IS FOR SHARING. 
 
 DEUTSCHE TELEKOM AG
 Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman)
 Board of Management: René Obermann (Chairman),
 Reinhard Clemens, Niek Jan van Damme, Timotheus Höttges,
 Dr. Thomas Kremer, Claudia Nemat, Prof. Dr. Marion Schick
 Commercial register: Amtsgericht Bonn HRB 6794
 Registered office: Bonn
 
 BIG CHANGES START SMALL – CONSERVE RESOURCES BY NOT PRINTING EVERY
 E-MAIL.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Icehouse dependency freeze

2014-03-18 Thread Sean Dague
On 03/18/2014 06:12 AM, Thierry Carrez wrote:
 Thomas Goirand wrote:
 We're now 1 month away from the scheduled release date. It is my strong
 opinion (as the main Debian OpenStack package maintainer) that for the
 last Havana release, the freeze of dependency happened really too late,
 creating issues hard to deal with on the packaging side. I believe it
 would be also hard to deal with for Ubuntu people (with the next LTS
 releasing soon).

 I'd be in the favor to freeze the dependencies for Icehouse *right now*
 (including version updates which aren't packaged yet in Debian).
 Otherwise, it may be very hard for me to get things pass the FTP masters
 NEW queue in time for new packages.
 
 I'm all for it. In my view, dependency freeze should be a consequence of
 feature freeze -- we should count any change that requires the addition
 of a new dependency as a feature.
 
 That said, the devil is in the details... There are bugs best fixed by
 adding a library dep, there are version bumps, there are Oslo
 libraries... I've added this topic for discussion at the Project/release
 meeting today (21:00 UTC) so that we can hash out the details.

Things which are currently outstanding on freeze.

Upstream still requires - SQLA  0.8. Thomas has forked debian to allow
0.9. I think we should resolve that before release.

Trove turned out to not be participating in global requirements, and has
3 items outside of requirements.

I also think we probably need a larger rethink of the
global-requirements process because I see a lot of reviews bumping
minimum versions because some bugs are fixed upstream. And those all
seem to be sailing through. I think for incorrect reasons. No one's
objected at this point, so maybe that's ok. But it's probably worth a
huddle up.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] API schema location

2014-03-18 Thread Koderer, Marc
On Tue, 18 Mar 2014 12:00:00 +0100
Christopher Yeoh [mailto:cbky...@gmail.com] wrote:
 On Tue, 18 Mar 2014 10:39:19 +0100
 Koderer, Marc m.kode...@telekom.de wrote:
 
  I just recognized that we have very similar interface definitions in
  tempest/api_schema and etc/schema:
 
  https://github.com/openstack/tempest/tree/master/etc/schemas
  https://github.com/openstack/tempest/tree/master/tempest/api_schema
 
  Any objections if I move them to a single location? I'd prefer to use
  json as file format instead of .py. As final goal we should find a way
  how to merge them competently but I feel like this is something for
  the design summit ;)
 
 
 Heh we just moved them but I'm open to other suggestions - they are
 specific to API testing though aren't they? Long term the idea is that
 they should be generated by Nova rather than tempest.  I think to prevent
 unintentional changes we'd probably cache a copy in tempest though rather
 than dynamically query them.

Sorry that I didn't notice this review.
Both definitions are coupled to API testing, yes.

 
 My feeling at the moment is that they should be .py files.
 Because I think there's cases where we will want to have some schema
 definitions based on others or share common parts and use bits of python
 code to achieve this. For example, availability zone list and detailed
 listing have a lot in common (detailed listing just has more
 parameters). I think there'll be similar cases for v2 and v3 versions as
 well.  While we're still manually generating them and keeping them up to
 date I think it's worth sharing as much as we can.

Ok, understood. We just converted the negative testing
definitions to json files due to review findings.
It's just very confusing for new people if they see
two separate folders with schema definitions.

But unfortunately there isn't an easy way.

 
 I agree there's a lot of commonality and we should long term just have one
 schema definition. There's quite a bit to discuss (eg level of strictness
 is currently different) in this area and a summit session about it would
 be very useful.
 

+1

 Regards,
 
 Chris
 
  Regards,
  Marc
 
 
  DEUTSCHE TELEKOM AG
  Digital Business Unit, Cloud Services (PI) Marc Koderer Cloud
  Technology Software Developer T-Online-Allee 1, 64211 Darmstadt
  E-Mail: m.kode...@telekom.de
  www.telekom.com
 
  LIFE IS FOR SHARING.
 
  DEUTSCHE TELEKOM AG
  Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman) Board of
  Management: René Obermann (Chairman), Reinhard Clemens, Niek Jan van
  Damme, Timotheus Höttges, Dr. Thomas Kremer, Claudia Nemat, Prof. Dr.
  Marion Schick Commercial register: Amtsgericht Bonn HRB 6794
  Registered office: Bonn
 
  BIG CHANGES START SMALL – CONSERVE RESOURCES BY NOT PRINTING EVERY
  E-MAIL.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Globally-unique VM MAC address to do vendor-backed DHCP

2014-03-18 Thread Roman Verchikov
Hi stackers,

We’re trying to replace dnsmasq-supplied DHCP for tenant VMs with a vendor’s 
baremetal DHCP server. In order to pass the DHCP request to the vendor’s server and 
send the DHCP response back to the VM, we decided to add another OVS bridge (we called 
it br-dhcp), connected to the integration bridge (br-int), which will have OVS 
rules connecting the VM’s MAC address with the br-dhcp port. In this scenario the DHCP 
response will only find its way back to a VM if the VM has a globally-unique MAC 
address. 

My questions are: 
is having code which generates globally-unique MACs for VMs acceptable by the 
community at all?
is there a better solution to the problem (we also tried using dnsmasq as a 
DHCP relay there)?
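On the first question: a MAC that can never clash with any vendor-assigned (OUI) address only needs the locally-administered bit set and the multicast bit cleared in the first octet. The sketch below shows just that bit manipulation; it is not Neutron's actual allocator, and true global uniqueness would still require a duplicate check against the addresses already stored in the DB:

```python
import random

def random_laa_unicast_mac(rng=random):
    """Random MAC with the locally-administered bit (0x02) set and the
    multicast bit (0x01) cleared, so it cannot collide with vendor OUIs."""
    octets = [rng.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join("%02x" % o for o in octets)

mac = random_laa_unicast_mac()
first_octet = int(mac.split(":")[0], 16)
assert first_octet & 0x02        # locally administered
assert not (first_octet & 0x01)  # unicast
print(mac)
```

Randomness alone only makes collisions improbable across 46 free bits, so for the br-dhcp flow-matching to be safe the allocation should still be recorded and checked centrally.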

Thanks,
Roman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Updating libvirt in gate jobs

2014-03-18 Thread Davanum Srinivas
Hi Team,

We have 2 choices

1) Upgrade to libvirt 0.9.8+ (See [1] for details)
2) Enable UCA and upgrade to libvirt 1.2.2+ (see [2] for details)

For #1, we received a patched deb from @SergeHallyn/@JamesPage and ran
tests on it in review https://review.openstack.org/#/c/79816/
For #2, @SergeHallyn/@JamesPage have updated UCA
(precise-proposed/icehouse) repo and we ran tests on it in review
https://review.openstack.org/#/c/74889/

For IceHouse, my recommendation is to request Ubuntu folks to push the
patched 0.9.8+ version we validated to public repos; then we can
install/run gate jobs with that version. This is probably the smallest
risk of the 2 choices.

As soon as Juno begins, we can switch to 1.2.2+ on UCA and request Ubuntu
folks to push the verified version where we can use it.

WDYT?

thanks,
dims

[1] https://bugs.launchpad.net/nova/+bug/1254872
[2] https://bugs.launchpad.net/nova/+bug/1228977

-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat use as a standalone component for Cloud Managment over multi IAAS

2014-03-18 Thread sahid
Sorry for the late response,

I'm currently working on a project called Warm.
https://wiki.openstack.org/wiki/Warm

It is used as a standalone client and tries to deploy small OpenStack
environments from YAML templates. You can find some samples here:
https://github.com/sahid/warm-templates

s.

- Original Message -
From: Charles Walker charles.walker...@gmail.com
To: openstack-dev@lists.openstack.org
Sent: Wednesday, February 26, 2014 2:47:44 PM
Subject: [openstack-dev] [Heat]Heat use as a standalone component for Cloud 
Managment over multi IAAS

Hi,


I am trying to deploy the proprietary application made in my company on the
cloud. The prerequisite for this is to have an IAAS, which can be either a
public cloud or a private cloud (OpenStack is an option for a private IAAS).


The first prototype I made was based on a homemade python orchestrator and
apache libCloud to interact with IAAS (AWS and Rackspace and GCE).

The orchestrator part is Python code that reads a template file which
contains the info needed to deploy my application. This template file
indicates the number of VMs and the scripts associated with each VM type to
install it.
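The template-driven flow described above can be sketched in a few lines; the template structure and field names here are invented for illustration, not Charles's actual format:

```python
# Hypothetical deployment template: each entry names a VM type, how many
# instances to boot, and which install scripts to run on each.
template = {
    "vms": [
        {"name": "db",  "count": 1, "scripts": ["install_db.sh"]},
        {"name": "app", "count": 2, "scripts": ["install_app.sh"]},
    ]
}

def plan(template):
    """Expand the template into the flat list of instances to boot."""
    instances = []
    for vm in template["vms"]:
        for i in range(vm["count"]):
            instances.append({
                "name": "%s-%d" % (vm["name"], i),
                "scripts": vm["scripts"],
            })
    return instances

for inst in plan(template):
    print(inst["name"], inst["scripts"])
```

The orchestrator would then hand each planned instance to a libCloud driver (or, if Heat grows standalone/multi-cloud support, to a Heat resource) and run the scripts once the node is up.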


Now I was trying to have a look at existing open source tools to do the
orchestration part. I found JUJU (https://juju.ubuntu.com/) or HEAT (
https://wiki.openstack.org/wiki/Heat).

I am investigating HEAT more deeply and also had a look at
https://wiki.openstack.org/wiki/Heat/DSL which mentioned:

*Cloud Service Provider* - A service entity offering hosted cloud services
on OpenStack or another cloud technology. Also known as a Vendor.


I think HEAT in its current version will not match my requirement, but I have
the feeling that it is going to evolve and could cover my needs.


I would like to know if it would be possible to use HEAT as a standalone
component in the future (without Nova and other OpenStack modules)? The goal
would be to deploy an application from a template file on multiple cloud
services (like AWS, GCE).


Any feedback from people working on HEAT could help me.


Thanks, Charles.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Solum] Proposed Core Reviewer Changes

2014-03-18 Thread Roshan Agrawal
+ 1

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Tuesday, March 18, 2014 12:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Proposed Core Reviewer Changes

Solum Cores,

I propose the following changes to the Solum core reviewer team:

+gokrokve
+julienvey
+devdatta-kulkarni
-kgriffs (inactivity)
-russelb (inactivity)

Please reply with your +1 votes to proceed with this change, or any remarks to 
the contrary.

Thanks,

Adrian


Re: [openstack-dev] MuranoPL questions?

2014-03-18 Thread Ruslan Kamaldinov
Joshua, Clint,

The only platform I'm aware of which fully supports true isolation and which
has been used in production for this purpose is the Java VM. I know people who
developed systems for online programming competitions and really smart kids
tried to break it without any luck :)

We're speaking about Heat, Mistral and Murano DSLs, and all of them need an
execution engine. Do you think that the JVM could become a host for that engine?

JVM has a lot of potential:
- it allows to fine-tune security manager to allow/disallow specific actions
- it can execute a lot of programming languages - Python, Ruby, JS, etc
- it has been used in production for this specific purpose for years

But it also introduces another layer of complexity:
- it's another component to deploy, configure and monitor
- it's non-python, which means it should be supported by infra
- we will need to run java service and potentially have some java code to
  accept and process user code


Thanks,
Ruslan

On Mon, Mar 17, 2014 at 10:40 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 So I guess this is similar to the other thread.

 http://lists.openstack.org/pipermail/openstack-dev/2014-March/030185.html

 I know that the way YQL has provided it could be a good example; where the
 core DSL (the select queries and such) are augmented by the addition and
 usage of JS, for example
 http://developer.yahoo.com/yql/guide/yql-execute-examples.html#yql-execute-example-helloworld
 (ignore that its XML, haha). Such usage already provides rate-limits and
 execution-limits
 (http://developer.yahoo.com/yql/guide/yql-execute-intro-ratelimits.html) and
 afaik if something like what YQL does is adopted then u don't need to recreate
 similar features in your DSL (and then u also don't need to teach people
 about a new language and syntax and ...)

 Just an idea (I believe lua offers similar controls/limits.., although its
 not as popular of course as JS).

 From: Stan Lagun sla...@mirantis.com

 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, March 17, 2014 at 3:59 AM

 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] MuranoPL questions?

 Joshua,

 Completely agree with you. We wouldn't be writing another language if we
 knew how any of the existing languages could be used for this particular purpose.
 If anyone suggests such a language and shows how it can be used to solve those
 issues the DSL was designed to solve, we will consider dropping MuranoPL. np

 Surely the DSL hasn't stood the test of time. It just hasn't had a chance yet.
 100% of successful programming languages were in such a position once.

 Anyway, now is the best time to come with your suggestions. If you know how
 exactly the DSL can be replaced or improved, we would like you to share


 On Wed, Mar 12, 2014 at 2:05 AM, Joshua Harlow harlo...@yahoo-inc.com
 wrote:

 I guess I might be a bit biased to programming; so maybe I'm not the
 target audience.

 I'm not exactly against DSL's, I just think that DSL's need to be really
 really proven to become useful (in general this applies to any language that
 'joe' comp-sci student can create). Its not that hard to just make one, but
 the real hard part is making one that people actually like and use and
 survives the test of time. That's why I think its just nicer to use
 languages that have stood the test of time already (if we can), creating a
 new DSL (muranoPL seems to be slightly more than a DSL imho) means creating
 a new language that has not stood the test of time (in terms of lifetime,
 battle tested, supported over years) so that's just the concern I have.

 Of course we have to accept innovation and I hope that the DSL/s makes it
 easier/simpler, I just tend to be a bit more pragmatic maybe in this area.

 Here's hoping for the best! :-)

 -Josh

 From: Renat Akhmerov rakhme...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 at 8:36 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] MuranoPL questions?

 Although being a little bit verbose it makes a lot of sense to me.

 @Joshua,

 Even assuming Python could be sandboxed and whatever else that's needed to
 be able to use it as a DSL (for something like Mistral, Murano or Heat) is
 done, why do you think Python would be a better alternative for people who
 know neither these new DSLs nor Python itself? Especially given the
 fact that Python has A LOT of things that they'd never use. I know many
 people who have been programming in Python for a while and they admit they
 don't know all the nuances of Python and actually use 30-40% of all of its
 capabilities. Even not in domain specific development. So narrowing a
 feature set that a language provides and limiting it to a 

[openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Nadya Privalova
Hi folks,

I'd like to discuss Ceilometer's tempest situation with you.
Now we have several patch sets on review that test core functionality of
Ceilometer: notificaton and pollstering (topic
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/add-basic-ceilometer-tests,n,z).
But there is a problem: Ceilometer performance is very poor on mysql and
postgresql because of the bug
https://bugs.launchpad.net/ceilometer/+bug/1291054. Mongo behaves much
better even in a single thread and I hope that its performance will be
enough to successfully run the Ceilometer tempest tests.
Let me explain in a few words why the tempest tests are mostly performance
tests for Ceilometer. The thing is that the Ceilometer service is running
while all the other nova, cinder and so on tests run. All the tests create
instances and volumes, and each creation produces a lot of notifications. Each
notification is an entry in the database. So Ceilometer cannot process such a
big amount of notifications quickly. Ceilometer tests have the 'telemetry'
prefix, which means that they will be started last. And that makes the
situation even worse.
So my proposal:
1. create a non-voting job with Mongo-backend
2. make sure that tests pass on Mongo
3. merge tests to tempest but skip that on postgres and mysql till
bug/1291054 is resolved
4. make the new job 'voting'

The problem is only in the Mongo installation. I have a cr
https://review.openstack.org/#/c/81001/ that will allow us to install Mongo
from a deb. On the other hand there is
https://review.openstack.org/#/c/74889/ that enables UCA. I'm collaborating
with the infra team to make the decision ASAP because AFAIU we need tempest
tests in Icehouse (for more discussion you are welcome to the thread
[openstack-dev] Updating libvirt in gate jobs).

If you have any thoughts on this please share.

Thanks for attention,
Nadya


Re: [openstack-dev] Icehouse dependency freeze

2014-03-18 Thread Thomas Goirand
On 03/18/2014 02:51 PM, Andreas Jaeger wrote:
 On 03/18/2014 07:14 AM, Thomas Goirand wrote:
 Hi,

 We're now 1 month away from the scheduled release date. It is my strong
 opinion (as the main Debian OpenStack package maintainer) that for the
 last Havana release, the freeze of dependency happened really too late,
 creating issues hard to deal with on the packaging side. I believe it
 would be also hard to deal with for Ubuntu people (with the next LTS
 releasing soon).

 I'd be in the favor to freeze the dependencies for Icehouse *right now*
 (including version updates which aren't packaged yet in Debian).
 Otherwise, it may be very hard for me to get things pass the FTP masters
 NEW queue in time for new packages.
 
 I expect that a couple of python-PROJECTclient packages needs to be
 released first. I've seen changes to them that are not released and that
 might be important for Icehouse to support new features.

These aren't a problem for me; their version can change, and updating them
is quick and easy.

Thomas




Re: [openstack-dev] MuranoPL questions?

2014-03-18 Thread Renat Akhmerov
+1

Renat Akhmerov
@ Mirantis Inc.


Re: [openstack-dev] Icehouse dependency freeze

2014-03-18 Thread Thomas Goirand
On 03/18/2014 06:12 PM, Thierry Carrez wrote:
 Thomas Goirand wrote:
 We're now 1 month away from the scheduled release date. It is my strong
 opinion (as the main Debian OpenStack package maintainer) that for the
 last Havana release, the freeze of dependency happened really too late,
 creating issues hard to deal with on the packaging side. I believe it
 would be also hard to deal with for Ubuntu people (with the next LTS
 releasing soon).

 I'd be in the favor to freeze the dependencies for Icehouse *right now*
 (including version updates which aren't packaged yet in Debian).
 Otherwise, it may be very hard for me to get things pass the FTP masters
 NEW queue in time for new packages.
 
 I'm all for it. In my view, dependency freeze should be a consequence of
 feature freeze -- we should count any change that requires the addition
 of a new dependency as a feature.
 
 That said, the devil is in the details... There are bugs best fixed by
 adding a library dep, there are version bumps, there are Oslo
 libraries... I've added this topic for discussion at the Project/release
 meeting today (21:00 UTC) so that we can hash out the details.

There's a few level of difficulties.

1- Upgrading anything maintained by OpenStack (Oslo libs, python-client*
packages, etc.) isn't a problem.

2- Update for anything in the QA page of the OpenStack Debian packaging
team [1] is less of a problem.

3- Updating anything that is team-maintained in the Python Module team,
then I'm less comfortable.

4- Updating anything that is not maintained in any team in Debian is
problematic.

5- Adding a new Python module that doesn't exist in Debian at all for
the moment is *REALLY* a *BIG* issue, because it would go through the
FTP master new queue.

Not freezing dependencies for 1- until the release is OK; 2- should be
frozen at some point (let's say 2 weeks before the release?); for all
other cases, I think we shouldn't make modifications.

On 03/18/2014 07:28 PM, Sean Dague wrote:
 Things which are currently outstanding on freeze.

 Upstream still requires SQLA <= 0.8. Thomas has forked debian to
 allow 0.9. I think we should resolve that before release.

I of course agree with this.

 Trove turned out to not be participating in global requirements, and
 has 3 items outside of requirements.

Could you list them?

 I also think we probably need a larger rethink of the
 global-requirements process because I see a lot of reviews bumping
 minimum versions because some bugs are fixed upstream. And those all
 seem to be sailing through, I think for incorrect reasons. No one's
 objected at this point, so maybe that's ok. But it's probably worth a
 huddle up.

What would be the way to fix it then?

Cheers,

Thomas Goirand (zigo)

[1]
http://qa.debian.org/developer.php?login=openstack-de...@lists.alioth.debian.org




Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Sean Dague
On 03/18/2014 08:09 AM, Nadya Privalova wrote:
 Hi folks,
 
 I'd like to discuss Ceilometer's tempest situation with you.
 Now we have several patch sets on review that test core functionality of
 Ceilometer: notificaton and pollstering (topic
 https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/add-basic-ceilometer-tests,n,z).
 But there is a problem: Ceilometer performance is very poor on mysql and
 postgresql because of the bug
 https://bugs.launchpad.net/ceilometer/+bug/1291054. Mongo behaves much
 better even in single thread and I hope that it's performance will be
 enough to successfully run Ceilometer tempest tests.
 Let me explain in several words why tempest tests is mostly performance
 tests for Ceilometer. The thing is that Ceilometer service is running
 during all other nova, cinder and so on tests run. All the tests create
 instances, volumes and each creation produces a lot of notifications.
 Each notification is the entry to database. So Ceilometer cannot process
 such a big amount of notifications quickly. Ceilometer tests have
 'telemetry' prefix and it means that they will be started in the last
 turn. And it makes situation even worst.
 So my proposal:
 1. create a non-voting job with Mongo-backend
 2. make sure that tests pass on Mongo
 3. merge tests to tempest but skip that on postgres and mysql till
 bug/1291054 is resolved
 4. make the new job 'voting'
 
 The problem is only in Mongo installation. I have a cr
 https://review.openstack.org/#/c/81001/ that will allow us to install
 Mongo from deb. From the other hand there is
 https://review.openstack.org/#/c/74889/ that enables UCA. I'm
 collaborating with infra-team to make the decision ASAP because AFAIU we
 need tempest tests in Icehouse (for more discussion you are welcome to
 thread  [openstack-dev] Updating libvirt in gate jobs).
 
 If you have any thoughts on this please share.

There is a fundamental problem here that the Ceilometer team requires a
version of Mongo that's not provided by the distro. We've taken a pretty
hard line on not requiring newer versions of non python stuff than the
distros we support actually have.

And the SQL backend is basically unusable from what I can tell.

So I'm -2 on injecting an arbitrary upstream Mongo in devstack.

What is preventing Ceilometer from bringing back support for the mongo
that you can get from 12.04? That seems like it should be the much
higher priority item. Then we could actually be gating Ceilometer
features on what the platforms can actually support. Then I'd be happy
to support a Mongo job running in tests.

Once that was done, we can start unpacking some of the other issues.

I'm not sure how changing to using 4 cores in the gate is going to
reduce the list command from 120s to 2s, so that doesn't really seem to
be the core issue (and is likely to just cause db deadlocks).

As long as Ceilometer says it supports SQL backends, it needs to do so
in a sane way. So that should still be gating.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Solum][Oslo] Next Release of oslo.messaging?

2014-03-18 Thread Doug Hellmann
On Tue, Mar 18, 2014 at 1:37 AM, Angus Salkeld
angus.salk...@rackspace.comwrote:

 On 18/03/14 07:39 +0530, Noorul Islam Kamal Malmiyoda wrote:

 On Tue, Mar 18, 2014 at 4:59 AM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Doug Hellmann and Victror Stinner (+ oslo cores),

 Solum currently depends on a py33 gate. We want to use oslo.messaging,
 but are worried that in the current state, we will be stuck without py33
 support. We hope that by adding the Trollius code[1], and getting a new
 release number, that we can add the oslo.messaging library to our
 requirements and proceed with our async messaging plan.

 I am seeking guidance from you on when the above might happen. If it's a
 short time, we may just wait for it. If it's a long time, we may need to
 relax our py33 gate to non-voting in order to prevent breakage of our
 Stackforge CI while we work with the oslo.messaging code. We are also
 considering doing an ugly workaround of creating a bunch of worker
 processes on the same messaging topic until we can clear this concern.

 Thoughts?


 I think we should not make the python33 gate non-voting as we will miss
 out on issues related to others. We can toggle the oslo.messaging related
 tests to not run in python33.


 Even if we disable our tests, we can't add oslo.messaging to
 our requirements.txt as it can't even be installed.


Actually, Julien recently added support to pbr for separate requirements
files for python 3 (requirements-py3.txt and test-requirements-py3.txt). If
the python 3 file exists, the default file is not used at all, so it is
possible to list completely different requirements for python 2 and 3.
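The replace-not-merge behavior is worth underlining; here is a tiny sketch of the selection rule (a simplified model for illustration, not pbr's actual code; only the two file names follow the real convention):

```python
import os
import tempfile

def pick_requirements(py_major, directory):
    # Simplified model of pbr's selection rule: if a python-3-specific
    # requirements file exists, it fully replaces the default file;
    # the two lists are never merged.
    py3 = os.path.join(directory, "requirements-py3.txt")
    default = os.path.join(directory, "requirements.txt")
    if py_major >= 3 and os.path.exists(py3):
        return py3
    return default

d = tempfile.mkdtemp()
with open(os.path.join(d, "requirements.txt"), "w") as f:
    f.write("eventlet\n")
print(os.path.basename(pick_requirements(3, d)))  # requirements.txt
with open(os.path.join(d, "requirements-py3.txt"), "w") as f:
    f.write("trollius\n")
print(os.path.basename(pick_requirements(3, d)))  # requirements-py3.txt
print(os.path.basename(pick_requirements(2, d)))  # requirements.txt
```

So a project can list eventlet only for python 2 and trollius only for python 3, which is exactly the Solum/oslo.messaging situation discussed here.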

Doug




 The only practical solution I can see is to make py33 non-voting until
 oslo.messaging
 can handle py33.

 -Angus



 Regards,
 Noorul



Re: [openstack-dev] [nova] OPENSTACK SERVICE ERROR

2014-03-18 Thread Maldonado, Facundo N
Couple of questions,


-  Do you have n-cpu running in every node? (controller and both 
computes)

-  Which services do you have enabled in both compute nodes?

-  Which node does the provided screen log correspond to?

-  What is the output of nova-manage service list?

Facundo.

From: Ben Nemec [mailto:openst...@nemebean.com]
Sent: Monday, March 17, 2014 1:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] OPENSTACK SERVICE ERROR


First, please don't use all caps in your subject.  Second, please do use tags 
to indicate which projects your message relates to.  In this case, that appears 
to be nova.

As far as the error, it looks like you may have some out of date code.  The 
line referenced in the backtrace is now line 1049 in api.py, not 953.  That 
suggests to me that there have been some pretty significant changes since 
whatever git revision you're currently using.

-Ben

On 2014-03-15 08:15, abhishek jain wrote:
Hi all
I have installed openstack using devstack and nearly all of the functionality
is working fine. However I'm getting an issue during live migration.
I'm creating a stack of one controller node and two compute nodes, i.e. compute
node 1 and compute node 2. I'm booting one VM on compute node 1 and need to
migrate it to compute node 2.
For this I'm using NFS, which is working fine.

Also I have enabled 
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
 in /etc/nova.conf over both the compute nodes and over controller node.

However, when I run the nova live-migration command after restarting the
nova-compute service using a screen session, the VM is not able to migrate.
Below are the logs after restarting the nova-compute service:

16:33.500 2599 TRACE nova.openstack.common.rpc.amqp
2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 953, in 
_nw_info_build_network
2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
label=network_name,
2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp
2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
UnboundLocalError: local variable 'network_name' referenced before assignment
2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp
2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp
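For what it's worth, that UnboundLocalError is a generic Python failure mode rather than anything migration-specific: a local variable that is only assigned inside a conditional branch gets read on a path where no branch ran. A minimal standalone reproduction (hypothetical names, not the actual nova neutronv2 code):

```python
def build_network_label(network, networks):
    # network_name is only bound when the loop finds a match, so the
    # fall-through path reads a never-assigned local.
    for net in networks:
        if net["id"] == network["id"]:
            network_name = net["name"]
            break
    # BUG: if no match was found, network_name was never bound
    return network_name

def build_network_label_fixed(network, networks):
    network_name = None  # bind a default before the conditional paths
    for net in networks:
        if net["id"] == network["id"]:
            network_name = net["name"]
            break
    return network_name

nets = [{"id": "a", "name": "private"}]
print(build_network_label_fixed({"id": "a"}, nets))  # private
print(build_network_label_fixed({"id": "b"}, nets))  # None
try:
    build_network_label({"id": "b"}, nets)
except UnboundLocalError as exc:
    print("raises:", exc)
```

A fix along these lines (bind a default, or raise a meaningful error when the network lookup fails) is what landed upstream, which is also why updating to a newer revision, as Ben suggests, is the first thing to try.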

Also find the attached screenshot describing the complete error.
Please help regarding this.


Thanks
Abhishek Jain






Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Julien Danjou
On Tue, Mar 18 2014, Sean Dague wrote:

 There is a fundamental problem here that the Ceilometer team requires a
 version of Mongo that's not provided by the distro. We've taken a pretty
 hard line on not requiring newer versions of non python stuff than the
 distros we support actually have.

MongoDB 2.4 is in UCA for a while now. We just can't use it because of
libvirt bug https://bugs.launchpad.net/nova/+bug/1228977.

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [Solum][Oslo] Next Release of oslo.messaging?

2014-03-18 Thread Doug Hellmann
In addition to the installation requirements, we also need to deal with the
code-level changes. For example, when using the eventlet executor eventlet
needs to have been imported very early by the application so it can
monkeypatch the I/O libraries. When not using the eventlet executor, that
monkeypatching should not be done because it will interfere with the
regular I/O. So now we have an operation that needs to happen during code
initialization that is dictated by a configuration option (which executor
is being used) only available after all of the code initialization has
happened.
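The chicken-and-egg ordering can be sketched roughly like this (a hypothetical service bootstrap, not oslo.messaging's actual code): the monkeypatch decision has to be made from a peek at argv before the normal config machinery, and everything it imports, is loaded.

```python
def choose_executor(argv):
    # Peek at the raw command line before real config parsing: by the
    # time a proper config object exists, socket/threading have already
    # been imported unpatched.
    return "eventlet" if "--executor=eventlet" in argv else "blocking"

def bootstrap(argv):
    executor = choose_executor(argv)
    if executor == "eventlet":
        try:
            import eventlet
            eventlet.monkey_patch()  # must precede any other I/O imports
        except ImportError:
            pass  # eventlet unavailable; acceptable for this sketch
    # Only now is it safe to import and start the rest of the service.
    return executor

print(bootstrap(["svc", "--executor=eventlet"]))  # eventlet
print(bootstrap(["svc"]))                         # blocking
```

This is one reason dropping eventlet entirely once an asyncio/trollius executor works is attractive: the whole early-patching dance disappears.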

My first impression is that when we have an executor that works with
asyncio/trollius we will want to drop eventlet entirely, but that's just a
first impression. There may be similar trade-offs with the other libraries.

Victor, what do you think?

Doug


On Mon, Mar 17, 2014 at 9:52 PM, Davanum Srinivas dava...@gmail.com wrote:

 Adrian,

 We are too close to icehouse release candidates to bump up global
 requirements with new rev of oslo.messaging. So even if we all agree
 and cut a new rev of oslo.messaging it's too late for icehouse as
 release candidates are rolling next week.

 I'd definitely support a way to get python33 support via trollius or
 thread-exec that Joshua pointed out very soon. It may be a good idea
 to keep solum's py33 non-voting till we nail down all the other
 dependencies, so +1 for that as well.

 -- dims

 On Mon, Mar 17, 2014 at 7:29 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:
  Doug Hellmann and Victror Stinner (+ oslo cores),
 
  Solum currently depends on a py33 gate. We want to use oslo.messaging,
 but are worried that in the current state, we will be stuck without py33
 support. We hope that by adding the Trollius code[1], and getting a new
 release number, that we can add the oslo.messaging library to our
 requirements and proceed with our async messaging plan.
 
  I am seeking guidance from you on when the above might happen. If it's a
 short time, we may just wait for it. If it's a long time, we may need to
 relax our py33 gate to non-voting in order to prevent breakage of our
 Stackforge CI while we work with the oslo.messaging code. We are also
 considering doing an ugly workaround of creating a bunch of worker
 processes on the same messaging topic until we can clear this concern.
 
  Thoughts?
 
  Thanks,
 
  Adrian
 
  [1] https://review.openstack.org/70948



 --
 Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-18 Thread Russell Bryant
On 03/17/2014 01:54 PM, John Garbutt wrote:
 On 15 March 2014 18:39, Chris Friesen chris.frie...@windriver.com wrote:
 Hi,

 I'm curious why the specified git commit chose to fix the anti-affinity race
 condition by aborting the boot and triggering a reschedule.

 It seems to me that it would have been more elegant for the scheduler to do
 a database transaction that would atomically check that the chosen host was
 not already part of the group, and then add the instance (with the chosen
 host) to the group.  If the check fails then the scheduler could update the
 group_hosts list and reschedule.  This would prevent the race condition in
 the first place rather than detecting it later and trying to work around it.

 This would require setting the host field in the instance at the time of
 scheduling rather than the time of instance creation, but that seems like it
 should work okay.  Maybe I'm missing something though...
 
 We deal with memory races in the same way as this today, when they
 race against the scheduler.
 
 Given the scheduler split, writing that value into the nova db from
 the scheduler would be a step backwards, and it probably breaks lots
 of code that assumes the host is not set until much later.

This is exactly the reason I did it this way.  It fits the existing
pattern with how we deal with host scheduling races today.  We do the
final claiming and validation on the compute node itself and kick back
to the scheduler if something doesn't work out.  Alternatives are *way*
too risky to be doing in feature freeze, IMO.

I think it's great to see discussion of better ways to approach these
things, but it would have to be Juno work.
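[Editorial sketch] The atomic check-and-claim Chris proposes could look like this in miniature. This is illustrative only — sqlite3 stands in for Nova's database, and the table and function names are invented for the example, not actual Nova code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A UNIQUE constraint makes "check the host is not already used by the
# group, then record it" a single atomic step inside the database.
conn.execute("CREATE TABLE group_hosts (group_id TEXT, host TEXT, "
             "UNIQUE (group_id, host))")

def claim_host(group_id, host):
    """Return True if this scheduler won the claim, False on a race."""
    try:
        with conn:  # one transaction
            conn.execute("INSERT INTO group_hosts VALUES (?, ?)",
                         (group_id, host))
        return True
    except sqlite3.IntegrityError:
        return False  # another group member already claimed this host

print(claim_host("anti-affinity-g1", "compute1"))  # True
print(claim_host("anti-affinity-g1", "compute1"))  # False -> reschedule
```

Two racing schedulers cannot both succeed: the loser gets an integrity error and can pick another host, which is the prevention-over-detection trade-off discussed in the thread.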

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Sean Dague
On 03/18/2014 09:02 AM, Julien Danjou wrote:
 On Tue, Mar 18 2014, Sean Dague wrote:
 
 There is a fundamental problem here that the Ceilometer team requires a
 version of Mongo that's not provided by the distro. We've taken a pretty
 hard line on not requiring newer versions of non python stuff than the
 distros we support actually have.
 
 MongoDB 2.4 is in UCA for a while now. We just can't use it because of
 libvirt bug https://bugs.launchpad.net/nova/+bug/1228977.

We've not required UCA for any other project to pass the gate. So what
is the issue with Mongo 2.0.4 that makes it unsupportable in ceilometer?

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Julien Danjou
On Tue, Mar 18 2014, Sean Dague wrote:

 We've not required UCA for any other project to pass the gate. So what
 is the issue with Mongo 2.0.4 that makes it unsupportable in ceilometer?

We require features not present in MongoDB < 2.2.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-18 Thread Sylvain Bauza
2014-03-18 14:07 GMT+01:00 Russell Bryant rbry...@redhat.com:


 I think it's great to see discussion of better ways to approach these
 things, but it would have to be Juno work.


+1. There are various blueprints about the scheduler in progress, related
to either splitting it out or scaling it, and IMHO this concurrency problem
should be discussed during the Juno summit in order to make sure there
won't be duplicate efforts.

-Sylvain

--
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] pbr 0.7.0 released

2014-03-18 Thread Doug Hellmann
This is mostly a bug-fix release, but it does include some requirements
changes so we bumped the minor version number.

* Factor run_cmd out of the base class
* Return the real class in VersionInfo __repr__
* Fix up some docstrings
* Init sphinx config values before accessing them
* Remove copyright from empty files
* Declare support for Python versions in setup.cfg
* Updated from global requirements
* Remove unused _parse_mailmap()
* Add support for python 3-3.3
* Remove tox locale overrides
* Do not force log verbosity level to info

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Updating libvirt in gate jobs

2014-03-18 Thread Daniel P. Berrange
On Tue, Mar 18, 2014 at 07:50:15AM -0400, Davanum Srinivas wrote:
 Hi Team,
 
 We have 2 choices
 
 1) Upgrade to libvirt 0.9.8+ (See [1] for details)
 2) Enable UCA and upgrade to libvirt 1.2.2+ (see [2] for details)
 
 For #1, we received a patched deb from @SergeHallyn/@JamesPage and ran
 tests on it in review https://review.openstack.org/#/c/79816/
 For #2, @SergeHallyn/@JamesPage have updated UCA
 (precise-proposed/icehouse) repo and we ran tests on it in review
 https://review.openstack.org/#/c/74889/
 
 For IceHouse, my recommendation is to request Ubuntu folks to push the
 patched 0.9.8+ version we validated to public repos, then we can
 install/run gate jobs with that version. This is probably the smallest
 risk of the 2 choices.

If we've re-run the tests in that review enough times to be confident
we've had a chance of exercising the race conditions, then using the
patched 0.9.8 seems like a no-brainer. We know the current version in
ubuntu repos is broken for us, so the sooner we address that the better.

 As soon as Juno begins, we can switch 1.2.2+ on UCA and request Ubuntu
 folks to push the verified version where we can use it.

This basically re-raises the question of /what/ we should be testing in
the gate, which was discussed on this list a few weeks ago, and I'm not
clear that there was a definite decision in that thread

  http://lists.openstack.org/pipermail/openstack-dev/2014-February/027734.html

Testing the lowest vs highest is targeting two different scenarios

  - Testing the lowest version demonstrates that OpenStack has not
broken its own code by introducing use of a new feature.

  - Testing the highest version demonstrates that OpenStack has not
been broken by 3rd party code introducing a regression.

I think it is in scope for openstack to be targeting both of these
scenarios. For anything in-between though, it is up to the downstream
vendors to test their precise combination of versions. Currently though
our testing policy for non-python bits is whatever version ubuntu ships,
which may be neither the lowest or highest versions, just some arbitrary
version they wish to support. So this discussion is currently more of a
'what ubuntu version should we test on' kind of decision

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Heat] How to reliably detect VM failures?

2014-03-18 Thread Qiming Teng
Hi, Folks,

  I have been trying to implement a HACluster resource type in Heat. I
haven't created a blueprint for this because I am not sure everything
will work as expected.

  The basic idea is to extend the OS::Heat::ResourceGroup resource type
with inner resource types fixed to be OS::Nova::Server.  Properties for
this HACluster resource may include:

  - init_size: initial number of Server instances;
  - min_size: minimal number of Server instances;
  - sig_handler: a reference to a sub-class of SignalResponder;
  - zones: a list of strings representing the availability zones, which
  could be the names of the racks where the Server can be booted;
  - recovery_action: a list of supported failure recovery actions, such
  as 'restart', 'remote-restart', 'migrate';
  - fencing_options: a dict specifying what to do to shutdown the Server
  in a clean way so that data consistency in storage and network are
  reserved;
  - resource_ref: a dict for defining the Server instances to be
  created.

  Attributes of the HACluster may include:
  - refs: a list of resource IDs for the currently active Servers;
  - ips: a list of IP addresses for convenience.
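[Editorial sketch] The recovery_action dispatch described above can be sketched in plain Python. The action names follow the proposal; 'nova_api' and its methods are a hypothetical client wrapper (FakeNova is a stand-in so the sketch runs without OpenStack) — this is not actual Heat plugin code:

```python
# Recovery actions proposed for the HACluster resource.
RECOVERY_ACTIONS = ("restart", "remote-restart", "migrate")

def recover(server_id, action, nova_api):
    """Dispatch one failure-recovery action for a failed Server."""
    if action not in RECOVERY_ACTIONS:
        raise ValueError("unsupported recovery action: %s" % action)
    if action == "restart":
        return nova_api.reboot(server_id)      # in-place reboot
    if action == "remote-restart":
        return nova_api.evacuate(server_id)    # today's 'evacuate'
    return nova_api.migrate(server_id)         # move to another host

class FakeNova:  # stand-in client so the sketch is self-contained
    def reboot(self, sid): return ("reboot", sid)
    def evacuate(self, sid): return ("evacuate", sid)
    def migrate(self, sid): return ("migrate", sid)

print(recover("vm-1", "remote-restart", FakeNova()))  # ('evacuate', 'vm-1')
```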

  Note that the 'remote-restart' action above is today referred to as
'evacuate'.

  The most difficult issue here is to come up with a reliable VM failure
detection mechanism.  The service_group feature in Nova only concerns
the OpenStack services themselves, not the VMs.  Considering that
in our customer's cloud environment, user provided images can be used,
we cannot assume some agents in the VMs to send heartbeat signals.

  I have checked the 'instance' table in the Nova database; it seems that
the 'updated_at' column is only updated when the VM state changes and is
reported.  If 'heartbeat' messages come in from many VMs very
frequently, there could be a DB query performance/scalability issue,
right?

  So, how can I detect VM failures reliably, so that I can notify Heat
to take the appropriate recovery action?

Regards,
  - Qiming

Research Scientist
IBM Research - China
tengqim at cn dot ibm dot com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Oslo] Next Release of oslo.messaging?

2014-03-18 Thread victor stinner
Hi,

My patches for Oslo Messaging are naive and inefficient. They are just a first 
step to prepare OpenStack for Trollius. I chose to run asyncio event loop in 
its own dedicated thread and dispatch messages in a pool of threads. It would 
be nice to run asyncio event loop in the main thread, as the last blocker 
function of the main() function of applications. You can already pass your 
asyncio event loop with my patch, the thread is only created if you don't pass 
the loop.

I wrote that my patch is inefficient because it calls the Oslo Messaging listener
with a short timeout (ex: 500 ms) in a busy-loop. The timeout is needed to be 
able to exit the threads. Later, each driver should be modified to give access 
to the low-level file descriptor, so asyncio can react to ready to read and 
ready to write events.
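[Editorial sketch] The dedicated-thread arrangement Victor describes can be sketched with modern asyncio (Python 3 naming; Trollius-era code would use the trollius module). This illustrates the pattern only, not the actual patch:

```python
import asyncio
import threading

class LoopThread:
    """Run an asyncio event loop in its own dedicated thread; other
    threads hand coroutines over and can stop the loop cleanly."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def submit(self, coro):
        # Thread-safe hand-off; returns a concurrent.futures.Future.
        return asyncio.run_coroutine_threadsafe(coro, self.loop)

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)
        self._thread.join()

async def handle(msg):
    await asyncio.sleep(0)   # stand-in for real async message dispatch
    return msg.upper()

lt = LoopThread()
future = lt.submit(handle("ping"))
print(future.result(timeout=5))      # PING
lt.stop()
```

With real driver support for low-level file descriptors, the busy-loop polling with a timeout would disappear in favor of the loop reacting to readable/writable events directly.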

None of my Trollius patches has been merged yet. I have no experience of 
Trollius cooperating (or not) with eventlet yet.

There are different options:

- run asyncio tasks in an eventlet event loop
- run eventlet tasks in an asyncio event loop
- run asyncio and eventlet in two separated event loops (eventlet API is not 
written as an event loop), typically in two threads

The best would be to use asyncio with non-blocking file descriptors, without 
eventlet, and register these file descriptors in asyncio. But it cannot be done 
right now; it requires too many changes.

Sorry, I have no concrete code to show you right now.

I stopped working on Trollius in OpenStack because I was asked to wait after 
Icehouse release.

Victor

 In addition to the installation requirements, we also need to deal with the
 code-level changes. For example, when using the eventlet executor eventlet
 needs to have been imported very early by the application so it can
 monkeypatch the I/O libraries. When not using the eventlet executor, that
 monkeypatching should not be done because it will interfere with the
 regular I/O. So now we have an operation that needs to happen during code
 initialization that is dictated by a configuration option (which executor
 is being used) only available after all of the code initialization has
 happened.
 
 My first impression is that when we have an executor that works with
 asyncio/trollius we will want to drop eventlet entirely, but that's just a
 first impression. There may be similar trade-offs with the other libraries.
 
 Victor, what do you think?



 
 Doug
 
 
 On Mon, Mar 17, 2014 at 9:52 PM, Davanum Srinivas dava...@gmail.com wrote:
 
  Adrian,
 
  We are too close to icehouse release candidates to bump up global
  requirements with new rev of oslo.messaging. So even if we all agree
  and cut a new rev of oslo.messaging it's too late for icehouse as
  release candidates are rolling next week.
 
  I'd definitely support a way to get python33 support via trollius or
  thread-exec that Joshua pointed out very soon. It may be a good idea
  to keep solum's py33 non-voting till we nail down all other other
  dependencies, so +1 for that as well.
 
  -- dims
 
  On Mon, Mar 17, 2014 at 7:29 PM, Adrian Otto adrian.o...@rackspace.com
  wrote:
   Doug Hellmann and Victor Stinner (+ oslo cores),
  
   Solum currently depends on a py33 gate. We want to use oslo.messaging,
  but are worried that in the current state, we will be stuck without py33
  support. We hope that by adding the Trollius code[1], and getting a new
  release number, that we can add the oslo.messaging library to our
  requirements and proceed with our async messaging plan.
  
   I am seeking guidance from you on when the above might happen. If it's a
  short time, we may just wait for it. If it's a long time, we may need to
  relax our py33 gate to non-voting in order to prevent breakage of our
  Stackforge CI while we work with the oslo.messaging code. We are also
  considering doing an ugly workaround of creating a bunch of worker
  processes on the same messaging topic until we can clear this concern.
  
   Thoughts?
  
   Thanks,
  
   Adrian
  
   [1] https://review.openstack.org/70948
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Davanum Srinivas :: http://davanum.wordpress.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Updating libvirt in gate jobs

2014-03-18 Thread Solly Ross
I agree with Dan -- I think it's important to test on newer versions as well, 
considering we will have people running on other versions besides Ubuntu LTS -- 
Fedora 20, for instance, is on 1.1.3.4.

Additionally, considering bugs get fixed and features get implemented in each 
version of libvirt, we need to ensure that we *can* test code that uses 
features present in later versions of libvirt.  0.9.8 came out over two years 
ago making it fairly old.  I think it's important to keep up-to-date on what 
versions we test with.

Best Regards,
Solly Ross

- Original Message -
From: Daniel P. Berrange berra...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, March 18, 2014 10:11:54 AM
Subject: Re: [openstack-dev] Updating libvirt in gate jobs

On Tue, Mar 18, 2014 at 07:50:15AM -0400, Davanum Srinivas wrote:
 Hi Team,
 
 We have 2 choices
 
 1) Upgrade to libvirt 0.9.8+ (See [1] for details)
 2) Enable UCA and upgrade to libvirt 1.2.2+ (see [2] for details)
 
 For #1, we received a patched deb from @SergeHallyn/@JamesPage and ran
 tests on it in review https://review.openstack.org/#/c/79816/
 For #2, @SergeHallyn/@JamesPage have updated UCA
 (precise-proposed/icehouse) repo and we ran tests on it in review
 https://review.openstack.org/#/c/74889/
 
 For IceHouse, my recommendation is to request Ubuntu folks to push the
 patched 0.9.8+ version we validated to public repos, then we can
 install/run gate jobs with that version. This is probably the smallest
 risk of the 2 choices.

If we've re-run the tests in that review enough times to be confident
we've had a chance of exercising the race conditions, then using the
patched 0.9.8 seems like a no-brainer. We know the current version in
ubuntu repos is broken for us, so the sooner we address that the better.

 As soon as Juno begins, we can switch 1.2.2+ on UCA and request Ubuntu
 folks to push the verified version where we can use it.

This basically re-raises the question of /what/ we should be testing in
the gate, which was discussed on this list a few weeks ago, and I'm not
clear that there was a definite decision in that thread

  http://lists.openstack.org/pipermail/openstack-dev/2014-February/027734.html

Testing the lowest vs highest is targeting two different scenarios

  - Testing the lowest version demonstrates that OpenStack has not
broken its own code by introducing use of a new feature.

  - Testing the highest version demonstrates that OpenStack has not
been broken by 3rd party code introducing a regression.

I think it is in scope for openstack to be targeting both of these
scenarios. For anything in-between though, it is up to the downstream
vendors to test their precise combination of versions. Currently though
our testing policy for non-python bits is whatever version ubuntu ships,
which may be neither the lowest or highest versions, just some arbitrary
version they wish to support. So this discussion is currently more of a
'what ubuntu version should we test on' kind of decision

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] OPENSTACK SERVICE ERROR

2014-03-18 Thread abhishek jain
Hi Ben

Thanks. I'll take care of formatting in the future. Moreover, I'll pull the
latest code and will let you know the result soon.

Hi Facundo

Following are the answers of your questions...

Do you have n-cpu running in every node? (controller and both computes)

--YES I have n-cpu running in every node.

Which services do you have enabled in both compute nodes?

--n-novnc and n-cpu are the two services running at the two compute nodes.

   To which node corresponds the screen log provided?

-- The logs correspond to the compute node at which the n-cpu service
is started.

What is the output of nova-manage service list?

--Following is the output of nova-manage service list

nova-conductor   fedora5  internal
enabled:-)   2014-03-18 14:38:43
2014-03-18 15:38:54.553 DEBUG nova.servicegroup.api
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] Check if the given
member [{'binary': u'nova-cert', 'availability_zone': 'internal',
'deleted': 0L, 'created_at': datetime.datetime(2014, 3, 18, 11, 45),
'updated_at': datetime.datetime(2014, 3, 18, 14, 38, 51), 'report_count':
1043L, 'topic': u'cert', 'host': u'fedora5', 'disabled': False,
'deleted_at': None, 'disabled_reason': None, 'id': 2L}] is part of the
ServiceGroup, is up service_is_up /opt/stack/nova/nova/
servicegroup/api.py:94
2014-03-18 15:38:54.553 DEBUG nova.servicegroup.drivers.db
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] DB_Driver.is_up
last_heartbeat = 2014-03-18 14:38:51 elapsed = 3.553656 is_up
/opt/stack/nova/nova/servicegroup/drivers/db.py:71
nova-certfedora5  internal
enabled:-)   2014-03-18 14:38:51
2014-03-18 15:38:54.554 DEBUG nova.servicegroup.api
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] Check if the given
member [{'binary': u'nova-scheduler', 'availability_zone': 'internal',
'deleted': 0L, 'created_at': datetime.datetime(2014, 3, 18, 11, 45, 1),
'updated_at': datetime.datetime(2014, 3, 18, 14, 38, 42), 'report_count':
1042L, 'topic': u'scheduler', 'host': u'fedora5', 'disabled': False,
'deleted_at': None, 'disabled_reason': None, 'id': 3L}] is part of the
ServiceGroup, is up service_is_up /opt/stack/nova/nova/
servicegroup/api.py:94
2014-03-18 15:38:54.554 DEBUG nova.servicegroup.drivers.db
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] DB_Driver.is_up
last_heartbeat = 2014-03-18 14:38:42 elapsed = 12.554713 is_up
/opt/stack/nova/nova/servicegroup/drivers/db.py:71
nova-scheduler   fedora5  internal
enabled:-)   2014-03-18 14:38:42
2014-03-18 15:38:54.555 DEBUG nova.servicegroup.api
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] Check if the given
member [{'binary': u'nova-consoleauth', 'availability_zone': 'internal',
'deleted': 0L, 'created_at': datetime.datetime(2014, 3, 18, 11, 45, 6),
'updated_at': datetime.datetime(2014, 3, 18, 14, 38, 47), 'report_count':
1042L, 'topic': u'consoleauth', 'host': u'fedora5', 'disabled': False,
'deleted_at': None, 'disabled_reason': None, 'id': 4L}] is part of the
ServiceGroup, is up service_is_up /opt/stack/nova/nova/
servicegroup/api.py:94
2014-03-18 15:38:54.555 DEBUG nova.servicegroup.drivers.db
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] DB_Driver.is_up
last_heartbeat = 2014-03-18 14:38:47 elapsed = 7.555671 is_up
/opt/stack/nova/nova/servicegroup/drivers/db.py:71
nova-consoleauth fedora5  internal
enabled:-)   2014-03-18 14:38:47
2014-03-18 15:38:54.556 DEBUG nova.servicegroup.api
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] Check if the given
member [{'binary': u'nova-compute', 'availability_zone': 'nova', 'deleted':
0L, 'created_at': datetime.datetime(2014, 3, 18, 11, 45, 7), 'updated_at':
datetime.datetime(2014, 3, 18, 14, 38, 47), 'report_count': 1041L, 'topic':
u'compute', 'host': u'fedora5', 'disabled': False, 'deleted_at': None,
'disabled_reason': None, 'id': 5L}] is part of the ServiceGroup, is up
service_is_up /opt/stack/nova/nova/servicegroup/api.py:94
2014-03-18 15:38:54.556 DEBUG nova.servicegroup.drivers.db
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] DB_Driver.is_up
last_heartbeat = 2014-03-18 14:38:47 elapsed = 7.556588 is_up
/opt/stack/nova/nova/servicegroup/drivers/db.py:71
nova-compute fedora5  nova
enabled:-)   2014-03-18 14:38:47
2014-03-18 15:38:54.557 DEBUG nova.servicegroup.api
[req-84f2b3c1-8020-4f89-bd59-b9cb76126138 None None] Check if the given
member [{'binary': u'nova-compute', 'availability_zone': 'nova', 'deleted':
0L, 'created_at': datetime.datetime(2014, 3, 18, 11, 45, 8), 'updated_at':
datetime.datetime(2014, 3, 18, 14, 38, 48), 'report_count': 1041L, 'topic':
u'compute', 'host': u'fedora6', 'disabled': False, 'deleted_at': None,
'disabled_reason': None, 'id': 6L}] is part of the ServiceGroup, is up
service_is_up /opt/stack/nova/nova/servicegroup/api.py:94
2014-03-18 15:38:54.557 DEBUG nova.servicegroup.drivers.db

Re: [openstack-dev] Updating libvirt in gate jobs

2014-03-18 Thread Sean Dague
On 03/18/2014 10:11 AM, Daniel P. Berrange wrote:
 On Tue, Mar 18, 2014 at 07:50:15AM -0400, Davanum Srinivas wrote:
 Hi Team,

 We have 2 choices

 1) Upgrade to libvirt 0.9.8+ (See [1] for details)
 2) Enable UCA and upgrade to libvirt 1.2.2+ (see [2] for details)

 For #1, we received a patched deb from @SergeHallyn/@JamesPage and ran
 tests on it in review https://review.openstack.org/#/c/79816/
 For #2, @SergeHallyn/@JamesPage have updated UCA
 (precise-proposed/icehouse) repo and we ran tests on it in review
 https://review.openstack.org/#/c/74889/

 For IceHouse, my recommendation is to request Ubuntu folks to push the
 patched 0.9.8+ version we validated to public repos, then we can
 install/run gate jobs with that version. This is probably the smallest
 risk of the 2 choices.
 
 If we've re-run the tests in that review enough times to be confident
 we've had a chance of exercising the race conditions, then using the
 patched 0.9.8 seems like a no-brainer. We know the current version in
 ubuntu repos is broken for us, so the sooner we address that the better.
 
 As soon as Juno begins, we can switch 1.2.2+ on UCA and request Ubuntu
 folks to push the verified version where we can use it.
 
 This basically re-raises the question of /what/ we should be testing in
 the gate, which was discussed on this list a few weeks ago, and I'm not
 clear that there was a definite decision in that thread
 
   http://lists.openstack.org/pipermail/openstack-dev/2014-February/027734.html
 
 Testing the lowest vs highest is targeting two different scenarios
 
   - Testing the lowest version demonstrates that OpenStack has not
 broken its own code by introducing use of a new feature.
 
   - Testing the highest version demonstrates that OpenStack has not
 been broken by 3rd party code introducing a regression.
 
 I think it is in scope for openstack to be targeting both of these
 scenarios. For anything in-between though, it is up to the downstream
 vendors to test their precise combination of versions. Currently though
 our testing policy for non-python bits is whatever version ubuntu ships,
 which may be neither the lowest or highest versions, just some arbitrary
 version they wish to support. So this discussion is currently more of a
 'what ubuntu version should we test on' kind of decision

I think testing 2 versions of libvirt in the gate is adding a matrix
dimension that we currently can't really support. We're just going to
have to pick one per release and be fine with it (at least for icehouse).

If people want other versions tested, please come in with 3rd party ci
on it.

We can revisit the big test matrix at summit about the combinations
we're going to actually validate, because with the various limitations
we've got (concurrency limits, quota limits, upstream package limits,
kinds of tests we want to run) we're going to have to make a bunch of
compromises. Testing something new is going to require throwing existing
stuff out of the test path.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack/GSoC

2014-03-18 Thread Davanum Srinivas
Dear Students,

Student application deadline is on Friday, March 21 [1]

Once you finish the application process on the Google GSoC site.
Please reply back to this thread to confirm that all the materials are
ready to review.

thanks,
dims

[1] http://www.google-melange.com/gsoc/events/google/gsoc2014

-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-18 Thread Neal, Phil

 -Original Message-
 From: Tim Bell [mailto:tim.b...@cern.ch]
 Sent: Monday, March 17, 2014 2:04 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer
 resource_list CLI command
 
 
 At CERN, we've had similar issues when enabling telemetry. Our resource-list
 times out after 10 minutes when the proxies for HA assume there is no
 answer coming back. Keystone instances per cell have helped the situation a
 little so we can collect the data but there was a significant increase in 
 load on
 the API endpoints.
 
 I feel that some reference for production-scale validation would be beneficial
 as part of TC approval to leave incubation, in case there are issues such as
 this to be addressed.
 
 Tim
 
  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 17 March 2014 20:25
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer
 resource_list CLI command
 
 ...
 
  Yep. At ATT, we had to disable calls to GET /resources without any filters
 on it. The call would return hundreds of thousands of
  records, all being JSON-ified at the Ceilometer API endpoint, and the result
 would take minutes to return. There was no default limit
  on the query, which meant every single records in the database was
 returned, and on even a semi-busy system, that meant
  horrendous performance.
 
  Besides the problem that the SQLAlchemy driver doesn't yet support
 pagination [1], the main problem with the get_resources() call is
  the underlying databases schema for the Sample model is wacky, and
 forces the use of a dependent subquery in the WHERE clause
  [2] which completely kills performance of the query to get resources.
 
  [1]
 
 https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/
 impl_sqlalchemy.py#L436
  [2]
 
 https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/
 impl_sqlalchemy.py#L503
 
   The cli tests are supposed to be quick read-only sanity checks of the
   cli functionality and really shouldn't ever be on the list of slowest
   tests for a gate run.
 
  Oh, the test is readonly all-right. ;) It's just that it's reading hundreds 
  of
 thousands of records.
 
I think there was possibly a performance regression recently in
   ceilometer because from what I can tell this test used to normally take ~60
   sec.
   (which honestly is probably too slow for a cli test too) but it is
   currently much slower than that.
  
   From logstash it seems there are still some cases when the resource
   list takes as long to execute as it used to, but the majority of runs 
   take a
 long time:
   http://goo.gl/smJPB9
  
   In the short term I've pushed out a patch that will remove this test
   from gate
   runs: https://review.openstack.org/#/c/81036 But, I thought it would
   be good to bring this up on the ML to try and figure out what changed
   or why this is so slow.
 
  I agree with removing the test from the gate in the short term. Medium to
 long term, the root causes of the problem (that GET
  /resources has no support for pagination on the query, there is no default
 for limiting results based on a since timestamp, and that
  the underlying database schema is non-optimal) should be addressed.

Gordon has introduced a blueprint 
https://blueprints.launchpad.net/ceilometer/+spec/big-data-sql with some fixes 
for individual queries but +1 to the point of looking at re-architecting the 
schema as an approach to fixing performance. We've also seen some gains here at 
HP using batch writes as well but have temporarily tabled that work in favor of 
getting a better-performing schema in place.
- Phil
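[Editorial sketch] The two missing bounds Jay mentions — a default result limit and a since-timestamp filter — look like this in miniature. sqlite3 stands in for the real backend; the schema and function are invented for the example, not Ceilometer code:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id TEXT, last_sample_ts TEXT)")
now = datetime(2014, 3, 18)
# 1000 resources, one sample per hour going backwards in time.
rows = [("r%d" % i, (now - timedelta(hours=i)).isoformat())
        for i in range(1000)]
conn.executemany("INSERT INTO resource VALUES (?, ?)", rows)

def list_resources(since=None, limit=100):
    """Always bound the result set: a default 24h 'since' window plus a
    LIMIT, instead of returning every record like the unfiltered call."""
    since = since or (now - timedelta(hours=24)).isoformat()
    cur = conn.execute(
        "SELECT id FROM resource WHERE last_sample_ts >= ? "
        "ORDER BY last_sample_ts DESC LIMIT ?", (since, limit))
    return [r[0] for r in cur.fetchall()]

print(len(list_resources()))   # 25: only the last 24h, capped at 100
```

The same idea applies regardless of backend: an unbounded GET /resources scan is what made the CLI test one of the slowest in the gate.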

 
  Best,
  -jay
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Proposed Core Reviewer Changes

2014-03-18 Thread Russell Bryant
On 03/18/2014 01:13 AM, Adrian Otto wrote:
 Solum Cores,
 
 I propose the following changes to the Solum core reviewer team:
 
 +gokrokve
 +julienvey
 +devdatta-kulkarni
 -kgriffs (inactivity)
 -russelb (inactivity)
 
 Please reply with your +1 votes to proceed with this change, or any remarks 
 to the contrary.

Happy to be removed.  I haven't been active in the last few months.

Best wishes!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Proposed Core Reviewer Changes

2014-03-18 Thread Kurt Griffiths
+1

On 3/18/14, 12:13 AM, Adrian Otto adrian.o...@rackspace.com wrote:

Solum Cores,

I propose the following changes to the Solum core reviewer team:

+gokrokve
+julienvey
+devdatta-kulkarni
-kgriffs (inactivity)
-russelb (inactivity)

Please reply with your +1 votes to proceed with this change, or any
remarks to the contrary.

Thanks,

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Yuriy Taraday
On Mon, Mar 17, 2014 at 1:01 PM, IWAMOTO Toshihiro iwam...@valinux.co.jpwrote:

 I've added a couple of security-related comments (pickle decoding and
 token leak) on the etherpad.
 Please check.


Hello. Thanks for your input.

- We can avoid pickle using xmlrpclib.
- The token won't leak because we have a direct pipe to the parent process.

I'm in process of implementing it now so thanks for early notice.
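[Editorial sketch] The pickle-to-xmlrpclib swap is about deserialization safety: pickle.loads can be made to execute code chosen by the sender, while XML-RPC marshalling only reconstructs basic types. A minimal round-trip (Python 3 module name xmlrpc.client; Python 2 of the era used xmlrpclib — the method name and command are invented for illustration):

```python
import xmlrpc.client

# Marshal a command request with XML-RPC instead of pickle: only basic
# types round-trip, so a hostile peer cannot smuggle in objects whose
# deserialization runs arbitrary code.
payload = xmlrpc.client.dumps(
    (["ip", "netns", "exec", "qrouter-1", "ip", "a"],),
    methodname="run_command")
params, method = xmlrpc.client.loads(payload)
print(method)       # run_command
print(params[0])    # ['ip', 'netns', 'exec', 'qrouter-1', 'ip', 'a']
```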

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Can I use a new plugin based on Ml2Plugin instead of Ml2Plugin as core_plugin

2014-03-18 Thread Vinay Bannai
Can't access the BP. Says it is private.


On Mon, Mar 17, 2014 at 7:03 PM, Nader Lahouti nader.laho...@gmail.com wrote:

 Sure. I filed a new BP that addresses this issue:

 https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions

 Thanks,
 Nader.



 On Mon, Mar 17, 2014 at 3:26 PM, Kyle Mestery 
 mest...@noironetworks.com wrote:

 On Mon, Mar 17, 2014 at 4:53 PM, Nader Lahouti 
 nader.laho...@gmail.com wrote:

 Thanks Kyle for the reply.
 I added the code in the Ml2Plugin to include extensions in mechanism
 driver, if they exist.
 Hopefully I can commit it as part of this BP:

 https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support

 Great Nader! I think it makes more sense to have a new BP for this work,
 as it's not tied
 directly to the DFA work. Can you file one? Also, this will not land
 until Juno as we're in
 the RC for Icehouse now.

 Thanks,
 Kyle

  Thanks,
 Nader.



 On Mon, Mar 17, 2014 at 6:31 AM, Kyle Mestery mest...@noironetworks.com
  wrote:

 On Thu, Mar 13, 2014 at 12:07 PM, Nader Lahouti 
 nader.laho...@gmail.com wrote:

 -- edited the subject

 I'm resending this question.
 The issue is described in the email thread. In brief, I need to load
 new extensions, and it seems the mechanism driver does not support
 that. To do so, I was thinking of having a new ML2 plugin based on the
 existing Ml2Plugin, adding my changes there, and using it as core_plugin.
 Please read the email thread; I'd be glad to have your suggestions.

 Nader, as has been pointed out in the prior thread, it would be best
 to not write a
 new core plugin copied from ML2. A much better approach would be to
 work to
 make the extension loading function in the existing ML2 plugin, as this
 will
 benefit all users of ML2.

 Thanks,
 Kyle
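As a rough, hypothetical sketch of what "making extension loading work in the existing ML2 plugin" could look like (the attribute names below are illustrative, not Neutron's actual API):

```python
# Illustrative sketch only: an ML2-style plugin aggregating extension
# aliases advertised by its mechanism drivers, so a driver can add
# private extensions without forking the core plugin. The names
# (extension_aliases, supported_extension_aliases) are hypothetical,
# not the real Neutron/ML2 interfaces.
class MechanismDriver(object):
    """Base driver; subclasses may advertise extra extension aliases."""
    extension_aliases = []


class DfaMechanismDriver(MechanismDriver):
    extension_aliases = ["cisco-dfa"]


class Ml2LikePlugin(object):
    # Extensions the core plugin always supports.
    _base_aliases = ["provider", "external-net", "binding", "quotas"]

    def __init__(self, mechanism_drivers):
        self.mechanism_drivers = mechanism_drivers

    @property
    def supported_extension_aliases(self):
        aliases = list(self._base_aliases)
        for driver in self.mechanism_drivers:
            for alias in driver.extension_aliases:
                if alias not in aliases:  # dedupe across drivers
                    aliases.append(alias)
        return aliases
```

The open question from the thread (how extensions shared by some drivers but not others keep correct semantics) is not addressed by a sketch like this; it only shows the aggregation step.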



  On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti 
 nader.laho...@gmail.com wrote:

 1) Does it mean an interim solution is to have our own plugin (and
 have all the changes in it) and declare it as core_plugin instead of
 Ml2Plugin?

 2) The other issue, as I mentioned before, is that the extension(s) are
 not showing up in the result, for instance when create_network is called
 [*result = super(Ml2Plugin, self).create_network(context, network)*],
 and consequently they cannot be used in the mechanism drivers when needed.

 Looks like process_extensions was disabled when the fix for Bug
 1201957 was committed; here is the change.
 Any idea why it is disabled?

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957


 Add a relationship performing eager load in Port and Network

 models, thus preventing the 'extend' function from performing

 an extra database query.

 Also fixes a comment in securitygroups_db.py


 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa

 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
 Salvatore Orlando (salv-orlando) authored 8 months ago

 neutron/db/db_base_plugin_v2.py:

 @@ -995,7 +995,7 @@ def create_network(self, context, network):
                'status': constants.NET_STATUS_ACTIVE}
            network = models_v2.Network(**args)
            context.session.add(network)
 -          return self._make_network_dict(network)
 +          return self._make_network_dict(network, process_extensions=False)

        def update_network(self, context, id, network):
            n = network['network']

 ---


 Regards,
 Nader.





 On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura 
 kuk...@noironetworks.com wrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a mechanism
 driver.

 But another problem I think we have with ML2 plugin is the list
 extensions supported by default [1].
 The extensions should only load by MD and the ML2 plugin should only
 implement the Neutron core API.


 Keep in mind that ML2 supports multiple MDs simultaneously, so no
 single MD can really control what set of extensions are active. Drivers
 need to be able to load private extensions that only pertain to that
 driver, but we also need to be able to share common extensions across
 subsets of drivers. Furthermore, the semantics of the extensions need 
 to be
 correct in the face of multiple co-existing drivers, some of which know
 about the extension, and some of which don't. Getting this properly 
 defined
 and implemented seems like a good goal for juno.

 -Bob



  Any though ?
 Édouard.

  [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



 On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi,

 I think it is better to continue the discussion here. It is a good
 log :-)

 Eugene and I talked about a related topic (to allow drivers to load
 extensions) at the Icehouse Summit,
 but I did not have enough time to work on it during Icehouse.
 I am still interested in implementing it 

[openstack-dev] Hyper-V Meeting

2014-03-18 Thread Peter Pouliot
Hi All,

We are currently working through some issues with the CI today and therefore
will need to cancel the Hyper-V meeting for today.

We will resume again next week.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com



Re: [openstack-dev] [gantt] scheduler sub-group meeting 3/18 - agenda

2014-03-18 Thread Sylvain Bauza
Thanks to the attendees.

Below are the minutes of the meeting :
(16:55:45) openstack: Minutes:
http://eavesdrop.openstack.org/meetings/gantt/2014/gantt.2014-03-18-15.00.html
(16:55:46) openstack: Minutes (text):
http://eavesdrop.openstack.org/meetings/gantt/2014/gantt.2014-03-18-15.00.txt
(16:55:48) openstack: Log:
http://eavesdrop.openstack.org/meetings/gantt/2014/gantt.2014-03-18-15.00.log.html

Thanks,
-Sylvain


2014-03-17 15:45 GMT+01:00 Dugger, Donald D donald.d.dug...@intel.com:

  All-



 Just to be clear, Sylvain has agreed to host the meeting this week so it
 will proceed as scheduled.



 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 *From:* Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
 *Sent:* Monday, March 17, 2014 12:11 AM
 *To:* OpenStack Development Mailing List, (not for usage questions)
 *Subject:* Re: [openstack-dev] [gantt] scheduler sub-group meeting 3/18 -
 Cancel



 I can chair this one, no worries.

 I have the below topics in mind :
 - no-db scheduler blueprint
 - scheduler forklift efforts
 - open discussion

 Any other subjects to discuss ?

 -Sylvain

 Le 17 mars 2014 00:55, Dugger, Donald D donald.d.dug...@intel.com a
 écrit :



 I can't make the meeting this week so, unless someone else wants to
 volunteer to run the meeting, let's cancel this one.



 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786










Re: [openstack-dev] [Neutron][ML2] Can I use a new plugin based on Ml2Plugin instead of Ml2Plugin as core_plugin

2014-03-18 Thread Nader Lahouti
Vinay,

It shows it is public and everyone can see the info.

Thanks,
Nader.


On Tue, Mar 18, 2014 at 8:39 AM, Vinay Bannai vban...@gmail.com wrote:

 Can't access the BP. Says it is private.


 On Mon, Mar 17, 2014 at 7:03 PM, Nader Lahouti nader.laho...@gmail.com wrote:

 Sure. I filed a new BP that addresses this issue:

 https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions

 Thanks,
 Nader.



 On Mon, Mar 17, 2014 at 3:26 PM, Kyle Mestery 
 mest...@noironetworks.com wrote:

 On Mon, Mar 17, 2014 at 4:53 PM, Nader Lahouti 
 nader.laho...@gmail.com wrote:

 Thanks Kyle for the reply.
 I added the code in the Ml2Plugin to include extensions in mechanism
 driver, if they exist.
 Hopefully I can commit it as part of this BP:

 https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support

 Great Nader! I think it makes more sense to have a new BP for this
 work, as it's not tied
 directly to the DFA work. Can you file one? Also, this will not land
 until Juno as we're in
 the RC for Icehouse now.

 Thanks,
 Kyle

  Thanks,
 Nader.



 On Mon, Mar 17, 2014 at 6:31 AM, Kyle Mestery 
 mest...@noironetworks.com wrote:

 On Thu, Mar 13, 2014 at 12:07 PM, Nader Lahouti 
 nader.laho...@gmail.com wrote:

 -- edited the subject

 I'm resending this question.
 The issue is described in the email thread. In brief, I need to load
 new extensions, and it seems the mechanism driver does not support
 that. To do so, I was thinking of having a new ML2 plugin based on the
 existing Ml2Plugin, adding my changes there, and using it as core_plugin.
 Please read the email thread; I'd be glad to have your suggestions.

 Nader, as has been pointed out in the prior thread, it would be best
 to not write a
 new core plugin copied from ML2. A much better approach would be to
 work to
 make the extension loading function in the existing ML2 plugin, as
 this will
 benefit all users of ML2.

 Thanks,
 Kyle



  On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti 
 nader.laho...@gmail.com wrote:

 1) Does it mean an interim solution is to have our own plugin (and
 have all the changes in it) and declare it as core_plugin instead of
 Ml2Plugin?

 2) The other issue, as I mentioned before, is that the extension(s) are
 not showing up in the result, for instance when create_network is called
 [*result = super(Ml2Plugin, self).create_network(context, network)*],
 and consequently they cannot be used in the mechanism drivers when needed.

 Looks like process_extensions was disabled when the fix for Bug
 1201957 was committed; here is the change.
 Any idea why it is disabled?

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957


 Add a relationship performing eager load in Port and Network

 models, thus preventing the 'extend' function from performing

 an extra database query.

 Also fixes a comment in securitygroups_db.py


 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa

 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
 Salvatore Orlando (salv-orlando) authored 8 months ago

 neutron/db/db_base_plugin_v2.py:

 @@ -995,7 +995,7 @@ def create_network(self, context, network):
                'status': constants.NET_STATUS_ACTIVE}
            network = models_v2.Network(**args)
            context.session.add(network)
 -          return self._make_network_dict(network)
 +          return self._make_network_dict(network, process_extensions=False)

        def update_network(self, context, id, network):
            n = network['network']

 ---


 Regards,
 Nader.





 On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura 
 kuk...@noironetworks.com wrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a
 mechanism driver.

 But another problem I think we have with ML2 plugin is the list
 extensions supported by default [1].
 The extensions should only load by MD and the ML2 plugin should
 only implement the Neutron core API.


 Keep in mind that ML2 supports multiple MDs simultaneously, so no
 single MD can really control what set of extensions are active. Drivers
 need to be able to load private extensions that only pertain to that
 driver, but we also need to be able to share common extensions across
 subsets of drivers. Furthermore, the semantics of the extensions need 
 to be
 correct in the face of multiple co-existing drivers, some of which know
 about the extension, and some of which don't. Getting this properly 
 defined
 and implemented seems like a good goal for juno.

 -Bob



  Any though ?
 Édouard.

  [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



 On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki 
 amot...@gmail.com wrote:

 Hi,

 I think it is better to continue the discussion here. It is a good
 log :-)

 Eugene and I talked the related topic to 

Re: [openstack-dev] [neutron] Globally-unique VM MAC address to do vendor-backed DHCP

2014-03-18 Thread Mark McClain

On Mar 18, 2014, at 7:40 AM, Roman Verchikov 
rverchi...@mirantis.com wrote:

Hi stackers,

We’re trying to replace dnsmasq-supplied DHCP for tenant VMs with a vendor’s 
baremetal DHCP server. In order to pass DHCP request to a vendor’s server and 
send DHCP response back to VM we decided to add another OVS bridge (we called 
it br-dhcp), connected to integration bridge (br-int), which will have OVS 
rules connecting VM’s MAC address with br-dhcp port. In this scenario DHCP 
response will only find it’s way back to a VM if VM has globally-unique MAC 
address.

My questions are:

  *   is having code which generates globally-unique MACs for VMs acceptable by 
the community at all?

This question tends to pop up from time to time and there are valid deployment 
and usage scenarios where you would want to assign the same MAC to multiple 
ports.


  *   is there a better solution to the problem (we also tried using dnsmasq as 
a DHCP relay there)?

That answer really depends on a number of factors.
 - Are the IP allocations being handled inside or outside of Neutron?
 - Do you allow different networks to have overlapping IP ranges?
 -

If it is outside of the OpenStack deployment, then your code can use flow mods
with your br-dhcp. If Neutron is managing the allocations or you allow
overlapping IPs, you probably want to consider implementing a driver for the 
DHCP server.
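A minimal sketch of the MAC-generation side, assuming an OUI prefix plus a 24-bit host part (fa:16:3e is Neutron's default base_mac prefix; true global uniqueness across clouds would require a vendor-registered OUI instead):

```python
# Sketch: derive per-port MAC addresses from a fixed OUI prefix plus a
# 24-bit sequence number. "fa:16:3e" matches Neutron's default base_mac;
# a vendor-registered OUI would be needed for global uniqueness.
def make_mac(oui, index):
    if not 0 <= index < 2 ** 24:
        raise ValueError("only 24 bits available for the host part")
    return "%s:%02x:%02x:%02x" % (
        oui, (index >> 16) & 0xFF, (index >> 8) & 0xFF, index & 0xFF)
```

A per-deployment counter (or a hash of the port UUID, with collision checking) would supply the index; the important property for the br-dhcp flow rules is only that no two ports on the same integration bridge share a MAC.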

mark



Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Sean Dague
On 03/18/2014 12:09 PM, Chmouel Boudjnah wrote:
 
 On Tue, Mar 18, 2014 at 2:09 PM, Sean Dague s...@dague.net wrote:
 
 We've not required UCA for any other project to pass the gate.
 
 
 
 Is it that bad to have UCA in default devstack? As far as I know, UCA is
 the official way to do OpenStack on Ubuntu, right?

Currently we can't use it because libvirt in UCA remains too buggy to
run under the gate. If we had it turned on we'd see an astronomical
failure rate.

That is hopefully getting fixed, thanks to a lot of leg work by dims, as
it's required a lot of chasing.

However, I still believe UCA remains problematic, because our
experiences to date are basically that the entrance criteria for content
in UCA is clearly less than the base distro. And we are very likely to
be broken by changes put into it, as seen by the inability to run our
tests on top of it.

So I'm still -1 at this point on making UCA our default run environment
until it's provably functional for a period of time. Because working
around upstream distro breaks is no fun.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-18 Thread Brian Haley
On 03/17/2014 03:46 PM, Salvatore Orlando wrote:
 It is a common practice to have both an operational and an administrative 
 status.
 I agree ACTIVE as a term might result confusing. Even in the case of a port, 
 it
 is not really clear whether it means READY or LINK UP.
 Terminology-wise I would suggest READY rather than DEPLOYED, as it is a 
 term
 which makes sense for all resources, whereas the latter is probably a bit more
 suitable for high layer services.

Just some thoughts on this before you go and change the way it works :)

We've played with the admin state setting enough to think that more than two
states - True and False, could be useful.  For example, having an UP and
accepting new routers, UP but NOT accepting new routers, and DOWN seems to
be something useful for operators.  Whether those values are set via one flag or
two doesn't matter - perhaps one for UP/DOWN, the other to give the scheduler
hints is more useful?

That would allow an admin to say, set a limit on the number of resources
(routers/networks) on a network node, and take it out of rotation when the limit
is hit.

 In my opinion [2] putting a resource administratively down mean the user is
 deliberately deciding to disable that resource, and this goes beyond simply
 disabling its configuration, as mentioned in an earlier post. For instance,
 when a port is put administratively down, I'd expect it to not forward traffic
 anymore; similarly for a VIP.
 Hence, the reaction to putting a resource administratively down should that 
 its
 operational status goes down as well, and therefore there is no need for an
 explicit operational status ADMIN DOWN.
 This is, from what I can gather, what already happens with ports.
 The bug [1] is, in a way, an example of the above situation, since no action 
 is
 taken upon an object , in this case a network, being put administratively 
 down.
 
 However, since this is that time of the release cycle when we can use the
 mailing list to throw random ideas... what about doing an API change were we
 decide to put the administrative status on its way to deprecation? While it's 
 a
 common practice in network engineering to have an admin status, do we have a
 compelling use case for Neutron?
 I'm asking because 'admin_state_up' is probably the only attribute I've never
 updated on any resource since when I started using Quantum!

I've (unfortunately?) used it many times, for example during a High-Availability
event you might want to un-manage all the routers on a network node and have
them re-scheduled elsewhere.

Thanks,

-Brian

 Also, other IaaS network APIs that I am aware of ([3],[4],[5]) do not have 
 such
 concept; with the exception of [3] for the virtual router, if I'm not wrong.
 
 Thanks in advance for reading through my ramblings!
 Salvatore
 
 [1] https://bugs.launchpad.net/neutron/+bug/1237807
 [2] Please bear in mind that my opinion is wrong in most cases, or at least is
 different from that of the majority!
 [3] https://cloudstack.apache.org/docs/api/apidocs-4.2/TOC_Root_Admin.html
 [4] http://archives.opennebula.org/documentation:archives:rel2.0:api
 [5] http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API-ItemTypes.html
 
 
 
 On 17 March 2014 17:16, Eugene Nikanorov enikano...@mirantis.com wrote:
 
  Seems awkward to me, if an IPSec connection has a status of ACTIVE, but 
 an
 admin state of ADMIN DOWN.
 Right, you see, that's the problem. Constant name 'ACTIVE' makes you 
 expect
 that IPSec connection should work, while it is a deployment status.
 
  OK, so the change is merely change ACTIVE into DEPLOYED instead?
 We can't just rename the ACTIVE to DEPLOYED, and may be the latter is not
 the best name, but yes, that's the intent.
 
 Thanks,
 Eugene.
  
 
 
 On Mon, Mar 17, 2014 at 7:31 PM, Kyle Mestery mest...@noironetworks.com wrote:
 
 On Mon, Mar 17, 2014 at 8:36 AM, Eugene Nikanorov
 enikano...@mirantis.com wrote:
 
 Hi Kyle,
 
 
 
 
 
 
 It's a typical use case for network devices to have both admin
 and operational
 state. In the case of having admin_state=DOWN and
 operational_state=ACTIVE,
 this just means the port/link is active but has been 
 configured
 down. Isn't this
 the same for LBaaS here? Even reading the bug, the user has
 clearly configured
 the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's
 due to this
 configuration that the pool remains admin_state=DOWN.
 
 Am I missing something here?
 
 No, you're not. The user sees 'ACTIVE' status and think it
 contradicts 'DOWN' admin_state. 
 It's naming (UX problem), in my opinion.
 
 OK, so the 

[openstack-dev] Offset support in REST API pagination

2014-03-18 Thread Steven Kaufer


First, here is some background on this topic:
http://www.gossamer-threads.com/lists/openstack/dev/2777

Does anyone have any insight as to why offset is not supported in the REST
API calls that support pagination?   I realize that there are tradeoffs
when using an offset (vs. a marker), but I believe that there is value in
supporting both.  For example, if you want to jump to the n-th page of data
without having to traverse all of the previous pages.

Is there a reason why the APIs do not support either a marker or an offset
(obviously not both) on the API request?  It appears that sqlalchemy has
offset support.
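For comparison, the two styles can be sketched on a plain sorted list standing in for a SQLAlchemy query: marker-based paging continues from a known last-seen key and stays stable under concurrent inserts, while offset-based paging can jump straight to the n-th page but may skip or repeat rows if data is inserted or deleted underneath it.

```python
# Sketch of the two pagination styles on a sorted list; in SQLAlchemy
# terms these roughly correspond to
#   query.filter(Model.key > marker).limit(n)   (marker)
#   query.offset(o).limit(n)                    (offset)
def page_by_marker(rows, marker, limit):
    """Return up to 'limit' rows sorting after 'marker' (None = start)."""
    if marker is not None:
        rows = [r for r in rows if r > marker]
    return rows[:limit]


def page_by_offset(rows, offset, limit):
    """Jump straight to an arbitrary page, like SQL OFFSET/LIMIT."""
    return rows[offset:offset + limit]
```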

Also, it seems that cinder at least looks for the offset parameter (but
ignores it).  Does this mean that it was supported at one time but later
the support was removed?
https://github.com/openstack/cinder/blob/master/cinder/api/v2/volumes.py#L214

Thanks for the information.

Steven Kaufer


Re: [openstack-dev] [Heat] Stack breakpoint

2014-03-18 Thread Zane Bitter

On 17/03/14 21:18, Mike Spreitzer wrote:

Zane Bitter zbit...@redhat.com wrote on 03/17/2014 07:03:25 PM:

  On 17/03/14 17:03, Ton Ngo wrote:

   - How to handle resources with timer, e.g. wait condition:
  pause/resume
   timer value
 
  Handle it by only allowing pauses before and after. In most cases I'm
  not sure what it would mean to pause _during_.

I'm not sure I follow this part.  If at some time a timer is started,
and the event(s) upon which it is waiting are delayed by hitting a
breakpoint and waiting for human interaction --- I think this is the
scenario that concerned Ton.  It seems to me the right answer is that
all downstream timers have to stop ticking between break and resume.


Perhaps this was too general. To be specific, there is exactly one 
resource with a timer* - a WaitCondition. A WaitCondition is usually 
configured to be dependent on the server that should trigger it. Nothing 
interesting happens while a WaitCondition is waiting, so there is no 
point allowing a break point in the middle. You would either set the 
breakpoint after the server has completed or before the WaitCondition 
starts (which amount to the same thing, assuming no other dependencies). 
You could, in theory, set a breakpoint after the WaitCondition completes, 
though the use case for that is less obvious. In any event, at no time 
is the stack paused _while_ the WaitCondition is running, and therefore 
no need to use anything but wallclock time to determine the timeout.


cheers,
Zane.

* Technically there is another: autoscaling groups during update with an 
UpdatePolicy specified... however these use a nested stack, and the 
solution here is to use this same feature within the nested stack to 
implement the delays rather than complicate things in the stack 
containing the autoscaling group resource.




Re: [openstack-dev] [Congress] Policy types

2014-03-18 Thread Tim Hinrichs
Hi Prabhakar,

No IRC meeting this week.  Our IRC is every *other* week, and we had it last 
week.

Though there's been enough activity of late that maybe we should consider 
making it weekly.

I'll address the rest later.

Tim

- Original Message -
| From: prabhakar Kudva nandava...@hotmail.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Monday, March 17, 2014 7:45:53 PM
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| Hi Tim,
|  
| Definitely would like to learn more about the Data Integration component
| and how best to contribute. Can't find it in the latest git, perhaps we can
| discuss during the meeting tomorrow. Where can I find some code and/or
| document?
|  
| I like the idea of using an 'evaluate' wrapper.  And the idea of a builtin
| library of functions.  Having strict rules for builtins and an evaluate
| wrapper
| to do type checking (primitive or builtin) and so on reduces the silent
| errors or weird outcomes.
|  
| I also agree, raw Python checks like 'callable' (or other ways
| of checking if the function can be called in Python) may not always give
| definitive answers. Even the definition of callable says that a True result
| only means that the object 'appears' callable.
|  
| So silent and weird problems are possible if we try to execute. On the other
| hand, if we are looking at a module which contains only user-defined functions
| or builtins, i.e., there are strict rules about what exactly constitutes an
| allowed function, then there is a higher likelihood that it is correct.
| But still only a likelihood.
|  
| I like the 'evaluate' idea, which could do the checking within itself
| to make sure the function is indeed callable, so that silent and weird side
| effects are filtered inside the 'evaluate'. In addition, it can check whether
| the value is a primitive or some other valid format. Needs more thought on this.
|  
| Let's discuss implementation paths if possible at the meeting. Would like to
| carve out a small implementation goal either in the 'data integration' line
| or
| the discussion above.
|  
| Prabhakar
|  
|  
|  
| 
|  
|  Date: Mon, 17 Mar 2014 08:57:05 -0700
|  From: thinri...@vmware.com
|  To: openstack-dev@lists.openstack.org
|  Subject: Re: [openstack-dev] [Congress] Policy types
|  
|  Hi Prabhakar,
|  
|  One big piece we're missing in terms of code right now is the Data
|  Integration component.  The job of this component is to integrate data
|  sources available in the cloud so that tables like nova:virtual_machine,
|  neutron:owner, etc. reflect the information stored in Nova, Neutron, etc.
|  Rajdeep is making progress on that (he's got some code up on review that
|  we're iterating on), and Peter had worked on integrating AD a while back.
|  
|  Typically I've seen the Python functions (which I usually call 'builtins')
|  integrated into a Datalog system have explicit declarations (e.g. inputs,
|  outputs, etc.).  This is good if we need to do deeper analysis of the
|  policy (which is one of the reasons to use a policy language) and
|  typically requires information about how that builtin works.
|  
|  I dug through some old (Lisp) code to see how I've done this in the past.
|  
|  // (defbuiltin datalog-name lisp-function list of types of args list
|  of types of returns [internal])
|  (defbuiltin plus + (integer integer) integer)
|  (defbuiltin minus - (integer integer) integer)
|  (defbuiltin times * (integer integer) integer)
|  (defbuiltin div (lambda (x y) (floor (/ x y))) (integer integer) integer)
|  (defbuiltin lt numlessp (integer integer) nil)
|  (defbuiltin lte numleqp (integer integer) nil)
|  (defbuiltin gte numgeqp (integer integer) nil)
|  (defbuiltin gt numgreaterp (integer integer) nil)
|  
|  But maybe you're right in that we could do away with these explicit
|  declarations and just assume that everything that (i) is callable, (ii)
|  not managed by the Data Integration component, and (iii) does not appear
|  in the head of a rule is a builtin.  My only worry is that I could imagine
|  silent and weird problems showing up, e.g. someone forgot to define a
|  table with rules and there happens to be a function in Python by that
|  name.  Or someone supplies the wrong number of arguments, and we just get
|  an error from Python, which we'd have no direct way to communicate to the
|  policy-writer, i.e. there's no compile-time argument-length checking.
|  
|  The other thing I've seen done is to have a single builtin 'evaluate' that
|  lets us call an arbitrary Python function, e.g.
|  
|  p(x, y) :- q(x), evaluate(mypyfunc(x), y)
|  
|  Then we wouldn't need to declare the functions.  Errors would still be
|  silent.  But it would be clear whether we were using a Python function or
|  not.
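A rough sketch (purely illustrative, not Congress code) of what such an 'evaluate' wrapper could do: check callability and arity up front, so errors surface as explicit policy errors rather than silent Python failures.

```python
# Sketch of an 'evaluate' wrapper for Datalog builtins. It rejects
# non-callables and wrong argument counts before invoking, so a
# misspelled table name or a bad arity produces an explicit policy
# error instead of a silent or confusing Python failure.
import inspect


class PolicyError(Exception):
    """Raised when a policy invokes a builtin incorrectly."""


def evaluate(func, args):
    if not callable(func):
        raise PolicyError("%r is not a builtin function" % (func,))
    sig = inspect.signature(func)
    try:
        sig.bind(*args)  # compile-time-style arity check
    except TypeError:
        raise PolicyError("wrong number of arguments for %s"
                          % getattr(func, "__name__", func))
    return func(*args)
```

On Python 2, inspect.getargspec would stand in for inspect.signature.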
|  
|  Thoughts?
|  Tim
|  
|  
|  
|  
|  - Original Message -
|  | From: prabhakar Kudva nandava...@hotmail.com
|  | To: OpenStack Development Mailing List (not for 

Re: [openstack-dev] [Nova][Heat] How to reliably detect VM failures?

2014-03-18 Thread Steven Dake

On 03/18/2014 07:54 AM, Qiming Teng wrote:

Hi, Folks,

   I have been trying to implement a HACluster resource type in Heat. I
haven't created a BluePrint for this because I am not sure everything
will work as expected.

   The basic idea is to extend the OS::Heat::ResourceGroup resource type
with inner resource types fixed to be OS::Nova::Server.  Properties for
this HACluster resource may include:

   - init_size: initial number of Server instances;
   - min_size: minimal number of Server instances;
   - sig_handler: a reference to a sub-class of SignalResponder;
   - zones: a list of strings representing the availability zones, which
   could be the names of the racks where the Servers can be booted;
   - recovery_action: a list of supported failure recovery actions, such
   as 'restart', 'remote-restart', 'migrate';
   - fencing_options: a dict specifying what to do to shut down the Server
   in a clean way so that data consistency in storage and network is
   preserved;
   - resource_ref: a dict for defining the Server instances to be
   created.

   Attributes of the HACluster may include:
   - refs: a list of resource IDs for the currently active Servers;
   - ips: a list of IP addresses for convenience.

   Note that the 'remote-restart' action above is today referred to as
'evacuate'.
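A tiny sketch of how these properties could be cross-validated before stack creation; the property names follow the proposal above, while the validation helper itself is hypothetical and not Heat's actual properties.Schema machinery:

```python
# Hypothetical pre-creation validation for the proposed HACluster
# properties. Property names match the proposal above; the checks are
# illustrative only.
RECOVERY_ACTIONS = ("restart", "remote-restart", "migrate")


def validate_ha_properties(props):
    """Return a list of validation error strings (empty if valid)."""
    errors = []
    if props.get("min_size", 0) > props.get("init_size", 0):
        errors.append("min_size cannot exceed init_size")
    for action in props.get("recovery_action", []):
        if action not in RECOVERY_ACTIONS:
            errors.append("unknown recovery action: %s" % action)
    return errors
```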

   The most difficult issue here is to come up with a reliable VM failure
detection mechanism.  The service_group feature in Nova is only concerned
with the OpenStack services themselves, not the VMs.  Considering that
in our customer's cloud environment user-provided images can be used,
we cannot assume agents in the VMs will send heartbeat signals.

   I have checked the 'instances' table in the Nova database; it seemed that
the 'updated_at' column is only updated when the VM state changes and is
reported.  If the 'heartbeat' messages are coming in from many VMs very
frequently, there could be a DB query performance/scalability issue,
right?

   So, how can I detect VM failures reliably, so that I can notify Heat
to take the appropriate recovery action?

Qiming,

Check out

https://github.com/openstack/heat-templates/blob/master/cfn/F17/WordPress_Single_Instance_With_HA.template

You should be able to use the HARestarter resource and functionality to 
do healthchecking of a vm.


It would be cool if nova could grow a feature to actively look at the 
vm's state internally and determine if it was healthy (e.g. look at its 
memory and see if the scheduler is running, things like that) but this 
would require individual support from each hypervisor for such 
functionality.


Until that happens, healthchecking from within the vm seems like the 
only reasonable solution.
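The controller side of such in-VM healthchecking can be sketched as a last-seen table with a timeout sweep. This is in-memory and illustrative only; a real deployment would persist the timestamps and feed detected failures into the recovery action:

```python
# Sketch: record a timestamp per VM each time a health report arrives,
# and periodically flag VMs whose last report is older than a timeout.
# Illustrative only; not Heat or Nova code.
import time


class HeartbeatMonitor(object):
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def beat(self, vm_id, now=None):
        """Record a health report from vm_id."""
        self.last_seen[vm_id] = time.time() if now is None else now

    def failed(self, now=None):
        """Return VM ids whose last report exceeded the timeout."""
        now = time.time() if now is None else now
        return sorted(vm for vm, t in self.last_seen.items()
                      if now - t > self.timeout)
```

Batching the reports and sweeping on a timer keeps this off the hot path, which also sidesteps the DB write-amplification concern raised above about storing heartbeats in the 'instances' table.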


Regards
-steve


Regards,
   - Qiming

Research Scientist
IBM Research - China
tengqim at cn dot ibm dot com







[openstack-dev] [Marconi] Pecan Evaluation for Marconi

2014-03-18 Thread Balaji Iyer
I work for Rackspace and I'm fairly new to the OpenStack ecosystem. Recently, I came 
across an opportunity to evaluate Pecan for Marconi and produce a comprehensive 
report. I have not worked with Pecan or Falcon prior to this evaluation, and 
have no vested interest in these two frameworks.

Evaluating frameworks is not always easy, but I have strived to cover as many 
details as applicable.  I have evaluated Pecan and Falcon only on how they fit 
Marconi, and this should not be treated as a general evaluation for all 
products. It is always recommended to evaluate frameworks based on your 
product's requirements and its workload.

Benchmarking is not always easy, hence I have spent a good amount of time 
benchmarking these two frameworks using different tools and under different 
network and load conditions with Marconi. Some of the experiences mentioned in 
the report are subjective and narrate my own; you may have had a different 
experience with these frameworks, which is totally acceptable.

Full evaluation report is available here - 
https://wiki.openstack.org/wiki/Marconi/pecan-evaluation

Thought of sharing this with the community in the hope that someone may find 
this useful.

Thanks,
Balaji Iyer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Chmouel Boudjnah
On Tue, Mar 18, 2014 at 5:21 PM, Sean Dague s...@dague.net wrote:

 So I'm still -1 at the point in making UCA our default run environment
 until it's provably functional for a period of time. Because working
 around upstream distro breaks is no fun.



I agree; if UCA is not very stable ATM, this is going to cause us more
pain, but what would be the plan of action? A non-voting gate for
ceilometer as a start (if that's possible)?

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Offset support in REST API pagination

2014-03-18 Thread Jay Pipes
On Tue, 2014-03-18 at 11:31 -0500, Steven Kaufer wrote:
 First, here is some background on this topic:
  http://www.gossamer-threads.com/lists/openstack/dev/2777
 
 Does anyone have any insight as to why offset is not supported in the
 REST API calls that support pagination?   I realize that there are
 tradeoffs when using an offset (vs. marker) but I believe that there is
 value in supporting both.  For example, if you want to jump to the
 n-th page of data without having to traverse all of the previous
 pages.
 
 Is there a reason why the APIs do not support either a marker or an
 offset (obviously not both) on the API request?  It appears that
 sqlalchemy has offset support.
 
 Also, it seems that cinder at least looks for the offset parameter
 (but ignores it).  Does this mean that it was supported at one time
 but later the support was removed?
  https://github.com/openstack/cinder/blob/master/cinder/api/v2/volumes.py#L214
 
 Thanks for the information.

Hail to thee, stranger! Thou hast apparently not passed into the cave of
marker/offset before!

I humbly direct you to buried mailing list treasures which shall
enlighten you!

This lengthy thread shall show you how yours truly was defeated in
written combat by the great Justin Santa Barbara, who doth exposed the
perils of the offset:

http://www.gossamer-threads.com/lists/openstack/dev/2803

A most recent incantation of the marker/offset wars is giveth here:

http://lists.openstack.org/pipermail/openstack-dev/2013-November/018861.html

Best of days to you,
-jay
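[For anyone who doesn't want to wade through those threads, the core peril can be shown in a few lines of toy Python; the helpers below are purely illustrative, not any project's API:]

```python
# Toy demonstration of why marker (keyset) pagination is more robust than
# offset pagination when the data set changes between page requests.

def page_by_offset(items, offset, limit):
    """Offset pagination: slice the sorted data at an absolute position."""
    return sorted(items)[offset:offset + limit]

def page_by_marker(items, marker, limit):
    """Marker pagination: resume strictly after the last item seen."""
    return [x for x in sorted(items) if marker is None or x > marker][:limit]

names = ["vm-%02d" % i for i in range(6)]    # vm-00 .. vm-05

first = page_by_offset(names, 0, 2)          # ['vm-00', 'vm-01']
names.remove("vm-00")                        # concurrent delete between pages

# Offset: 'vm-02' is silently skipped because everything shifted left.
print(page_by_offset(names, 2, 2))           # ['vm-03', 'vm-04']

# Marker: resuming after the last item seen still yields 'vm-02'.
print(page_by_marker(names, first[-1], 2))   # ['vm-02', 'vm-03']
```

The same shift happens with concurrent inserts (duplicates instead of gaps), which is the argument against offsets for page traversal; the later reply in this thread argues offsets are still useful for jumping into the middle of a data set.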


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-18 Thread Tim Bell

If UCA is required, what would be the upgrade path for a currently running 
OpenStack Havana site to Icehouse with this requirement ?

Would it be an online upgrade (i.e. what order to upgrade the different 
components in order to keep things running at all times) ?

Tim

From: Chmouel Boudjnah [mailto:chmo...@enovance.com]
Sent: 18 March 2014 17:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer 
tempest testing in gate


On Tue, Mar 18, 2014 at 5:21 PM, Sean Dague 
s...@dague.netmailto:s...@dague.net wrote:
So I'm still -1 at the point in making UCA our default run environment
until it's provably functional for a period of time. Because working
around upstream distro breaks is no fun.

I agree; if UCA is not very stable ATM, this is going to cause us more pain, 
but what would be the plan of action? A non-voting gate for ceilometer as a 
start (if that's possible)?

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-18 Thread Samuel Bercovici
Discussing some radical concepts...

I also agree that there should be different attributes to reflect the 
administrator state, the operational state and the provisioning state.
This is already reflected in 
https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit?usp=sharing
 as three different  properties admin_state_up, Provisioning Status and 
Operation Status.

The problem with the provisioning status is that it is not reentrant.
Many of the APIs are async, so if we make a few async calls it is not clear 
which of those calls' status we see in the status property.
A better API might be that an async call returns a token, and the status of 
the token reflects how the async call completed.

Regards,
-sam.


-Original Message-
From: Itsuro ODA [mailto:o...@valinux.co.jp] 
Sent: Tuesday, March 18, 2014 1:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs 
operational status

Hi,

In my understanding, it should be:

* 'status' is a read-only attribute which shows users whether 
  the service is available or not.
  So, for example, a VIP whose status is ACTIVE while the load-balancing
  service is unavailable should not be allowed.
  (Actually, our customers strongly want this fixed.)

* 'admin_state_up' is an attribute for an administrator to set
  whether the service is available or not.
  As a result, the 'status' of the resource and the associated resources
  becomes ACTIVE or DOWN.
  (If it does not work this way, it is a problem of the implementation.)

Thanks
-- 
Itsuro ODA o...@valinux.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-18 Thread Zane Bitter
(I added a couple of tags to the subject; hope this doesn't screw up 
anyone's threading.)


On 09/03/14 16:26, Joshua Harlow wrote:

I'd be very interested in knowing the resource controls u plan to add.
Memory, CPU...

I'm still trying to figure out where something like
https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.demoApp.DemoInstance/manifest.yaml
would be beneficial, why not just spend effort sandboxing lua,
python... Instead of spending effort on creating a new language and then
having to sandbox it as well... Especially if u picked languages that
are made to be sandboxed from the start (not python)...


-1 to using a full-blown programming language like Python over a DSL, but...


Who is going to train people on muranoPL, write language books and
tutorials when the same amount of work has already been done for 10+
years for other languages

I fail to see where muranoPL is a DSL when it contains a full language
already with functions, objects, namespaces, conditionals... (what is
the domain of it?), maybe I'm just confused though (quite possible, haha).


...I'm inclined to agree with this. Whenever you find yourself 
implementing a Turing-complete Object-Oriented DSL... well, you'd at 
least want to stop and think very carefully about whether you might have 
taken a wrong turn somewhere.



Does this not seem awkward to anyone else??


It does seem really awkward to me (and not just because of all the 
$signs), because it's duplicating basically all of the functionality of 
Heat. e.g. in MuranoPL you have:


Properties:
  name:
Contract: $.string().notNull()

whereas in HOT this would be:

parameters:
  name:
type: string
constraints:
  - length: {min: 1}

In MuranoPL you reference it using $this.name, vs. HOT using 
{get_param: name}.


Note that HOT (a) already exists in OpenStack, and (b) is being 
developed in conjunction with TOSCA folks to ensure easy translation 
to/from TOSCA Simple Profile YAML rendering.


Looking at e.g. [1], more  or less everything in here can be done 
already inside a Heat template, using get_file and str_replace.


It sounds like this is a DSL in which you write everything imperatively, 
then it gets converted behind the scenes into a declarative model in a 
completely different language (but not using any of the advanced 
features of that language) and passed to Heat, which turns it back into 
a workflow to execute. That seems bizarre to me. Surely Murano should be 
focusing on filling the gaps in Heat, rather than reimplementing it in a 
different paradigm?


What I'm imagining here is something along the lines of:
- Heat implements hooks to customise its workflows, as proposed in [2], [3].
- Deployments are defined declaratively using HOT syntax.
- Workflows - to wrap the deployment operations, to customise the 
deployment and to perform lifecycle operations like backups - are 
defined using a Mistral DSL (I assume this exists already? I haven't 
looked into it).
- Murano provides a way to bundle the various workflow definitions, HOT 
models, and other files into an application package.


Can anybody enlighten me as to what features would be missing from this 
that would warrant creating a new programming language?


thanks,
Zane.

[1] 
https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.demoApp.DemoInstance/manifest.yaml
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026329.html
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030228.html



Sent from my really tiny device...

On Mar 8, 2014, at 10:44 PM, Stan Lagun sla...@mirantis.com
mailto:sla...@mirantis.com wrote:


First of all, MuranoPL is not a host but a hosted language. It was never
intended to replace Python, and if Python can do the job it is probably
better than MuranoPL for that job.
The problem with Python is that you cannot have Python code as a part
of your DSL if you need to evaluate that DSL on the server side. Using
Python's eval() is not secure, and you don't have enough control over
what the evaled code is allowed to do. MuranoPL, on the contrary, is fully
sandboxed. You have absolute control over what functions/methods/APIs
are exposed to the DSL, and DSL code can do nothing except what it is
allowed to do. Besides, you typically do want your DSL to be
domain-specific, so a general-purpose language like Python can be suboptimal.

I don't say MuranoPL is good for all projects. It has many
Murano-specific things after all. In most cases you don't need all
those OOP features that MuranoPL has. But the code organization, how
it uses YAML, block structures and especially YAQL expressions can be
of great value to many projects.

For examples of MuranoPL classes you can browse
https://github.com/istalker2/MuranoDsl/tree/master/meta folder. This
is my private repository that I was using to develop PoC for MuranoPL
engine. We are on the way to create production-quality implementation
with unit-tests etc. in 

Re: [openstack-dev] MuranoPL questions?

2014-03-18 Thread Zane Bitter

On 18/03/14 08:01, Ruslan Kamaldinov wrote:

Joshua, Clint,

The only platform I'm aware of which fully supports true isolation and which
has been used in production for this purpose is the Java VM. I know people who
developed systems for online programming competitions, and really smart kids
tried to break it without any luck :)

Since we're speaking about the Heat, Mistral and Murano DSLs, and all of them
need an execution engine: do you think the JVM could become a host for that engine?


-2. Deploying OpenStack is hard enough already.


JVM has a lot of potential:
- it allows to fine-tune security manager to allow/disallow specific actions
- it can execute a lot of programming languages - Python, Ruby, JS, etc
- it has been used in production for this specific purpose for years

But it also introduces another layer of complexity:
- it's another component to deploy, configure and monitor
- it's non-python, which means it should be supported by infra
- we will need to run a java service and potentially have some java code to
   accept and process user code


Thanks,
Ruslan

On Mon, Mar 17, 2014 at 10:40 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

So I guess this is similar to the other thread.

http://lists.openstack.org/pipermail/openstack-dev/2014-March/030185.html

I know that the way YQL has provided it could be a good example; where the
core DSL (the select queries and such) are augmented by the addition and
usage of JS, for example
http://developer.yahoo.com/yql/guide/yql-execute-examples.html#yql-execute-example-helloworld
(ignore that its XML, haha). Such usage already provides rate-limits and
execution-limits
(http://developer.yahoo.com/yql/guide/yql-execute-intro-ratelimits.html) and
afaik if something like what YQL is doing is adopted then u don't need to
recreate similar features in your DSL (and then u also don't need to teach
people about a new language and syntax and ...)

Just an idea (I believe lua offers similar controls/limits.., although its
not as popular of course as JS).

From: Stan Lagun sla...@mirantis.com

Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, March 17, 2014 at 3:59 AM

To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] MuranoPL questions?

Joshua,

Completely agree with you. We wouldn't be writing another language if we
knew how any existing language could be used for this particular purpose.
If anyone suggests such a language and shows how it can be used to solve the
issues the DSL was designed to solve, we will consider dropping MuranoPL. np

Surely the DSL hasn't stood the test of time. It just hasn't had a chance yet.
100% of successful programming languages were in such a position once.

Anyway, now is the best time to come in with your suggestions. If you know
exactly how the DSL can be replaced or improved, we would like you to share it.


On Wed, Mar 12, 2014 at 2:05 AM, Joshua Harlow harlo...@yahoo-inc.com
wrote:


I guess I might be a bit biased to programming; so maybe I'm not the
target audience.

I'm not exactly against DSL's, I just think that DSL's need to be really
really proven to become useful (in general this applies to any language that
'joe' comp-sci student can create). It's not that hard to just make one, but
the real hard part is making one that people actually like and use and
survives the test of time. That's why I think its just nicer to use
languages that have stood the test of time already (if we can), creating a
new DSL (muranoPL seems to be slightly more than a DSL imho) means creating
a new language that has not stood the test of time (in terms of lifetime,
battle tested, supported over years) so that's just the concern I have.

Of course we have to accept innovation and I hope that the DSL/s makes it
easier/simpler, I just tend to be a bit more pragmatic maybe in this area.

Here's hoping for the best! :-)

-Josh

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, March 10, 2014 at 8:36 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] MuranoPL questions?

Although being a little bit verbose it makes a lot of sense to me.

@Joshua,

Even assuming Python could be sandboxed and whatever else that's needed to
be able to use it as DSL (for something like Mistral, Murano or Heat) is
done  why do you think Python would be a better alternative for people who
don't know neither these new DSLs nor Python itself. Especially, given the
fact that Python has A LOT of things that they'd never use. I know many
people who have been programming in Python for a while and they admit they
don't know all the nuances of Python and actually use 30-40% of all of its
capabilities. Even not in domain specific development. So narrowing a
feature set that a language 

Re: [openstack-dev] [QA][Tempest] Reminder: Bug Day - Wed, 19th

2014-03-18 Thread Mauro S M Rodrigues

Hey! I just want to remind everybody about the bug day tomorrow.

Thanks

On 03/12/2014 09:31 PM, Mauro S M Rodrigues wrote:

Hello everybody!

In the last QA meeting I stepped ahead and volunteered to organize 
another QA Bug Day.


This week wasn't a good one, so I thought to schedule it to the next 
Wednesday (March, 19th). If you think we need more time or something, 
please let me know.


== Actions ==
Basically I'm proposing the following actions for the QA Bug Day, nothing 
much new here:


1st - Triage those 48 bugs in [1], this includes:
* Prioritize it;
* Mark any duplications;
* Add tags and any other project that can be related to the bug so 
we can have the right eyes on it;
* Some cool extra stuff: comments with any suggestions, links to 
logstash queries so we can have a real sense of how critical the 
bug in question is;


2nd - Assign yourself to some of the unassigned bugs if possible (see [2])


3rd - Dedicate some time to review the 55 In Progress bugs (see [3]) 
AND/OR be in touch with the current assignee in case the bug hadn't 
recent activity (see [4]) so we can put it back into triage steps.


And depending on how things go, I would suggest not forgetting 
Grenade, which is also part of the QA Program, and extending this effort 
to it (see the Grenade references, with the same indexes as tempest's).


So that's pretty much it, I would like to hear any suggestion or 
opinion that you guys may have.



== Tempest references ==
[1] - 
https://bugs.launchpad.net/tempest/+bugs?field.searchtext=field.status%3Alist=NEWfield.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - 
https://bugs.launchpad.net/tempest/+bugs?field.searchtext=orderby=-importancesearch=Searchfield.status%3Alist=CONFIRMEDfield.status%3Alist=TRIAGEDfield.status%3Alist=INPROGRESSfield.importance%3Alist=CRITICALfield.importance%3Alist=HIGHfield.importance%3Alist=MEDIUMfield.importance%3Alist=LOWassignee_option=nonefield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=on
[3] - 
https://bugs.launchpad.net/tempest/+bugs?search=Searchfield.status=In+Progress
[4] - 
https://bugs.launchpad.net/tempest/+bugs?search=Searchfield.status=In+Progressorderby=date_last_updated


== Grenade references ==
[1] - 
https://bugs.launchpad.net/grenade/+bugs?field.searchtext=field.status%3Alist=NEWfield.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - 
https://bugs.launchpad.net/grenade/+bugs?field.searchtext=orderby=-importancesearch=Searchfield.status%3Alist=CONFIRMEDfield.status%3Alist=TRIAGEDfield.status%3Alist=INPROGRESSfield.importance%3Alist=CRITICALfield.importance%3Alist=HIGHfield.importance%3Alist=MEDIUMfield.importance%3Alist=LOWassignee_option=nonefield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=on
[3] - 
https://bugs.launchpad.net/grenade/+bugs?search=Searchfield.status=In+Progress
[4] - 
https://bugs.launchpad.net/grenade/+bugs?search=Searchfield.status=In+Progressorderby=date_last_updated






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][baremetal] Status of nova baremetal and ironic

2014-03-18 Thread Devananda van der Veen
On Tue, Mar 18, 2014 at 12:22 AM, Zhongyue Luo zhongyue@intel.com wrote:

 Hi,

 If I were to implement a new BM driver then should I propose a BP to
 Ironic rather than Nova? We are currently writing a driver internally using
 nova-baremetal. My understanding is that nova-baremetal will only merge
 critical bug fixes and new features will merge to Ironic, correct? Thanks.



Hi Zhongyue,

Yes, please target new drivers towards Ironic. That is, IMNSHO, where all
further development around Bare Metal provisioning should be going.

For context, both AMD and HP have already submitted hardware-specific
drivers to Ironic. We are in feature-freeze mode right now, so new feature
code (like a new driver) won't be merged until Juno development opens.
You're welcome to submit the code early and folks might review it anyway if
they have time.

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] API schema location

2014-03-18 Thread Matthew Treinish
On Tue, Mar 18, 2014 at 12:34:57PM +0100, Koderer, Marc wrote:
 On Tue, 18 Mar 2014 12:00:00 +0100
 Christopher Yeoh [mailto:cbky...@gmail.com] wrote:
  On Tue, 18 Mar 2014 10:39:19 +0100
  Koderer, Marc m.kode...@telekom.de wrote:
  
   I just recognized that we have very similar interface definitions in
   tempest/api_schema and etc/schema:
  
   https://github.com/openstack/tempest/tree/master/etc/schemas
   https://github.com/openstack/tempest/tree/master/tempest/api_schema
  
   Any objections if I move them to a single location? I'd prefer to use
   json as file format instead of .py. As final goal we should find a way
   how to merge them competently but I feel like this is something for
   the design summit ;)
  
  
  Heh we just moved them but I'm open to other suggestions - they are are
  specific to API testing though aren't they? Long term the idea is that
  they should be generated by Nova rather than tempest.  I think to prevent
  unintentional changes we'd probably cache a copy in tempest though rather
  than dynamically query them.

The idea was never to dynamically query them; there should always be a copy in
the tempest tree. Like you said to prevent unintentional changes which is the
same reason we don't auto-discover api versions. The idea for querying nova to
get the schemas was to enable a tool which could populate the schemas
automatically so that we didn't have to manually generate them individually. I'd
say, to a certain extent, that this new round of validation patches could use
the same kind of tool.

 
 Sorry that I didn't notice this review.
 Both definitions are coupled to API testing, yes.
 
  
  My feeling at the moment is that they should  .py files.
  Because I think there's cases where we will want to have some schema
  definitions based on others or share common parts and use bits of python
  code to achieve this. For example availability zone list and detailed
  listing  have a lot in common (detailed listing just has a more
  parameters). I think there'll be similar cases for v2 and v3 versions as
  well.  While we're still manually generating them and keeping them up to
  date I think it's worth sharing as much as we can.
 
 Ok understood. We just converted the negative testing
 definitions to json files due to review findings.

Well, when I left the review comment about it being a json file, I didn't think
about inheritance. Chris has a good point about reusing common bits and just
extending it. That wasn't how you proposed the negative test schemas would be
used which is why I suggested using a raw json file.
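To illustrate the kind of reuse Chris describes, which is awkward with raw JSON files but trivial with Python modules, here is a sketch (all schema content below is invented for the example, not taken from tempest):

```python
# Invented example: a "detailed" response schema that extends a base
# "list" response schema, which is easy when schemas live in .py files.
import copy

list_response = {
    "type": "object",
    "properties": {
        "availability_zone_info": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "zoneName": {"type": "string"},
                    "zoneState": {"type": "object"},
                },
                "required": ["zoneName", "zoneState"],
            },
        },
    },
    "required": ["availability_zone_info"],
}

# The detailed listing shares everything above and adds one property.
detail_response = copy.deepcopy(list_response)
items = detail_response["properties"]["availability_zone_info"]["items"]
items["properties"]["hosts"] = {"type": "object"}
items["required"].append("hosts")
```

With static .json files, the detailed variant would have to duplicate the whole base definition, which is exactly the maintenance burden being discussed.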

 It's just very confusing for new people if they see
 two separate folders with schema definitions.
 
 But unfortunately there isn't an easy way.

Am I missing something, or are these schemas being added now just a subset of
what is being used for negative testing? Why can't we either add the extra
negative-test info around the new test validation patches and get the double
benefit, or have the negative test schemas just extend these new schemas being
added?

 
  
  I agree there's a lot of commonality and we should long term just have one
  schema definition. There's quite a bit to discuss (eg level of strictness
  is currently different) in this area and a summit session about it would
  be very useful.
  
 
 +1
 

I agree there is probably enough here to discuss during a summit session on
where schema validation fits into tempest. As a part of that discussing how to
store and manage schema definitions for both the negative test framework and
validation tests.


-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Oslo] Next Release of oslo.messaging?

2014-03-18 Thread Joshua Harlow
Awesome, great to see this, will try it out :-)

Is that in the recently released pbr (0.7.0?)

From: Doug Hellmann 
doug.hellm...@dreamhost.commailto:doug.hellm...@dreamhost.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, March 18, 2014 at 4:51 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum][Oslo] Next Release of oslo.messaging?




On Tue, Mar 18, 2014 at 1:37 AM, Angus Salkeld 
angus.salk...@rackspace.commailto:angus.salk...@rackspace.com wrote:
On 18/03/14 07:39 +0530, Noorul Islam Kamal Malmiyoda wrote:
On Tue, Mar 18, 2014 at 4:59 AM, Adrian Otto 
adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:
Doug Hellmann and Victror Stinner (+ oslo cores),

Solum currently depends on a py33 gate. We want to use oslo.messaging, but are 
worried that in the current state, we will be stuck without py33 support. We 
hope that by adding the Trollius code[1], and getting a new release number, 
that we can add the oslo.messaging library to our requirements and proceed with 
our async messaging plan.

I am seeking guidance from you on when the above might happen. If it's a short 
time, we may just wait for it. If it's a long time, we may need to relax our 
py33 gate to non-voting in order to prevent breakage of our Stackforge CI while 
we work with the oslo.messaging code. We are also considering doing an ugly 
workaround of creating a bunch of worker processes on the same messaging topic 
until we can clear this concern.

Thoughts?


I think we should not make the python33 gate non-voting, as we will miss
out on issues related to other things. We can toggle the oslo.messaging-related
tests to not run in python33.

Even if we disable our tests, we can't even add oslo.messaging to
our requirements.txt as it can't even be installed.

Actually, Julien recently added support to pbr for separate requirements files 
for python 3 (requirements-py3.txt and test-requirements-py3.txt). If the 
python 3 file exists, the default file is not used at all, so it is possible to 
list completely different requirements for python 2 and 3.

Doug



The only practical solution I can see is to make py33 non-voting until 
oslo.messaging
can handle py33.

-Angus



Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Offset support in REST API pagination

2014-03-18 Thread Steven Kaufer
Jay Pipes jaypi...@gmail.com wrote on 03/18/2014 12:02:50 PM:

 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org,
 Date: 03/18/2014 12:15 PM
 Subject: Re: [openstack-dev] Offset support in REST API pagination

 On Tue, 2014-03-18 at 11:31 -0500, Steven Kaufer wrote:
  First, here is some background on this topic:
   http://www.gossamer-threads.com/lists/openstack/dev/2777
 
  Does anyone have any insight as to why offset is not supported in the
  REST API calls that support pagination?   I realize that there are
  tradeoffs when using an offset (vs. marker) but I believe that there is
  value in supporting both.  For example, if you want to jump to the
  n-th page of data without having to traverse all of the previous
  pages.
 
  Is there a reason why the APIs do not support either a marker or an
  offset (obviously not both) on the API request?  It appears that
  sqlalchemy has offset support.
 
  Also, it seems that cinder at least looks for the offset parameter
  (but ignores it).  Does this mean that it was supported at one time
  but later the support was removed?
   https://github.com/openstack/cinder/blob/master/cinder/api/v2/
 volumes.py#L214
 
  Thanks for the information.

 Hail to thee, stranger! Thou hast apparently not passed into the cave of
 marker/offset before!

 I humbly direct you to buried mailing list treasures which shall
 enlighten you!

 This lengthy thread shall show you how yours truly was defeated in
 written combat by the great Justin Santa Barbara, who doth exposed the
 perils of the offset:

 http://www.gossamer-threads.com/lists/openstack/dev/2803

 A most recent incantation of the marker/offset wars is giveth here:


http://lists.openstack.org/pipermail/openstack-dev/2013-November/018861.html


 Best of days to you,
 -jay

Jay:

Thanks for the feedback and the history on this topic. I understand that
the limit/marker
approach is superior when simply traversing all of the pages. However,
consider the
following:

- User knows that there are 1000 items (VMs, volumes, images, really
doesn't matter)
- User knows that the item that they want is in roughly the middle of the
data set (assume
everything is sorted by display name)
- User cannot remember the exact name so a filter will not help and
changing the sort
direction will not help (since the item they want it is in the middle of
the dataset)
- User supplies an offset of 500 to jump into the middle of the data set
- User then uses the marker approach to traverse the pages from this point
to find the
item that they want

In this case the offset approach is not used to traverse pages so there are
no issues with
missing an item or seeing a duplicate.

Why couldn't the APIs support either marker or offset on a given request?
Also, to encourage
the use of marker instead of offset, the next/previous links on any request
with an offset
supplied should contain the appropriate marker key values -- this should
help discourage
simply increasing the offset when traversing the pages.

I realize that if only one solution had to be chosen, then limit/marker
would always win
this war. But why can't both be supported?

Thanks,

Steven Kaufer



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] OPENSTACK SERVICE ERROR

2014-03-18 Thread Stefano Maffulli
On 03/17/2014 11:14 AM, Ben Nemec wrote:
 First, please don't use all caps in your subject.  Second, please do use
 tags to indicate which projects your message relates to.  In this case,
 that appears to be nova.

third (actually, zero: this is the most important part of all)

Don't use the development list for support questions. If you have issues
running any piece of openstack don't ever post here. Use the General
mailing list, ask.openstack.org, the operators list.

Also, it's perfectly acceptable behavior *not* to *respond* to usage
requests here.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Icehouse dependency freeze

2014-03-18 Thread Nikhil Manchanda

Thomas Goirand writes:

 On 03/18/2014 06:12 PM, Thierry Carrez wrote:
 Thomas Goirand wrote:
[...]
 Trove turned out to not be participating in global requirements, and
 has 3 items outside of requirements.

 Could you list them?


Hi Thomas:

There are 3 python packages that trove currently depends upon (pexpect
in requirements, and wsgi_intercept and mockito in test-requirements)
that are currently not part of the global requirements.

I'm working with the folks on the requirements team to figure out a plan to
get these taken care of as we speak. The patch sets in question are:
https://review.openstack.org/#/c/80849
https://review.openstack.org/#/c/80850
https://review.openstack.org/#/c/80851

Cheers,
Nikhil

[...]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] API schema location

2014-03-18 Thread Koderer, Marc
 From: Matthew Treinish [mtrein...@kortar.org]
 Sent: Tuesday, March 18, 2014 7:08 PM
 On Tue, Mar 18, 2014 at 12:34:57PM +0100, Koderer, Marc wrote:
  On Tue, 18 Mar 2014 12:00:00 +0100
  Christopher Yeoh [mailto:cbky...@gmail.com] wrote:
   On Tue, 18 Mar 2014 10:39:19 +0100
   Koderer, Marc m.kode...@telekom.de wrote:
   
I just recognized that we have very similar interface definitions in
tempest/api_schema and etc/schema:
   
https://github.com/openstack/tempest/tree/master/etc/schemas
https://github.com/openstack/tempest/tree/master/tempest/api_schema
   
Any objections if I move them to a single location? I'd prefer to use
json as the file format instead of .py. As a final goal we should find a way
to merge them completely, but I feel like this is something for
the design summit ;)
   
  
    Heh we just moved them but I'm open to other suggestions - they are
   specific to API testing though aren't they? Long term the idea is that
   they should be generated by Nova rather than tempest.  I think to prevent
   unintentional changes we'd probably cache a copy in tempest though rather
   than dynamically query them.
 
 The idea was never to dynamically query them; there should always be a copy in
 the tempest tree. Like you said to prevent unintentional changes which is the
 same reason we don't auto-discover api versions. The idea for querying nova to
 get the schemas was to enable a tool which could populate the schemas
 automatically so that we didn't have to manually generate them individually. 
 I'd
 say, to a certain extent, that this new round of validation patches could use
 the same kind of tool.
 
 
  Sry that I didn't recognized this review.
  Both definitions are coupled to API testing, yes.
 
  
    My feeling at the moment is that they should be .py files.
   Because I think there's cases where we will want to have some schema
   definitions based on others or share common parts and use bits of python
   code to achieve this. For example availability zone list and detailed
    listing have a lot in common (detailed listing just has more
   parameters). I think there'll be similar cases for v2 and v3 versions as
   well.  While we're still manually generating them and keeping them up to
   date I think it's worth sharing as much as we can.
 
  Ok understood. We just converted the negative testing
  definitions to json files due to review findings..
 
 Well, when I left the review comment about it being a json file, I didn't 
 think
 about inheritance. Chris has a good point about reusing common bits and just
 extending it. That wasn't how you proposed the negative test schemas would be
 used which is why I suggested using a raw json file.
  It's just very confusing for new people if they see
  two separate folders with schema definitions.
 
  But unfortunately there isn't an easy way.
 
 Am I missing something or are these schemas being added now just a subset of
 what is being used for negative testing? Why can't we either add the extra
 negative test info around the new test validation patches and get the double
 benefit. Or have the negative test schemas just extend these new schemas being
 added?

Yes, the api_schema files should theoretically be
subsets of the negative test schemas.
But I don't think that extending them will be possible:

if you have a property definition like this:

"properties": {
    "minRam": {"type": "integer"}
}

how can you extend it to:

"properties": {
    "minRam": {
        "type": "integer",
        "results": {
            "gen_none": 400,
            "gen_string": 400
        }
    }
}

This is the reason why I am unsure how inheritance can solve something here.
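For what it's worth, one conceivable way to layer the negative-test metadata over a shared base schema (an illustrative sketch only - `extend_schema` is an invented helper, not something in the Tempest tree) is a recursive dict merge rather than JSON-level inheritance:

```python
import copy

# Shared base schema, as used by the API validation tests.
BASE = {
    "properties": {
        "minRam": {"type": "integer"},
    },
}

def extend_schema(base, extra):
    """Return a deep copy of base with extra recursively merged in."""
    merged = copy.deepcopy(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = extend_schema(merged[key], value)
        else:
            merged[key] = value
    return merged

# Negative-test schema: the base plus expected per-generator results.
NEGATIVE = extend_schema(BASE, {
    "properties": {
        "minRam": {
            "results": {"gen_none": 400, "gen_string": 400},
        },
    },
})

print(NEGATIVE["properties"]["minRam"])
```

Since the schemas are .py files, the negative-test definitions could import the base and apply a helper like this, keeping the two sets of definitions from drifting apart.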

 
  
   I agree there's a lot of commonality and we should long term just have one
   schema definition. There's quite a bit to discuss (eg level of strictness
   is currently different) in this area and a summit session about it would
   be very useful.
  
 
  +1
 
 
 I agree there is probably enough here to discuss during a summit session on
 where schema validation fits into tempest. As part of that, we can discuss how to
 store and manage schema definitions for both the negative test framework and the
 validation tests.
 
 
 -Matt Treinish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-18 Thread Louis.Fourie
Mohammad,
  Can you share details on the contract-based policy model?

-  Louis

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]
Sent: Friday, March 14, 2014 3:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][policy] Integrating network policies and 
network services


We have started looking at how the Neutron advanced services being defined and 
developed right now can be used within the Neutron policy framework we are 
building. Furthermore, we have been looking at a new model for the policy 
framework as of the past couple of weeks. So, I have been trying to see how the 
services will fit in (or can be utilized by) the policy work in general and 
with the new contract-based model we are considering in particular. Some of the 
I like to discuss here are specific to the use of service chains with the group 
policy work but some are generic and related to service chaining itself.

If I understand it correctly, the proposed service chaining model requires the 
creation of the services in the chain without specifying their insertion 
contexts. Then, the service chain is created with specifying the services in 
the chain, a particular provider (which is specific to the chain being built) 
and possibly source and destination insertion contexts.

1- This fits ok with the policy model we had developed earlier where the policy 
would get defined between a source and a destination policy endpoint group. The 
chain could be instantiated at the time the policy gets defined. (More 
questions on the instantiation below marked as 1.a and 1.b.) How would that 
work in a contract based model for policy? At the time a contract is defined, 
its producers and consumers are not defined yet. Would we postpone the
instantiation of the service chain to the time a contract gets a producer and 
at least a consumer?

1.a- It seems to me, it would be helpful if not necessary to be able to define 
a chain without instantiating the chain. If I understand it correctly, in the 
current service chaining model, when the chain is created, the 
source/destination contexts are used (whether they are specified explicitly or 
implicitly) and the chain of services become operational. We may want to be 
able to define the chain and postpone its creation to a later point in time.

1.b-Is it really possible to stand up a service without knowing its insertion 
context (explicitly defined or implicitly defined) in all cases? For certain 
cases this will be ok but for others, depending on the insertion context or 
other factors such as the requirements of other services in the chain we may 
need to for example instantiate the service (e.g. create a VM) at a specific 
location that is not known when the service is created. If that may be the 
case, would it make sense to not instantiate the services of a chain at any 
level (rather than instantiating them and mark them as not operational or not 
routing traffic to them) before the chain is created? (This leads to question 3 
below.)

2- With one producer and multiple consumers, do we instantiate a chain (meaning 
the chain and the services in the chain become operational) for each consumer? 
If not, how do we deal with using the same source/destination insertion context 
pair for the provider and all of the consumers?

3- For the service chain creation, I am sure there are good reasons for 
requiring a specific provider for a given chain of services but wouldn't it be 
possible to have a generic chain provider which would instantiate each 
service in the chain using the required provider for each service (e.g., 
firewall or loadbalancer service) and with setting the insertion contexts for 
each service such that the chain gets constructed as well? I am sure I am 
ignoring some practical requirements but is it worth rethinking the current 
approach?

Best,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Constructive Conversations

2014-03-18 Thread Matt Riedemann



On 3/7/2014 1:56 PM, Kurt Griffiths wrote:

Folks,

I’m sure that I’m not the first person to bring this up, but I’d like to
get everyone’s thoughts on what concrete actions we, as a community, can
take to improve the status quo.

There have been a variety of instances where community members have
expressed their ideas and concerns via email or at a summit, or simply
submitted a patch that perhaps challenges someone’s opinion of The Right
Way to Do It, and responses to that person have been far less
constructive than they could have been[1]. In an open community, I don’t
expect every person who comments on a ML post or a patch to be
congenial, but I do expect community leaders to lead by example when it
comes to creating an environment where every person’s voice is valued
and respected.

What if every time someone shared an idea, they could do so without fear
of backlash and bullying? What if people could raise their concerns
without being summarily dismissed? What if “seeking first to
understand”[2] were a core value in our culture? It would not only
accelerate our pace of innovation, but also help us better understand
the needs of our cloud users, helping ensure we aren’t just building
OpenStack in the right way, but also building /the right OpenStack/.

We need open minds to build an open cloud.

Many times, we /do/ have wonderful, constructive discussions, but the
times we don’t cause wounds in the community that take a long time to
heal. Psychologists tell us that it takes a lot of good experiences to
make up for one bad one. I will be the first to admit I’m not perfect.
Communication is hard. But I’m convinced we can do better. We /must/ do
better.

How can we build on what is already working, and make the bad
experiences as rare as possible?

A few ideas to seed the discussion:

  * Identify a set of core values that the community already embraces
for the most part, and put them down “on paper.”[3] Leaders can keep
these values fresh in everyone’s minds by (1) leading by example,
and (2) referring to them regularly in conversations and talks.
  * PTLs can add mentoring skills and a mindset of seeking first to
understand” to their list of criteria for evaluating proposals to
add a community member to a core team.
  * Get people together in person, early and often. Mid-cycle meetups
and mini-summits provide much higher-resolution communication
channels than email and IRC, and are great ways to clear up
misunderstandings, build relationships of trust, and generally get
everyone pulling in the same direction.

What else can we do?

Kurt

[1] There are plenty of examples, going back years. Anyone who has been
in the community very long will be able to recall some to mind. Recent
ones I thought of include Barbican’s initial request for incubation on
the ML, dismissive and disrespectful exchanges in some of the design
sessions in Hong Kong (bordering on personal attacks), and the
occasional “WTF?! This is the dumbest idea ever!” patch comment.
[2] https://www.stephencovey.com/7habits/7habits-habit5.php
[3] We already have a code of conduct
https://www.openstack.org/legal/community-code-of-conduct/ but I think
a list of core values would be easier to remember and allude to in
day-to-day discussions. I’m trying to think of ways to make this idea
practical. We need to stand up for our values, not just /say/ we have them.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Not to detract from what you're saying, but this is 'meh' to me. My 
company has some different kind of values thing every 6 months it seems 
and maybe it's just me but I never really pay attention to any of it.  I 
think I have to put something on my annual goals/results about it, but 
it's just fluffy wording.


To me this is a self-policing community, if someone is being a dick, the 
others should call them on it, or the PTL for the project should stand 
up against it and set the tone for the community and culture his project 
wants to have.  That's been my experience at least.


Maybe some people would find codifying this helpful, but there are 
already lots of wikis and things that people can't remember on a daily 
basis so adding another probably isn't going to help the problem.
Bullies don't tend to care about codes, but if people stand up against
them in public they should be outcast.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnetodb] Using gevent in MagnetoDB. OpenStack standards and approaches

2014-03-18 Thread Dmitriy Ukhlov

Hello openstackers,

We are working on MagnetoDB project and trying our best to follow 
OpenStack standards.



So, MagnetoDB is aimed to be a high-performance, scalable, OpenStack-based
WSGI application which provides an interface to a highly available, distributed,
reliable key-value store. We investigated best practices and identified
the following points:


1. To avoid problems with the GIL, our application should be executed in
   single-threaded mode with non-blocking IO (using greenlets or other
   Python-specific approaches to reach this).

2. To make MagnetoDB scalable it is necessary to make it stateless. This
   allows us to run a lot of independent MagnetoDB processes and switch
   the whole request flow between them:

   a. at a single node, to load all of the CPU's cores;
   b. at different nodes, for horizontal scalability.

3. Use Cassandra as the most reliable and mature distributed key-value
   storage.

4. Use the DataStax python-driver as the most modern Cassandra Python
   client, supporting the newest CQL3 and Cassandra native binary
   protocol features.


So, considering these points, the following technologies were chosen:

1. gevent, as one of the fastest non-blocking single-threaded WSGI
   servers. It is based on the greenlet library and supports monkey
   patching of the standard threading library. This is necessary because
   the DataStax python-driver uses the threading library, and its backlog
   has a task to add gevent support. (We patched python-driver ourselves
   to enable this feature as a temporary solution and are waiting for new
   python-driver releases.) This makes gevent more interesting to use
   than its analogs (like eventlet, for example).

2. gunicorn, as a WSGI server which is able to run a few worker processes
   plus a master process that manages the workers and routes requests
   between them. It also integrates with gevent and can run gevent-based
   workers. We analyzed analogues as well, such as uWSGI; it looks
   faster, but unfortunately we didn't manage to get uWSGI working in
   multi-process mode with the MagnetoDB application.
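To make the deployment shape concrete, here is a rough sketch of a gunicorn invocation matching the choices above (the module path magnetodb.api.wsgi:application and the port are assumed examples, not the actual entry point; the flags are standard gunicorn options):

```shell
# A master process manages four gevent-based workers; each worker is a
# single-threaded, non-blocking event loop (roughly one per CPU core).
gunicorn --worker-class gevent \
         --workers 4 \
         --bind 0.0.0.0:8480 \
         magnetodb.api.wsgi:application
```

Inside each worker, the monkey patching mentioned in point 1 (gevent's `monkey.patch_all()`) must run before the DataStax driver is imported, so that its use of the threading library becomes cooperative.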


Also, I want to add that currently the oslo wsgi framework is used for
organizing request routing. I know that the current OpenStack trend is to
migrate WSGI services to the Pecan framework. Maybe it is reasonable
for MagnetoDB too.



We would like to hear your opinions about the libraries and approaches
we have chosen, and would appreciate your help and support in order to
find the best balance between performance, developer friendliness and
OpenStack standards.


--
Best regards,
Dmitriy Ukhlov
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Pecan Evaluation for Marconi

2014-03-18 Thread Kurt Griffiths
Kudos to Balaji for working so hard on this. I really appreciate his candid 
feedback on both frameworks.

After reviewing his report, I would recommend that Marconi continue using 
Falcon for the v1.1 API and then re-evaluate Pecan for v2.0. Pecan will 
continue to improve over time. We should also look for opportunities to 
contribute to the Pecan ecosystem.

Kurt G. | @kgriffs | Marconi PTL

From: Balaji Iyer balaji.i...@rackspace.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Tuesday, March 18, 2014 at 11:55 AM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Marconi] Pecan Evaluation for Marconi

I work for Rackspace and I'm fairly new to the OpenStack ecosystem. Recently, I came
across an opportunity to evaluate Pecan for Marconi and produce a comprehensive 
report. I have not worked with Pecan or Falcon prior to this evaluation, and 
have no vested interest in these two frameworks.

Evaluating frameworks is not always easy, but I have strived to cover as many 
details as applicable.  I have evaluated Pecan and Falcon only on how they fit
Marconi and this should not be treated as a general evaluation for all 
products. It is always recommended to evaluate frameworks based on your 
product's requirements and its workload.

Benchmarking is not always easy, hence I have spent a good amount of time 
benchmarking these two frameworks using different tools and under different 
network and load conditions with Marconi. Some of the experiences I have 
mentioned in the report are quite subjective and reflect my own - you may have
had a different experience with these frameworks, which is totally acceptable.

Full evaluation report is available here - 
https://wiki.openstack.org/wiki/Marconi/pecan-evaluation

Thought of sharing this with the community in the hope that someone may find 
this useful.

Thanks,
Balaji Iyer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC

2014-03-18 Thread Mohammad Banikazemi

Thanks for setting up the meeting.
I would second the request for change of the time slot; Hope to attend this
one and see if we can come up with a better time slot.
With respect to other suggestions, it would be great if we start with a
report on the current state of this work. Something similar to what you
started last week (from what I gather from the logs of the meeting) but
going a bit more into details. I personally would like to see how this fits
in the advanced services framework in general and how we can utilize it
within the group policy framework we are trying to develop.

Best,

Mohammad



From:   Isaku Yamahata isaku.yamah...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Cc: isaku.yamah...@gmail.com
Date:   03/18/2014 05:00 AM
Subject:Re: [openstack-dev] [Neutron] advanced servicevm framework IRC
meeting March 18(Tuesday) 23:00 UTC



Hi Balaji.

Let's discuss/determine the time at the meeting, as it is listed in the
agenda.
Sorry for inconvenience for the first time.
Do you have any feedback other than the meeting time?

thanks,

On Tue, Mar 18, 2014 at 06:18:01AM +,
balaj...@freescale.com balaj...@freescale.com wrote:

 Hi Isaku Yamahata,

 Is it possible to have any convenient slot between 4.00 - 6.30 PM UTC?

 So that folks from Asia can also join the meetings.

 Regards,
 Balaji.P

  -Original Message-
  From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com]
  Sent: Tuesday, March 18, 2014 11:35 AM
  To: OpenStack Development Mailing List
  Cc: isaku.yamah...@gmail.com
  Subject: [openstack-dev] [Neutron] advanced servicevm framework IRC
  meeting March 18(Tuesday) 23:00 UTC
 
  Hello. This is a reminder for servicevm framework IRC meeting.
  date: March 18 (Tuesday) 23:00 UTC
  channel: #openstack-meeting
 
  the followings are proposed as agenda.
  Meeting wiki: https://wiki.openstack.org/wiki/Meetings/ServiceVM
 
  * the current status summary
  * decide the time/day/frequency
 
  Thanks,
  --
  Isaku Yamahata isaku.yamah...@gmail.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Oslo] Next Release of oslo.messaging?

2014-03-18 Thread Doug Hellmann
On Tue, Mar 18, 2014 at 2:21 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

  Awesome, great to see this, will try it out :-)

  Is that in the recently released pbr (0.7.0?)



That feature was added in pbr 0.6.

Doug




   From: Doug Hellmann doug.hellm...@dreamhost.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, March 18, 2014 at 4:51 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum][Oslo] Next Release of oslo.messaging?




 On Tue, Mar 18, 2014 at 1:37 AM, Angus Salkeld 
 angus.salk...@rackspace.com wrote:

 On 18/03/14 07:39 +0530, Noorul Islam Kamal Malmiyoda wrote:

 On Tue, Mar 18, 2014 at 4:59 AM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Doug Hellmann and Victror Stinner (+ oslo cores),

 Solum currently depends on a py33 gate. We want to use oslo.messaging,
 but are worried that in the current state, we will be stuck without py33
 support. We hope that by adding the Trollius code[1], and getting a new
 release number, that we can add the oslo.messaging library to our
 requirements and proceed with our async messaging plan.

 I am seeking guidance from you on when the above might happen. If it's
 a short time, we may just wait for it. If it's a long time, we may need to
 relax our py33 gate to non-voting in order to prevent breakage of our
 Stackforge CI while we work with the oslo.messaging code. We are also
 considering doing an ugly workaround of creating a bunch of worker
 processes on the same messaging topic until we can clear this concern.

 Thoughts?


 I think we should not make python33 gate non-voting as we will miss
 out issues related others. We can toggle the oslo.messaging related
 tests to not run in python33.


  Even if we disable our tests, we can't even add oslo.messaging to
 our requirements.txt as it can't even be installed.


  Actually, Julien recently added support to pbr for separate requirements
 files for python 3 (requirements-py3.txt and test-requirements-py3.txt). If
 the python 3 file exists, the default file is not used at all, so it is
 possible to list completely different requirements for python 2 and 3.

  Doug




 The only practical solution I can see is to make py33 non-voting until
 oslo.messaging
 can handle py33.

 -Angus



 Regards,
 Noorul

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Stack breakpoint

2014-03-18 Thread Ton Ngo
The notify/callback mechanism seems like a good solution.  This should
enable creating a high-level debugger for different DSLs (HOT, TOSCA, ...),
running as a separate process.  The debugger would attach to a stack,
present a logical model to the user and interact with the Heat engine.
This would be similar to the typical program debugger.   This mechanism
should also allow integrating with a process engine to handle human
interaction.

About resources with timers, I was not sure if there are other resources
beside WaitCondition that contain a timer, so it's good to know that
currently only WaitCondition falls into this category.  My concern was the
scenario when the timer might get kicked off and then a resource that
should interact with the timer hits a breakpoint, but Zane's point is that
this is not possible for WaitCondition since there is supposed to be a
dependency on the resource.  Then, to debug the scenario of why my
WaitCondition failed, the user would set a breakpoint before the
WaitCondition, or after any of the resources that the WaitCondition depends
on.   In this case, we would know that the timer will never get kicked off
because of the dependency.
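A toy sketch of that rule (purely illustrative - the class and function names are invented, not Heat code): breakpoints are honored only at the boundaries between resource actions, so a timer can never be ticking while the stack is paused at a breakpoint that precedes it:

```python
class Resource:
    """Minimal stand-in for a Heat resource."""
    def __init__(self, name):
        self.name = name
        self.created = False

    def create(self):
        # For a WaitCondition, this is where its timeout clock starts.
        self.created = True

def run_stack(resources, breakpoints, pause):
    """Create resources in dependency order, pausing only before or
    after a resource action, never in the middle of one."""
    for res in resources:
        if ("before", res.name) in breakpoints:
            pause(res.name)   # stack paused; res's timer has not started yet
        res.create()
        if ("after", res.name) in breakpoints:
            pause(res.name)

events = []
stack = [Resource("server"), Resource("wait_condition")]
run_stack(stack, {("before", "wait_condition")},
          lambda name: events.append("paused-before-" + name))
print(events)  # ['paused-before-wait_condition']
```

Because the pause happens strictly before `create()`, the WaitCondition's timeout is measured purely in wallclock time once it starts, with no need to suspend a running timer.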

Ton Ngo,



From:   Zane Bitter zbit...@redhat.com
To: openstack-dev@lists.openstack.org,
Date:   03/18/2014 09:45 AM
Subject:Re: [openstack-dev] [Heat] Stack breakpoint



On 17/03/14 21:18, Mike Spreitzer wrote:
 Zane Bitter zbit...@redhat.com wrote on 03/17/2014 07:03:25 PM:

   On 17/03/14 17:03, Ton Ngo wrote:

- How to handle resources with timer, e.g. wait condition:
   pause/resume
timer value
  
   Handle it by only allowing pauses before and after. In most cases I'm
   not sure what it would mean to pause _during_.

 I'm not sure I follow this part.  If at some time a timer is started,
 and the event(s) upon which it is waiting are delayed by hitting a
 breakpoint and waiting for human interaction --- I think this is the
 scenario that concerned Ton.  It seems to me the right answer is that
 all downstream timers have to stop ticking between break and resume.

Perhaps this was too general. To be specific, there is exactly one
resource with a timer* - a WaitCondition. A WaitCondition is usually
configured to be dependent on the server that should trigger it. Nothing
interesting happens while a WaitCondition is waiting, so there is no
point allowing a break point in the middle. You would either set the
breakpoint after the server has completed or before the WaitCondition
starts (which amount to the same thing, assuming no other dependencies).
You could, in theory, set a breakpoint after the WaitCondition completes,
though the use case for that is less obvious. In any event, at no time
is the stack paused _while_ the WaitCondition is running, and therefore
no need to use anything but wallclock time to determine the timeout.

cheers,
Zane.

* Technically there is another: autoscaling groups during update with an
UpdatePolicy specified... however these use a nested stack, and the
solution here is to use this same feature within the nested stack to
implement the delays rather than complicate things in the stack
containing the autoscaling group resource.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-18 Thread Joshua Harlow
From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Monday, March 17, 2014 at 10:51 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Actions design BP

On 18 Mar 2014, at 01:32, Joshua Harlow harlo...@yahoo-inc.com wrote:
To further this, let's continue working on
https://etherpad.openstack.org/p/taskflow-mistral and see if we can align 
somehow

Sure.

(I hope it's not too late to do this,

Never late IMO.

seeing that there appears to be a lot of resistance from the mistral community 
to change.

Could you please let us know how you came to this conclusion? I'm frankly
surprised. It's not just curiosity - I want to keep improving
in what I’m doing.

So maybe it's just my misinterpretation or miscommunication, but certain
discussions like the one at http://tinyurl.com/m395o2r
(http://lists.openstack.org/pipermail/openstack-dev/2014-March/029983.html
- mention of 'inalienable parts of Mistral', 'Joshua propose detailed Mistral
design based on TaskFlow') seem to be what causes me to think that what Mistral
has been making is not so much a POC as an actual implementation. To me that
means the Mistral project is already way past POC mode (imho POCs are
meant to explore concepts, then be thrown away and reimplemented as a real
project; is Mistral planning to do this, throwing away the POC and rewriting it
as a non-POC using the ideas learned from the POC?) [http://tinyurl.com/lbz293s].



Generally, just to clarify the situation let me provide our vision of what 
we’re doing at the very high level.

As mentioned many times, we’re still building a PoC. Yes, it turned out to take 
longer which is totally fine since we’ve done a lot of research, lots of coding 
exercises, talks, discussions with our customers. We’ve involved several new 
contributors from two different companies, they have their requirements and use 
cases too. We’ve gathered a lot of specific requirements to what should be a 
workflow engine. This all was the exact intention of that phase of the project: 
understand better what we should build. If you look at the Mistral list of 
blueprints you'll see around 40 of them, where 80-90% of them come from the real 
needs of real projects. And not everything is captured in BPs yet because 
some things are still not shaped well enough in our minds.

Thought #1: In POC we’ve been concentrating on use cases and requirements. 
Implementation has been secondary.

Sure that’s fine that’s what a POC is for. See above.


TaskFlow or anything else just hasn’t mattered a lot so far. But, at the same 
time, I want to remind that in December we tried to use TaskFlow to implement 
the very basic functionality in Mistral (only dependency based model). 
Honestly, we failed to produce a result that we would be satisfied with since 
TaskFlow lacked, for example, the ability to run tasks in an asynchronous 
manner. This was not a problem at all, this is the real world. So I created a 
BP to address that problem in TaskFlow ([0]). So we decided to proceed with it 
with an intent to rejoin later.
And maybe even the most important reason not to use TaskFlow was that we 
wanted to do a clean piece of research. We found it less productive to try to build a 
project around an existing library than to concentrate on use cases and 
high-level requirements. From my experience, it never works well, since in your 
thinking you always stick to the limitations of that lib and the assumptions made in it.

For the 'asynchronous manner' discussion see http://tinyurl.com/n3v9lt8; I'm 
still not sure why u would want to make is_sync/is_async a primitive concept in 
a workflow system, shouldn't this be only up to the entity running the workflow 
to decide? Why is a task allowed to be sync/async, that has major side-effects 
for state-persistence, resumption (and to me is an incorrect abstraction to 
provide) and general workflow execution control, I'd be very careful with this 
(which is why I am hesitant to add it without much much more discussion).
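To make the argument concrete, here is a minimal sketch (illustrative Python only, not Mistral or TaskFlow code; all names are invented) of leaving the sync/async decision to the entity running the workflow rather than to the task definition:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_data(url):
    """A plain task: it declares nothing about how it will be scheduled."""
    return "payload from %s" % url

class Engine:
    """The runner, not the task, decides synchronous vs. asynchronous."""
    def __init__(self, pool=None):
        self.pool = pool  # no pool -> run tasks inline

    def run(self, task, *args):
        if self.pool is None:
            return task(*args)                # synchronous: plain call
        return self.pool.submit(task, *args)  # asynchronous: returns a Future

# The same task, executed under two different strategies chosen by the engine.
sync_result = Engine().run(fetch_data, "http://a")
with ThreadPoolExecutor(max_workers=2) as pool:
    async_result = Engine(pool).run(fetch_data, "http://b").result()
```

Because synchrony lives in the engine here, persistence and resumption logic never has to special-case "async tasks", which is roughly the concern being raised.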


So we actually talked to people a lot (including Josh) and provided this 
reasoning when this question was raised again. The reaction was nearly always positive 
and made a lot of sense to customers and developers.

Thought #2: A library shouldn't drive a project where it’s used.

To me this assumes said library is set in stone, can't be changed, and can't 
be evolved. If a library is 'dead/vaporware' then sure, I would 100% agree with 
this, but the libraries in openstack do not fit into that category; and 
those libraries can be evolved/developed/improved. As a community I think it is 
our goal to grow the libraries in the community (not reduce them or avoid them, 

Re: [openstack-dev] [Neutron][ML2]

2014-03-18 Thread racha
Hi Mathieu,
   Sorry I wasn't following the recent progress on ML2, and I was
effectively missing the right abstractions of all MDs in my out of topic
questions.
If I understand correctly, there will be no priority between all MDs
binding the same port, but an optional port filter could also be used so
that the first responding MD matching the filter will assign itself.

Thanks for your answer.

Best Regards,
Racha



On Mon, Mar 17, 2014 at 3:17 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi racha,

 I don't think your topic has anything to do with Nader's topic.
 Please, create another topic, it would be easier to follow.
 FYI, robert kukura is currently refactoring the MD binding, please
 have a look here : https://bugs.launchpad.net/neutron/+bug/1276391. As
 i understand, there won't be priority between MDs that can bind the same
 port. The first one to respond to the binding request will give its
 vif_type.

 Best,

 Mathieu

 On Fri, Mar 14, 2014 at 8:14 PM, racha ben...@gmail.com wrote:
  Hi,
Is it possible (in the latest upstream) to partition the same
  integration bridge br-int into multiple isolated partitions (in terms
 of
  lvids ranges, patch ports, etc.) between OVS mechanism driver and ODL
  mechanism driver? And then how can we pass some details to Neutron API
 (as
  in the provider segmentation type/id/etc) so that ML2 assigns a mechanism
  driver to the virtual network? The other alternative I guess is to create
  another integration bridge managed by a different Neutron instance?
 Probably
  I am missing something.
 
  Best Regards,
  Racha
 
 
  On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti nader.laho...@gmail.com
  wrote:
 
  1) Does it mean an interim solution is to have our own plugin (and have
  all the changes in it) and declare it as core_plugin instead of
 Ml2Plugin?
 
  2) The other issue as I mentioned before, is that the extension(s) is
 not
  showing up in the result, for instance when create_network is called
  [result = super(Ml2Plugin, self).create_network(context, network)], and
 as
  a result they cannot be used in the mechanism drivers when needed.
 
  Looks like the process_extensions is disabled since the fix for Bug 1201957
  was committed, and here is the change:
  Any idea why it is disabled?
 
  --
  Avoid performing extra query for fetching port security binding
 
  Bug 1201957
 
 
  Add a relationship performing eager load in Port and Network
 
  models, thus preventing the 'extend' function from performing
 
  an extra database query.
 
  Also fixes a comment in securitygroups_db.py
 
 
  Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
 
  commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (Salvatore Orlando, 8 months ago)
 
  neutron/db/db_base_plugin_v2.py:
 
  @@ -995,7 +995,7 @@ def create_network(self, context, network):
               'status': constants.NET_STATUS_ACTIVE}
           network = models_v2.Network(**args)
           context.session.add(network)
  -        return self._make_network_dict(network)
  +        return self._make_network_dict(network, process_extensions=False)
 
       def update_network(self, context, id, network):
           n = network['network']
 
 
  ---
 
 
  Regards,
  Nader.
 
 
 
 
 
  On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura kuk...@noironetworks.com
 
  wrote:
 
 
  On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
 
  Yes, that sounds good to be able to load extensions from a mechanism
  driver.
 
  But another problem I think we have with ML2 plugin is the list
  extensions supported by default [1].
  The extensions should only load by MD and the ML2 plugin should only
  implement the Neutron core API.
 
 
  Keep in mind that ML2 supports multiple MDs simultaneously, so no
 single
  MD can really control what set of extensions are active. Drivers need
 to be
  able to load private extensions that only pertain to that driver, but
 we
  also need to be able to share common extensions across subsets of
 drivers.
  Furthermore, the semantics of the extensions need to be correct in the
 face
  of multiple co-existing drivers, some of which know about the
 extension, and
  some of which don't. Getting this properly defined and implemented
 seems
  like a good goal for juno.
 
  -Bob
 
 
 
  Any though ?
  Édouard.
 
  [1]
 
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87
 
 
 
  On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki amot...@gmail.com
 wrote:
 
  Hi,
 
  I think it is better to continue the discussion here. It is a good log
  :-)
 
  Eugine and I talked the related topic to allow drivers to load
  extensions)  in Icehouse Summit
  but I could not have enough time to work on it during Icehouse.
  I am still interested in implementing it and will register a blueprint
  on it.
 
  etherpad in icehouse summit has baseline 

Re: [openstack-dev] MuranoPL questions?

2014-03-18 Thread Joshua Harlow
Sure, I understand how this could make it harder.

It's a hard question to answer: which one is more worth it, creating a bunch of 
DSLs that u now have to implement correctly in a runtime that is actually 
pretty hard to isolate/control the execution of, or biting the 
bullet and moving to something that actually has a runtime that provides these 
features? Since it appears the DSLs being created are nearly turing 
complete, it seems inevitable that said runtime will be needed (but of course 
I'm all for being proven wrong).

-Josh

From: Zane Bitter zbit...@redhat.com
Organization: Red Hat
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, March 18, 2014 at 9:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] MuranoPL questions?

On 18/03/14 08:01, Ruslan Kamaldinov wrote:
Joshua, Clint,

The only platform I'm aware of which fully supports true isolation and which
has been used in production for this purpose is the Java VM. I know people who
developed systems for online programming competitions and really smart kids
tried to break it without any luck :)

Since we're speaking about Heat, Mistral and Murano DSLs and all of them need an
execution engine. Do you think that JVM could become a host for that engine?

-2. Deploying OpenStack is hard enough already.

JVM has a lot of potential:
- it allows to fine-tune security manager to allow/disallow specific actions
- it can execute a lot of programming languages - Python, Ruby, JS, etc
- it has been used in production for this specific purpose for years

But it also introduces another layer of complexity:
- it's another component to deploy, configure and monitor
- it's non-python, which means it should be supported by infra
- we will need to run a java service and potentially have some java code to
accept and process user code


Thanks,
Ruslan

On Mon, Mar 17, 2014 at 10:40 PM, Joshua Harlow 
harlo...@yahoo-inc.com wrote:
So I guess this is similar to the other thread.

http://lists.openstack.org/pipermail/openstack-dev/2014-March/030185.html

I know that the way YQL has done it could be a good example, where the
core DSL (the select queries and such) is augmented by the addition and
usage of JS, for example
http://developer.yahoo.com/yql/guide/yql-execute-examples.html#yql-execute-example-helloworld
(ignore that it's XML, haha). Such usage already provides rate-limits and
execution-limits
(http://developer.yahoo.com/yql/guide/yql-execute-intro-ratelimits.html), and
afaik if u do something like what YQL is doing then u don't need to recreate
similar features in your DSL (and then u also don't need to teach people
about a new language and syntax and ...)

Just an idea (I believe lua offers similar controls/limits.., although its
not as popular of course as JS).
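For reference, the rate-limit half of that idea is usually implemented as a token bucket. A minimal, deterministic sketch (illustrative only; YQL's actual enforcement is server-side and not public, and the clock injection is purely for testability):

```python
class TokenBucket:
    """Allow at most `capacity` calls in a burst, refilled at `rate`/second."""
    def __init__(self, rate, capacity, clock):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.clock = clock          # injectable time source
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Fake clock so the example is deterministic.
t = [0.0]
bucket = TokenBucket(rate=1, capacity=2, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(3)]   # third call exceeds the burst
t[0] += 1.0                                  # one second later: one token back
later = bucket.allow()
```

Execution-time limits are the harder half, since they require an isolated runtime that can be interrupted, which is the point being made about JS/Lua sandboxes.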

From: Stan Lagun sla...@mirantis.com

Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, March 17, 2014 at 3:59 AM

To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] MuranoPL questions?

Joshua,

Completely agree with you. We wouldn't be writing another language if we
knew how any of the existing languages could be used for this particular purpose.
If anyone suggests such a language and shows how it can be used to solve those
issues the DSL was designed to solve, we will consider dropping MuranoPL. np

Surely DSL hasn't stood the test of time. It just hasn't had a chance yet.
100% of successful programming languages were in such a position once.

Anyway, it is the best time to come forward with your suggestions. If you know how
exactly the DSL can be replaced or improved, we would like you to share


On Wed, Mar 12, 2014 at 2:05 AM, Joshua Harlow 
harlo...@yahoo-inc.com
wrote:

I guess I might be a bit biased toward programming; so maybe I'm not the
target audience.

I'm not exactly against DSLs, I just think that DSLs need to be really,
really proven to become useful (in general this applies to any language that
'joe' comp-sci student can create). It's not that hard to just make one, but
the real hard part is making one that people actually like and use and
survives the test of time. That's why I think it's just nicer to use
languages that have stood the test of time already (if we can), creating a
new DSL (muranoPL seems to be slightly more than a DSL imho) means creating
a new language that has not stood the test of time (in terms of lifetime,
battle tested, supported over years) so that's just the concern I have.

Of course we have to 

Re: [openstack-dev] [Ironic] Manual scheduling nodes in maintenance mode

2014-03-18 Thread Robert Collins
On 15 March 2014 13:07, Devananda van der Veen devananda@gmail.com wrote:
 +1 to the idea.

 However, I think we should discuss whether the rescue interface is the
 appropriate path. Its initial intention was to tie into Nova's rescue
 interface, allowing a user whose instance is non-responsive to boot into a
 recovery mode and access the data stored within their instance. I think
 there are two different use-cases here:

 Case A: a user of Nova who somehow breaks their instance, and wants to boot
 into a rescue or recovery mode, preserving instance data. This is useful
 if, eg, they lost network access or broke their grub config.

 Case B: an operator of the baremetal cloud whose hardware may be
 malfunctioning, who wishes to hide that hardware from users of Case A while
 they diagnose and fix the underlying problem.

 As I see it, Nova's rescue API (and by extension, the same API in Ironic) is
 intended for A, but not for B.  TripleO's use case includes both of these,
 and may be conflating them.

I agree.

 I believe Case A is addressed by the planned driver.rescue interface. As for
 Case B, I think the solution is to use different tenants and move the node
 between them. This is a more complex problem -- Ironic does not model
 tenants, and AFAIK Nova doesn't reserve unallocated compute resources on a
 per-tenant basis.

 That said, I think we will need a way to indicate this bare metal node
 belongs to that tenant, regardless of the rescue use case.

I'm not sure Ironic should be involved in scheduling (and giving a
node to a tenant is a scheduling problem).

If I may sketch an alternative - when a node is put into maintenance
mode, keep publishing it to the scheduler, but add an extra spec to it
that won't match any request automatically.

Then 'deploy X to a maintenance node machine' is a simple nova boot with
a scheduler hint to explicitly choose that machine, and all the
regular machinery will take place.
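A rough sketch of the filtering Rob describes (hypothetical names, not actual Nova scheduler code): maintenance nodes stay published, but carry a spec that only an explicit hint can match:

```python
def host_passes(host_specs, request_hints):
    """Skip hosts flagged 'maintenance' unless the request names them."""
    if not host_specs.get("maintenance"):
        return True  # normal hosts always pass this filter
    # Maintenance hosts only match when explicitly targeted via a hint.
    return request_hints.get("force_host") == host_specs.get("hostname")

hosts = [
    {"hostname": "node1", "maintenance": False},
    {"hostname": "node2", "maintenance": True},
]

# A normal request never lands on the maintenance node...
normal = [h["hostname"] for h in hosts if host_passes(h, {})]
# ...but an operator's explicit hint can still select it.
targeted = [h["hostname"] for h in hosts
            if host_passes(h, {"force_host": "node2"})]
```

This keeps the "give a node to a tenant" decision in the scheduler, which is the separation of concerns being argued for.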

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] How to reliably detect VM failures?

2014-03-18 Thread Zane Bitter

On 18/03/14 12:42, Steven Dake wrote:

On 03/18/2014 07:54 AM, Qiming Teng wrote:

Hi, Folks,

   I have been trying to implement a HACluster resource type in Heat. I
haven't created a BluePrint for this because I am not sure everything
will work as expected.

   The basic idea is to extend the OS::Heat::ResourceGroup resource type
with inner resource types fixed to be OS::Nova::Server.  Properties for
this HACluster resource may include:

   - init_size: initial number of Server instances;
   - min_size: minimal number of Server instances;
   - sig_handler: a reference to a sub-class of SignalResponder;
   - zones: a list of strings representing the availability zones, which
   could be the names of the racks where the Servers can be booted;
   - recovery_action: a list of supported failure recovery actions, such
   as 'restart', 'remote-restart', 'migrate';
   - fencing_options: a dict specifying what to do to shutdown the Server
   in a clean way so that data consistency in storage and network are
   preserved;
   - resource_ref: a dict for defining the Server instances to be
   created.

   Attributes of the HACluster may include:
   - refs: a list of resource IDs for the currently active Servers;
   - ips: a list of IP addresses for convenience.

   Note that the 'remote-restart' action above is today referred to as
'evacuate'.
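For illustration, a template using the proposed type might look like this (a hypothetical HOT sketch; OS::Heat::HACluster and its properties are exactly the ones proposed above and do not exist in Heat today):

```yaml
heat_template_version: 2013-05-23

resources:
  web_cluster:
    type: OS::Heat::HACluster      # hypothetical, per this proposal
    properties:
      init_size: 3
      min_size: 2
      zones: [rack-1, rack-2]
      recovery_action: [restart, remote-restart]
      fencing_options: {method: soft-shutdown, timeout: 60}
      resource_ref:
        type: OS::Nova::Server
        properties: {image: fedora-20, flavor: m1.small}

outputs:
  active_servers:
    value: {get_attr: [web_cluster, refs]}
  active_ips:
    value: {get_attr: [web_cluster, ips]}
```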

   The most difficult issue here is to come up with a reliable VM failure
detection mechanism.  The service_group feature in Nova is only concerned
with the OpenStack services themselves, not the VMs.  Considering that
in our customer's cloud environment, user provided images can be used,
we cannot assume some agents in the VMs to send heartbeat signals.

   I have checked the 'instance' table in Nova database, it seemed that
the 'updated_at' column is only updated when the VM state changes and is
reported.  If the 'heartbeat' messages are coming in from many VMs very
frequently, there could be a DB query performance/scalability issue,
right?

   So, how can I detect VM failures reliably, so that I can notify Heat
to take the appropriate recovery action?

Qiming,

Check out

https://github.com/openstack/heat-templates/blob/master/cfn/F17/WordPress_Single_Instance_With_HA.template


You should be able to use the HARestarter resource and functionality to
do healthchecking of a vm.


HARestarter is actually pretty problematic, both in a "causes major 
architectural headaches for Heat and will probably be deprecated very 
soon" sense and a "may do very unexpected things to your resources" 
sense. I wouldn't recommend it.


cheers,
Zane.


It would be cool if nova could grow a feature to actively look at the
vm's state internally and determine if it was healthy (eg look at its
memory and see if the scheduler is running, things like that) but this
would require individual support from each hypervisor for such
functionality.

Until that happens, healthchecking from within the vm seems like the
only reasonable solution.

Regards
-steve


Regards,
   - Qiming

Research Scientist
IBM Research - China
tengqim at cn dot ibm dot com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Constructive Conversations

2014-03-18 Thread Chris Behrens

On Mar 18, 2014, at 11:57 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 […]
 Not to detract from what you're saying, but this is 'meh' to me. My company 
 has some different kind of values thing every 6 months it seems and maybe 
 it's just me but I never really pay attention to any of it.  I think I have 
 to put something on my annual goals/results about it, but it's just fluffy 
 wording.
 
 To me this is a self-policing community, if someone is being a dick, the 
 others should call them on it, or the PTL for the project should stand up 
 against it and set the tone for the community and culture his project wants 
 to have.  That's been my experience at least.
 
 Maybe some people would find codifying this helpful, but there are already 
 lots of wikis and things that people can't remember on a daily basis so 
 adding another isn't probably going to help the problem. Bully's don't tend 
 to care about codes, but if people stand up against them in public they 
 should be outcast.

I agree with the goals and sentiment of Kurt’s message. But, just to add a 
little to Matt’s reply: Let’s face it. Everyone has a bad day now and then. 
It’s easier for some people to lose their cool over others. Nothing’s going to 
change that.

- Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Offset support in REST API pagination

2014-03-18 Thread Jay Pipes
On Tue, 2014-03-18 at 13:30 -0500, Steven Kaufer wrote:
 Jay Pipes jaypi...@gmail.com wrote on 03/18/2014 12:02:50 PM:
 
  From: Jay Pipes jaypi...@gmail.com
  To: openstack-dev@lists.openstack.org, 
  Date: 03/18/2014 12:15 PM
  Subject: Re: [openstack-dev] Offset support in REST API pagination
  
  On Tue, 2014-03-18 at 11:31 -0500, Steven Kaufer wrote:
   First, here is some background on this topic:
http://www.gossamer-threads.com/lists/openstack/dev/2777
   
   Does anyone have any insight as to why offset is not supported in
 the
   REST API calls that support pagination?   I realize that there are
   tradeoffs when using a offset (vs. marker) but I believe that
 there is
   value in supporting both.  For example, if you want to jump to the
   n-th page of data without having to traverse all of the previous
   pages.
   
   Is there a reason why the APIs do not support either a marker or
 an
   offset (obviously not both) on the API request?  It appears that
   sqlalchemy has offset support.
   
   Also, it seems that cinder at least looks for the offset parameter
   (but ignores it).  Does this mean that it was supported at one
 time
   but later the support was removed?
https://github.com/openstack/cinder/blob/master/cinder/api/v2/
  volumes.py#L214
   
   Thanks for the information.
  
  Hail to thee, stranger! Thou hast apparently not passed into the
 cave of
  marker/offset before!
  
  I humbly direct you to buried mailing list treasures which shall
  enlighten you!
  
  This lengthy thread shall show you how yours truly was defeated in
  written combat by the great Justin Santa Barbara, who doth exposed
 the
  perils of the offset:
  
  http://www.gossamer-threads.com/lists/openstack/dev/2803
  
  A most recent incantation of the marker/offset wars is giveth here:
  
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/018861.html
  
  Best of days to you,
  -jay
 
 Jay:
 
 Thanks for the feedback and the history on this topic. 

No probs. Thanks for indulging my Renaissance binge this morning.

 I understand that the limit/marker
 approach is superior when simply traversing all of the pages. However,
 consider the
 following:
 
 - User knows that there are 1000 items (VMs, volumes, images, really
 doesn't matter)
 - User knows that the item that they want is in roughly the middle of
 the data set (assume
 everything is sorted by display name)
 - User cannot remember the exact name so a filter will not help and
 changing the sort 
 direction will not help (since the item they want it is in the middle
 of the dataset)
 - User supplies an offset of 500 to jump into the middle of the data
 set
 - User then uses the marker approach to traverse the pages from this
 point to find the
 item that they want
 
 In this case the offset approach is not used to traverse pages so
 there are no issues with
 missing an item or seeing a duplicate.

I guess I wonder how common that use case is, actually. I can't remember
ever running into a user who asked for such a thing, but perhaps that's
just me.

 Why couldn't the APIs support either marker or offset on a given
 request? 

I think for two reasons:

1) Would be potentially quite confusing to the user
2) We're lazy? :)

 Also, to encourage
 the use of marker instead of offset, the next/previous links on any
 request with an offset
 supplied should contain the appropriate marker key values -- this
 should help discourage
 simply increasing the offset when traversing the pages.

Well, yes, this already happens today in the projects that support
pagination, AFAIK.

Best,
-jay

 I realize that if only one solution had to be chosen, then
 limit/marker would always win
 this war. But why can't both be supported?
 
 Thanks,
 
 Steven Kaufer
 
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
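The offset-vs-marker trade-off argued over in this thread is easy to show in miniature. A hedged sketch in plain Python (an in-memory sorted list stands in for the SQL OFFSET/keyset queries the APIs would actually issue):

```python
def page_by_offset(items, offset, limit):
    """OFFSET-style paging: can jump anywhere, but drifts under inserts."""
    return items[offset:offset + limit]

def page_by_marker(items, marker, limit):
    """Keyset ('marker') paging: everything strictly after the last-seen key."""
    return [x for x in items if x > marker][:limit]

names = ["a", "b", "c", "d", "e"]          # sorted by display name

page1 = page_by_offset(names, 0, 2)        # first page: ['a', 'b']
names.insert(1, "aa")                      # concurrent insert before the offset
page2_off = page_by_offset(names, 2, 2)    # ['b', 'c']: 'b' is seen twice
page2_mark = page_by_marker(names, page1[-1], 2)  # ['c', 'd']: no duplicate

# Offset's one advantage, per Steven's use case: jumping straight into the
# middle of the data set without walking every page first.
middle = page_by_offset(names, len(names) // 2, 2)
```

The duplicate on `page2_off` is exactly the failure mode Justin's old thread describes; the marker query stays stable because it keys off the data, not a position.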


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-18 Thread Georgy Okrokvertskhov
Hi Zane,

Thank you for your feedback. This is very important for us to find out how
our approach can be aligned with existing solutions. Let me try to explain
how we come up to specific solutions.


It does seem really awkward to me (and not just because of all the
$signs), because it's duplicating basically all of the functionality of
Heat. e.g. in MuranoPL you have:
Properties:
  name:
Contract: $.string().notNull()

whereas in HOT this would be:

parameters:
  name:
type: string
constraints:
  - length: {min: 1}

First of all, parameters are common entities in all different services.
Some of them use config files, some of them use yaml formats. I don't see
that this is a significant overlap with Heat. While Murano parameters can
be directly passed to the generated Heat template, there is a major difference
between Murano parameters and Heat parameters. We use these parameters to
bind different entities together. The good example is here [1][2]

Properties:
  name:
Contract: $.string().notNull()

  primaryController:
Contract: $.class(PrimaryController).notNull()

  secondaryControllers:
Contract: [$.class(SecondaryController).notNull()]

As you see, parameters here are used to bind different applications as
PrimaryController and Secondary controllers are different VMs with
different configurations.

Murano can't use Heat parameters for that as we need to bind
applications before the Heat template is generated. Even if we tried to use
Heat for keeping such parameters, it would lead to Murano parsing and
processing Heat templates, which we want to avoid.


Looking at e.g. [1], more or less everything in here can be done already 
inside a Heat template, using get_file and str_replace.


This definitely can be done inside a Heat template, but these operations
(str_replace) are not specific to Heat. As we use these variables
before HOT template creation, it is better to do string operations
inside the service and then pass the result to another service. As we can't
expose Python functions for that, we just wrap them in our DSL syntax,
which is just a couple of lines of Python code around Python's string
methods.
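For comparison, the Heat-only approach Zane refers to would look roughly like this (resource and file names are illustrative; get_file and str_replace are real HOT intrinsic functions):

```yaml
parameters:
  name:
    type: string

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: fedora-20          # illustrative values
      flavor: m1.small
      user_data:
        str_replace:
          # Substitute the parameter into an external script at deploy time.
          template: {get_file: configure_app.sh}
          params:
            $APP_NAME: {get_param: name}
```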


It sounds like this is a DSL in which you write everything imperatively,
then it gets converted behind the scenes into a declarative model in a
completely different language (but not using any of the advanced features
of that language) and passed to Heat, which turns it back into a workflow
to execute. That seems bizarre to me. Surely Murano should be focusing on
filling the gaps in Heat, rather than reimplementing it in a different
paradigm?

We will be very happy to fill gaps in the HOT syntax to extend it. At the
same time, I need to mention that the picture you drew does not perfectly
reflect what we are doing. Murano uses its own definition of application,
which covers the application-specific aspects: actions and application
binding. As a part of the application definition we store Heat snippets as
files. We don't hide anything from Heat under the Murano layer. All
features available in Heat can be used in Murano, as the user has the ability to
add a Heat template to the application definition. As Heat's main feature is to
manage different resources, we pass these tasks to Heat. Murano does not do
any kind of infrastructure resource management outside of Heat.

Heat template generation is just the first step in an application's life. Murano
provides a way to define workflows for different aspects
of the application, which can be invoked by Heat during generation or outside of
Heat by Mistral, Ceilometer and other services, including Murano.

What I'm imagining here is something along the lines of:
- Heat implements hooks to customise its workflows, as proposed in [2],
[3].
- Deployments are defined declaratively using HOT syntax.
- Workflows - to wrap the deployment operations, to customise the
deployment and to perform lifecycle operations like backups - are defined
using a Mistral DSL (I assume this exists already? I haven't looked into
it).
- Murano provides a way to bundle the various workflow definitions, HOT
models, and other files into an application package.

Murano is not focused on deployment itself. As soon as we have HOT, Murano
is responsible for generating a proper HOT template to make actual deployments.
As soon as HOT software components are ready, application publishers will
use them in Heat templates.

Can anybody enlighten me as to what features would be missing from this
that would warrant creating a new programming language?

I see this in the following way: who will generate the HOT template for my
complex multi-tier applications when I have only templates for components?
It looks like I will have to write a new template by myself, understanding
all possible relations between components and their parameters. Then I have
to add some specific workflows to Mistral and somehow bind them to the
resources deployed by Heat. Then I have to understand what kind of
notifications to expect and somehow react to them 

Re: [openstack-dev] [Neutron][ML2]

2014-03-18 Thread Robert Kukura


On 3/18/14, 3:04 PM, racha wrote:

Hi Mathieu,
 Sorry I wasn't following the recent progress on ML2, and I was 
effectively missing the right abstractions of all MDs in my out of 
topic questions.
If I understand correctly, there will be no priority between all MDs 
binding the same port, but an optional port filter could also be 
used so that the first responding MD matching the filter will assign 
itself.

Hi Racha,

The bug fix Mathieu referred to below that I am working on will move the 
attempt to bind outside the DB transaction that triggered the 
[re]binding, and thus will involve a separate DB transaction to commit 
the result of the binding. But the basic algorithm for binding ports 
will not be changing as part of this fix. The bind_port() method is 
called sequentially on each mechanism driver in the order they are 
listed in the mechanism_drivers config variable, until one succeeds in 
binding the port, or all have failed to bind the port. Since this will 
now be happening outside a DB transaction, it's possible that more than 
one thread could simultaneously try to bind the same port, and this 
concurrency is handled by having all such threads use the result that 
gets committed first.
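In outline, the sequential attempt Bob describes looks like the following (a simplified sketch, not the actual Ml2Plugin code; the fake context and drivers are invented for illustration):

```python
def try_to_bind(context, mechanism_drivers):
    """Call bind_port() on each driver in configured order; the first
    driver that records a binding result wins."""
    for driver in mechanism_drivers:
        driver.bind_port(context)            # a driver may decline to bind
        if context.bound_segment is not None:
            return driver                    # first successful binding wins
    return None                              # every driver declined

class FakeContext:
    bound_segment = None

class DecliningDriver:                       # e.g. its filter doesn't match
    def bind_port(self, ctx):
        pass

class BindingDriver:                         # e.g. an OVS-style driver
    def bind_port(self, ctx):
        ctx.bound_segment = "segment-1"

# Drivers are tried in mechanism_drivers order; the second one binds.
winner = try_to_bind(FakeContext(), [DecliningDriver(), BindingDriver()])
```

Under the fix described above, two threads may each run this loop concurrently for the same port; whichever binding result commits to the DB first is the one kept.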


-Bob



Thanks for your answer.

Best Regards,
Racha



On Mon, Mar 17, 2014 at 3:17 AM, Mathieu Rohon 
mathieu.ro...@gmail.com wrote:


Hi racha,

I don't think your topic has anything to do with Nader's topic.
Please, create another topic, it would be easier to follow.
FYI, robert kukura is currently refactoring the MD binding, please
have a look here : https://bugs.launchpad.net/neutron/+bug/1276391. As
i understand, there won't be priority between MDs that can bind the same
port. The first one to respond to the binding request will give its
vif_type.

Best,

Mathieu

On Fri, Mar 14, 2014 at 8:14 PM, racha ben...@gmail.com
wrote:
 Hi,
   Is it possible (in the latest upstream) to partition the same
 integration bridge br-int into multiple isolated partitions (in terms
 of lvids ranges, patch ports, etc.) between the OVS mechanism driver
 and the ODL mechanism driver? And then how can we pass some details to
 the Neutron API (as in the provider segmentation type/id/etc.) so that
 ML2 assigns a mechanism driver to the virtual network? The other
 alternative, I guess, is to create another integration bridge managed
 by a different Neutron instance? Probably I am missing something.

 Best Regards,
 Racha


 On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti nader.laho...@gmail.com wrote:

 1) Does it mean an interim solution is to have our own plugin (with
 all the changes in it) and declare it as core_plugin instead of
 Ml2Plugin?

 2) The other issue, as I mentioned before, is that the extension(s)
 are not showing up in the result, for instance when create_network is
 called [result = super(Ml2Plugin, self).create_network(context,
 network)], and as a result they cannot be used in the mechanism
 drivers when needed.

 It looks like process_extensions was disabled when the fix for Bug
 1201957 was committed; here is the change. Any idea why it was
 disabled?

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957

 Add a relationship performing eager load in Port and Network
 models, thus preventing the 'extend' function from performing
 an extra database query.

 Also fixes a comment in securitygroups_db.py

 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
 Salvatore Orlando (salv-orlando)

 neutron/db/db_base_plugin_v2.py:

 @@ -995,7 +995,7 @@ def create_network(self, context, network):
              'status': constants.NET_STATUS_ACTIVE}
          network = models_v2.Network(**args)
          context.session.add(network)
 -        return self._make_network_dict(network)
 +        return self._make_network_dict(network, process_extensions=False)

      def update_network(self, context, id, network):
          n = network['network']
 ---
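
To see why that one-line change would make extension attributes disappear from the result, here is a minimal sketch of what a process_extensions flag typically gates. The function and hook names below are hypothetical simplifications, not Neutron's actual API: registered extension hooks each get a chance to add their keys to the response dict, and passing process_extensions=False skips all of them — which matches the observation that the extensions no longer show up in create_network's result.

```python
# Sketch only -- simplified model of a process_extensions flag, not Neutron code.

# Extension hooks registered by loaded extensions (hypothetical registry).
_dict_extend_funcs = []


def register_dict_extend_func(func):
    _dict_extend_funcs.append(func)


def make_network_dict(network, process_extensions=True):
    """Build the API response dict for a network."""
    res = {"id": network["id"], "name": network["name"]}
    if process_extensions:
        # Each extension hook adds its own attributes to the response.
        for func in _dict_extend_funcs:
            func(res, network)
    return res


# A hypothetical extension that contributes a 'port_security_enabled' key.
register_dict_extend_func(
    lambda res, db_obj: res.update(
        {"port_security_enabled": db_obj.get("port_security", True)}))

net = {"id": "net-1", "name": "demo"}
print(make_network_dict(net))                            # extension key present
print(make_network_dict(net, process_extensions=False))  # extension key skipped
```

Under this model, any mechanism driver that inspects the returned dict after the plugin's create_network call would never see the extension-supplied attributes when the flag is False.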


 Regards,
 Nader.





 On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura kuk...@noironetworks.com wrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a mechanism
 driver.

 But another problem I 
