Re: [openstack-dev] time.sleep is affected by eventlet.monkey_patch()

2014-03-07 Thread Yuriy Taraday
On Fri, Mar 7, 2014 at 11:20 AM, Yuriy Taraday  wrote:

> All in all it sounds like an eventlet bug. I'm not sure how it can be
> dealt with though.
>

Digging into it I found out that eventlet uses time.time() by default, which
is not monotonic. There's no clean way to replace it, but you can
work around this:
1. Get a monotonic clock function here:
http://stackoverflow.com/a/1205762/238308 (note that for FreeBSD or MacOS
you'll have to use a different constant).
2. Make eventlet's hub use it:
eventlet.hubs._threadlocal.hub =
eventlet.hubs.get_default_hub().Hub(monotonic_time)
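
Put together, the two steps might look roughly like this (a Linux-only
sketch following the linked answer; the CLOCK_MONOTONIC value differs on
FreeBSD/MacOS):

    import ctypes
    import os

    import eventlet.hubs

    CLOCK_MONOTONIC = 1  # Linux value, from <linux/time.h>

    class timespec(ctypes.Structure):
        _fields_ = [('tv_sec', ctypes.c_long), ('tv_nsec', ctypes.c_long)]

    librt = ctypes.CDLL('librt.so.1', use_errno=True)
    clock_gettime = librt.clock_gettime
    clock_gettime.argtypes = [ctypes.c_int, ctypes.POINTER(timespec)]

    def monotonic_time():
        t = timespec()
        if clock_gettime(CLOCK_MONOTONIC, ctypes.pointer(t)) != 0:
            errno_ = ctypes.get_errno()
            raise OSError(errno_, os.strerror(errno_))
        return t.tv_sec + t.tv_nsec * 1e-9

    # install a hub whose timers are scheduled off the monotonic clock
    eventlet.hubs._threadlocal.hub = \
        eventlet.hubs.get_default_hub().Hub(monotonic_time)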

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-07 Thread Matthias Runge

On Wed, Mar 05, 2014 at 10:36:22PM +, Lyle, David wrote:
> I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews 
> very insightful and more importantly have come to rely on their quality. He 
> has contributed to several areas in Horizon and he understands the code base 
> well.  Radomir is also very active in tuskar-ui both contributing and 
> reviewing.
> 
+1 from me, I fully support this. Radomir has done an impressive job
and his reviews and contributions have been good since he started.

Matthias
- -- 
Matthias Runge 



Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-07 Thread Akihiro Motoki
+1 from me too.
Having a core member who is familiar with horizon and tuskar-ui code
base encourages the integration more!

Akihiro

On Fri, Mar 7, 2014 at 4:56 PM, Matthias Runge  wrote:
>
> On Wed, Mar 05, 2014 at 10:36:22PM +, Lyle, David wrote:
>> I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his 
>> reviews very insightful and more importantly have come to rely on their 
>> quality. He has contributed to several areas in Horizon and he understands 
>> the code base well.  Radomir is also very active in tuskar-ui both 
>> contributing and reviewing.
>>
> +1 from me, I fully support this. Radomir has done an impressive job
> and his reviews and contributions have been good since he started.
>
> Matthias
> - --
> Matthias Runge 
>


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we should introduce this feature?

2014-03-07 Thread Yuzhou (C)
Hi Joe,

Thanks for your reply!

First, generally, in a public or private cloud, the end users of VMs have
no right to create new VMs directly.
If someone wants to create new VMs, he or she needs to go through an approval process.
Then, the cloud administrator creates a new VM for the applicant. So the workflow
that you suggested is not convenient.

Second, I am sure that a feature where the disk rolls back automatically,
without relying on software inside the VM, is very useful.
For example, in VDI, a VM may be assigned to no particular person; many people
have the right to use it.
Someone may install a virus in this VM. For security reasons, after
one person uses the VM, we hope the disk rolls back to a clean state,
so that others can use it safely.

@stackers,
Let's discuss how to implement this feature! Are there any other
suggestions?

Best regards,

Zhou Yu


> -Original Message-
> From: Joe Gordon [mailto:joe.gord...@gmail.com]
> Sent: Friday, March 07, 2014 2:21 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
> stopping VM, data will be rollback automatically), do you think we should
> introduce this feature?
> 
> On Wed, Mar 5, 2014 at 11:45 AM, Qin Zhao  wrote:
> > Hi Joe,
> > For example, I used to use a private cloud system, which will
> > calculate charges bi-weekly, and its charging formula looks like
> > "Total_charge =
> > Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
> > Volume_number*C4".  Those Instance/Image/Volume numbers are the numbers
> > of those objects that the user created within these two weeks. And it also
> > has quotas to limit total image size and total volume size. That
> > formula is not very exact, but you can see that it regards each of my
> > 'create' operations as a 'ticket', and will charge all those tickets,
> > plus the instance duration
> 
> Charging for VM creation is not very cloud-like.  Cloud instances
> should be treated as ephemeral, as something that you can throw away and
> recreate at any time.  Additionally, clouds should charge for resources used
> (instance CPU hours, network load, etc.), and not for API calls (at least not
> in any meaningful amount).
> 
> > fee. In order to reduce the expenses of my department, I am asked not
> > to create instances very frequently, and not to create too many images
> > and volumes. The image quota is not very big. And I would never be
> > permitted to exceed the quota, since that requires additional dollars.
> >
> >
> > On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon 
> wrote:
> >>
> >> On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao 
> wrote:
> >> > Hi Joe,
> >> > If we assume the user is willing to create a new instance, the
> >> > workflow you are saying is exactly correct. However, what I am
> >> > assuming is that the user is NOT willing to create a new instance.
> >> > If Nova can revert the existing instance, instead of creating a new
> >> > one, it will become the alternative way utilized by those users who
> >> > are not allowed to create a new instance.
> >> > Both paths lead to the target. I think we can not assume all the
> >> > people should walk through path one and should not walk through
> >> > path two. Maybe creating new instance or adjusting the quota is
> >> > very easy in your point of view. However, the real use case is
> >> > often limited by business process.
> >> > So I
> >> > think we may need to consider that some users can not or are not
> >> > allowed to creating the new instance under specific circumstances.
> >> >
> >>
> >> What sort of circumstances would prevent someone from deleting and
> >> recreating an instance?
> >>
> >> >
> >> > On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon
> 
> >> > wrote:
> >> >>
> >> >> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao 
> wrote:
> >> >> > Hi Joe, my meaning is that cloud users may not hope to create
> >> >> > new instances or new images, because those actions may require
> >> >> > additional approval and additional charging. Or, due to
> >> >> > instance/image quota limits, they can not do that. Anyway, from
> >> >> > user's perspective, saving and reverting the existing instance
> >> >> > will be preferred sometimes. Creating a new instance will be
> >> >> > another story.
> >> >> >
> >> >>
> >> >> Are you saying some users may not be able to create an instance at
> >> >> all? If so why not just control that via quotas.
> >> >>
> >> >> Assuming the user has the rights and quota to create one
> >> >> instance and one snapshot, your proposed idea is only slightly
> >> >> different then the current workflow.
> >> >>
> >> >> Currently one would:
> >> >> 1) Create instance
> >> >> 2) Snapshot instance
> >> >> 3) Use instance / break instance
> >> >> 4) delete instance
> >> >> 5) boot new instance from snapshot
> >> >> 6) goto step 3
> >> >>
> >> >> From what I gather you are saying that instead of 4/5 you want the
> >> >> user to be able to just reboot the instance. I don't think such a
> >> >> subtle change in behavior is worth a whole new API extension.
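
(For readers following along: the six-step workflow quoted above maps onto
python-novaclient roughly as below. This is just a sketch; the credentials,
image UUID and flavor ID are placeholders.)

    from novaclient.v1_1 import client

    nova = client.Client('user', 'password', 'tenant',
                         'http://keystone.example.com:5000/v2.0')
    server = nova.servers.create('vm1', '<image-uuid>', '<flavor-id>')  # 1
    snap_id = nova.servers.create_image(server, 'vm1-snap')             # 2
    # ... use / break the instance ...                                  # 3
    nova.servers.delete(server)                                         # 4
    server = nova.servers.create('vm1', snap_id, '<flavor-id>')         # 5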

[openstack-dev] [neutron]The duplicate name of security group

2014-03-07 Thread 黎林果
Hi stackers,


Duplicate security group names may cause some problems, don't you think?

https://bugs.launchpad.net/neutron/+bug/1289195

Thanks!

Lee Li


Re: [openstack-dev] [oslo][neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Miguel Angel Ajo


I'm really happy to see that I'm not the only one concerned about 
performance.



I'm reviewing the thread, and summarizing / replying to multiple people 
on the thread:



Ben Nemec,

* Thanks for pointing us to the previous thread about this topic:
http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html


Rick Jones,

* iproute commit  f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8  upstream,
I have to check if it's on my system.

* Very interesting investigation about sudo:

http://www.sudo.ws/repos/sudo/rev/e9dc28c7db60 this is as important
as the bottleneck in rootwrap when you start having lots of interfaces.
Good catch!

* To your question: my times are only from neutron-dhcp-agent &
neutron-l3-agent start to completion; system boot times are excluded
from the measurement (that's <1min).

* About the Linux networking folks not exposing API interfaces to avoid
lock-in: in the end they're already locked in with the cmd API
interface. If they made an API at the same level, it shouldn't be that
bad... but of course, it's not free...



Joe Gordon,

* yes, pypy start time is too slow, and I definitely must investigate
the RPython toolchain.


* Ideally, I agree, an automated py->C solution would be
the best from the openstack project point of view. Have you had any
experience using such a toolchain? Could you point me to some
examples?


* shedskin seems to do this kind of translation, for a limited python
subset, which would mean rewriting rootwrap's python to accommodate
such limitations.


If no tool offers the translation we need, or if the result is slow:

I'm not against a rewrite of rootwrap in C/C++, if we have developers
on the project with C/C++ experience, especially related to security.
I have such experience, and I'm sure there are more around (even if
not all openstack developers talk C). But that doesn't exclude
maintaining a rootwrap in python to foster innovation around the tool.
(Here I agree with Vishvananda Ishaya.)


Solly Ross,
 I haven't tried cython, but I will check it in a few minutes.


Iwamoto Toshihiro,

 Thanks for pointing us to "ip netns exec" too. I wonder if that's
related to the iproute upstream change Rick Jones was talking about.


Cheers,
Miguel Ángel.






On 03/06/2014 09:31 AM, Miguel Angel Ajo wrote:


On 03/06/2014 07:57 AM, IWAMOTO Toshihiro wrote:

At Wed, 05 Mar 2014 15:42:54 +0100,
Miguel Angel Ajo wrote:

3) I also find 10 minutes a long time to set up 192 networks/basic tenant
structures. I wonder if that time could be reduced by converting
system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at.)


Try benchmarking

$ sudo ip netns exec qfoobar /bin/echo


You're totally right, that takes the same time as rootwrap itself. It's
another point to think about from the performance point of view.
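
(A quick-and-dirty way to measure that overhead; the namespace name is a
placeholder, and this assumes passwordless sudo:)

    import subprocess
    import time

    N = 20
    start = time.time()
    for _ in range(N):
        subprocess.call(['sudo', 'ip', 'netns', 'exec', 'qfoobar',
                         '/bin/echo'])
    print('avg per call: %.3fs' % ((time.time() - start) / N))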

An interesting read:
http://man7.org/linux/man-pages/man8/ip-netns.8.html

ip netns does a lot of mounts to simulate a normal environment,
which a netns-aware application could avoid entirely.



Network namespace switching costs almost as much as a rootwrap
execution, IIRC.

Execution coalescing is not enough in this case, and we would need to
change how Neutron issues commands, IMO.


Yes, one option could be to coalesce all calls that go into
a namespace into a shell script and run this via
rootwrap > ip netns exec

But we would need a mechanism to determine if some of the steps failed,
and what the result / output was, something like failing line + result
code. I'm not sure if we rely on stdout/stderr results at any time.
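
(A hypothetical sketch of that idea: generate one script per namespace
that stops at the first failure and reports the failing line number and
its exit code on stdout. The commands are placeholders.)

    commands = [
        ['ip', 'link', 'set', 'tap0', 'up'],
        ['ip', 'addr', 'add', '10.0.0.1/24', 'dev', 'tap0'],
    ]
    script_lines = ['#!/bin/sh']
    for lineno, cmd in enumerate(commands, 1):
        script_lines.append(
            ' '.join(cmd) + ' || { echo "FAILED %d $?"; exit 1; }' % lineno)
    script = '\n'.join(script_lines) + '\n'
    # the script is then run once via: rootwrap > ip netns exec <ns> sh ...
    # and the caller parses "FAILED <line> <rc>" from stdout on error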









[openstack-dev] [nova] bugs that needs to be reviewed

2014-03-07 Thread sahid
Greetings,

There are two fixes for bugs that need to be reviewed. One is for
the shelve-instance feature and the other is for the API to get
the list of migrations in progress.

These two bugs are marked high and medium because they break
features. The code was pushed several months ago; it would be great if
some cores could take a look.

Fix: Unshelving an instance uses original image
https://review.openstack.org/#/c/72407/

Fix: Fix unicode error in os-migrations
https://review.openstack.org/#/c/61717/

Thanks,
s.



Re: [openstack-dev] [Cinder][FFE] Cinder switch-over to oslo.messaging

2014-03-07 Thread Thierry Carrez
Flavio Percoco wrote:
> On 06/03/14 11:50 +0100, Thierry Carrez wrote:
>> So on one hand this is a significant change that looks like it could
>> wait (little direct feature gain). On the other we have oslo.messaging
>> being adopted in a lot of projects, and we reduce the maintenance
>> envelope if we switch most projects to it BEFORE release.
>>
>> This one really boils down to how early it can be merged. If it's done
>> before the meeting next Tuesday, it's a net gain. If not, it becomes too
>> much of a distraction from bugfixes for reviewers and any regression it
>> creates might get overlooked.
> 
> FWIW, I just rebased it earlier today and the patch could be merged
> today if it gets enough reviews.

Discussed it with John and confirmed the exception. Go for it !

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] Incubation Request: Murano

2014-03-07 Thread Thierry Carrez
Steven Dake wrote:
> I'm a bit confused as well as to how an incubated project would be
> differentiated from an integrated project in one program.  This may have
> already been discussed by the TC.  For example, Red Hat doesn't
> officially support incubated projects, but we officially support (with
> our full sales/training/documentation/support, plus a whole bunch of
> other Red Hat internalisms) integrated projects.  OpenStack vendors need
> a way to let customers know (through an upstream page?) what the status
> of a project in a specific program is, so we can appropriately set
> expectations with the community and customers.

The reference list lives in the governance git repository:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron][ML2]

2014-03-07 Thread Édouard Thuleau
Yes, it sounds good to be able to load extensions from a mechanism driver.

But another problem I think we have with the ML2 plugin is the list of
extensions supported by default [1].
Extensions should only be loaded by MDs, and the ML2 plugin should only
implement the Neutron core API.

Any thoughts?
Édouard.

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki  wrote:

> Hi,
>
> I think it is better to continue the discussion here. It is a good log :-)
>
> Eugene and I talked about a related topic (allowing drivers to load
> extensions) at the Icehouse Summit,
> but I could not find enough time to work on it during Icehouse.
> I am still interested in implementing it and will register a blueprint on
> it.
>
> The etherpad from the Icehouse Summit has the baseline thinking on how to achieve it.
> https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
> I hope it is a good start point of the discussion.
>
> Thanks,
> Akihiro
>
> On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti 
> wrote:
> > Hi Kyle,
> >
> > Just wanted to clarify: Should I continue using this mailing list to
> post my
> > questions/concerns about ML2? Please advise.
> >
> > Thanks,
> > Nader.
> >
> >
> >
> > On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery 
> > wrote:
> >>
> >> Thanks Edgar, I think this is the appropriate place to continue this
> >> discussion.
> >>
> >>
> >> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana 
> wrote:
> >>>
> >>> Nader,
> >>>
> >>> I would encourage you to first discuss the possible extension with the
> >>> ML2 team. Robert and Kyle are leading this effort and they have an IRC
> meeting
> >>> every week:
> >>> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
> >>>
> >>> Bring your concerns on this meeting and get the right feedback.
> >>>
> >>> Thanks,
> >>>
> >>> Edgar
> >>>
> >>> From: Nader Lahouti 
> >>> Reply-To: OpenStack List 
> >>> Date: Thursday, March 6, 2014 12:14 PM
> >>> To: OpenStack List 
> >>> Subject: Re: [openstack-dev] [Neutron][ML2]
> >>>
> >>> Hi Aaron,
> >>>
> >>> I appreciate your reply.
> >>>
> >>> Here are some more details on what I'm trying to do:
> >>> I need to add a new attribute to the network resource using extensions
> >>> (i.e. network config profile) and use it in the mechanism driver (in the
> >>> create_network_precommit/postcommit).
> >>> If I use the current implementation of Ml2Plugin, when a call is made to
> >>> the mechanism driver's create_network_precommit/postcommit the new
> >>> attribute is not included in the 'mech_context'.
> >>> Here is code from Ml2Plugin:
> >>> class Ml2Plugin(...):
> >>> ...
> >>>def create_network(self, context, network):
> >>> net_data = network['network']
> >>> ...
> >>> with session.begin(subtransactions=True):
> >>> self._ensure_default_security_group(context, tenant_id)
> >>> result = super(Ml2Plugin, self).create_network(context,
> >>> network)
> >>> network_id = result['id']
> >>> ...
> >>> mech_context = driver_context.NetworkContext(self, context,
> >>> result)
> >>>
> self.mechanism_manager.create_network_precommit(mech_context)
> >>>
> >>> I also need to include the new extension in
>  _supported_extension_aliases.
> >>>
> >>> So to avoid changes in the existing code, I was going to create my own
> >>> plugin (which will be very similar to Ml2Plugin) and use it as
> core_plugin.
> >>>
> >>> Please advise on the right way to implement that.
> >>>
> >>> Regards,
> >>> Nader.
> >>>
> >>>
> >>> On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen 
> >>> wrote:
> 
>  Hi Nader,
> 
>  Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one
>  plugin in another. I'm guessing you probably want to write a driver that ML2
> can use,
>  though it's hard to tell from the information you've provided what
> you're
>  trying to do.
> 
>  Best,
> 
>  Aaron
> 
> 
>  On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti <
> nader.laho...@gmail.com>
>  wrote:
> >
> > Hi All,
> >
> > I have a question regarding ML2 plugin in neutron:
> > My understanding is that, 'Ml2Plugin' is the default core_plugin for
> > neutron ML2. We can use either the default plugin or our own plugin
> (i.e.
> > my_ml2_core_plugin that can be inherited from Ml2Plugin) and use it
> as
> > core_plugin.
> >
> > Is my understanding correct?
> >
> >
> > Regards,
> > Nader.
> >
> >
> 
> 
> 
> >>>
> >>> 

Re: [openstack-dev] Climate Incubation Application

2014-03-07 Thread Thierry Carrez
Dina Belova wrote:
> I'd like to request Climate project review for incubation. Here is
> official incubation application:
> 
> https://wiki.openstack.org/wiki/Climate/Incubation

After watching this thread a bit, it looks like this is more a
preemptive "where do I fit" advice request than a formal incubation request.

These are interesting questions, and useful answers to projects. We (the
TC) may need an avenue for projects to request such feedback without
necessarily engaging in a formal incubation request...

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Yuriy Taraday
Hello.

On Wed, Mar 5, 2014 at 6:42 PM, Miguel Angel Ajo wrote:

> 2) What alternatives can we think about to improve this situation.
>
>0) already being done: coalescing system calls. But I'm unsure that's
> enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60
> ~=3 minutes overhead on a 10min operation).
>
>a) Rewriting rules into sudo (to the extent that it's possible), and
> live with that.
>b) How secure is neutron about command injection to that point? How
> much is user input filtered on the API calls?
>c) Even if "b" is ok , I suppose that if the DB gets compromised, that
> could lead to command injection.
>
>d) Re-writing rootwrap into C (it's 600 python LOCs now).

>    e) Doing the command filtering at the neutron side, as a library, and live
> with sudo with simple filtering. (we kill the python/rootwrap startup
> overhead).
>

Another option would be to allow rootwrap to run in daemon mode and provide
an RPC interface. This way Neutron can spawn rootwrap (with its CPython
startup overhead) once and send new commands to be run later over a UNIX
socket.
This way we won't need to learn a new language (C/C++) or adopt a new toolchain
(RPython, Cython, whatever else), and we still get a secure way to run commands
with root privileges.
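
(A minimal sketch of the daemon idea; the socket path and wire format below
are assumptions, not an actual rootwrap design:)

    import json
    import os
    import socket
    import subprocess

    SOCK_PATH = '/var/run/neutron/rootwrap.sock'  # hypothetical path

    def serve():
        if os.path.exists(SOCK_PATH):
            os.unlink(SOCK_PATH)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK_PATH)
        os.chmod(SOCK_PATH, 0o600)  # restrict who may submit commands
        srv.listen(1)
        while True:
            conn, _addr = srv.accept()
            request = json.loads(conn.makefile().readline())
            # the existing rootwrap filter matching would run here, and
            # commands that match no filter would be rejected, as today
            proc = subprocess.Popen(request['cmd'],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            conn.sendall(json.dumps({'rc': proc.returncode,
                                     'stdout': out,
                                     'stderr': err}) + '\n')
            conn.close()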

-- 

Kind regards, Yuriy.


[openstack-dev] Session suggestions for the Juno Design Summit now open

2014-03-07 Thread Thierry Carrez
Hi everyone,

TL;DR:
The session suggestion website for the Juno Design Summit (which will
happen at the OpenStack Summit in Atlanta) is now open at:
http://summit.openstack.org/

Long version:

The "Juno Design Summit" is a specific event part of the overall
"OpenStack Summit" in Atlanta. It is different from classic tracks in
a number of ways.

* It starts on Tuesday morning and ends on Friday evening.

* There are *no formal presentations or speakers*. The sessions at the
design summit are open discussions between contributors on a specific
development topic for the upcoming development cycle, generally
moderated by the PTL or the person who proposed the session. While it is
possible to prepare a few slides to introduce the current status and
kick-off the discussion, these should never be formal
speaker-to-audience presentations. If that's what you're after, the
presentations in the other tracks of the OpenStack Summit are for you.

* There is no community voting on the content. The Juno Design Summit is
split into multiple topics (one for each official OpenStack Program),
and the elected program PTL will be ultimately responsible for selecting
the content he deems important for the upcoming cycle. If you want to be
PTL in place of the PTL, we'll be holding elections for that in the
coming weeks :)

With all this in mind, please feel free to suggest topics of discussion
for this event. The website to do this is open at:

http://summit.openstack.org/

You'll need to go through Launchpad SSO to log on that site (same auth
we use for review.openstack.org and all our core development
infrastructure). If you're lost, try the Help link at the bottom of the
page. If all else fails, send me an email.

Please take extra care when selecting the topic your suggestion belongs
in. You can see the complete list of topics at:

https://wiki.openstack.org/wiki/Summit/Juno

We have two *new* categories this time around:

"Cross-project workshops"
Those will be used to discuss topics which affect all OpenStack
projects, and therefore increase convergence and collaboration across
program barriers.

"Other projects"
Those will let unofficial, OpenStack-related, open source projects to
have a design discussion within the Design Summit area. We'll limit this
to one session per project to give room to as many projects as possible.

You have until *April 20* to suggest sessions. Proposed session topics
will be reviewed by PTLs afterwards, potentially merged with other
suggestions before being scheduled.

You can also comment on proposed sessions to suggest scheduling
constraints or sessions it could be merged with.

More information about the Juno Design Summit can be found at:
https://wiki.openstack.org/wiki/Summit

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-07 Thread Daniel P. Berrange
On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
> I'd like to request an FFE for the remaining patches in the Ephemeral
> RBD image support chain
> 
> https://review.openstack.org/#/c/59148/
> https://review.openstack.org/#/c/59149/
> 
> are still open after their dependency
> https://review.openstack.org/#/c/33409/ was merged.
> 
> These should be low risk as:
> 1. We have been testing with this code in place.
> 2. It's nearly all contained within the RBD driver.
> 
> This is needed as it implements an essential functionality that has
> been missing in the RBD driver and this will become the second release
> it's been attempted to be merged into.

Add me as a sponsor.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Daniel P. Berrange
On Thu, Mar 06, 2014 at 03:46:24PM -0600, James Carey wrote:
> Please consider a FFE for i18n Message improvements: 
> BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages
>  
> The base enablement for lazy translation has already been sync'd from 
> oslo.   This patch was to enable lazy translation support in Nova.  It is 
> titled re-enable lazy translation because this was enabled during Havana 
> but was pulled due to issues that have since been resolved.
> 
> In order to enable lazy translation it is necessary to do the 
> following things:
> 
>   (1) Fix a bug in oslo with respect to how keywords are extracted from 
> the format strings when saving replacement text for use when the message 
> translation is done.   This is 
> https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working 
> on a fix for in oslo.  Once that is complete it will need to be sync'd 
> into nova.
> 
>   (2) Remove concatenation (+) of translatable messages.  The current 
> class that is used to hold the translatable message (gettextutils.Message) 
> does not support concatenation.  There were a few cases in Nova where this 
> was done and they are coverted to other means of combining the strings in:
> https://review.openstack.org/#/c/78095 Remove use of concatenation on 
> messages
> 
>   (3) Remove the use of str() on exceptions.  The intent of this is to 
> return the message contained in the exception, but these messages may 
> contain unicode, so str cannot be used on them and gettextutils.Message 
> enforces this.  Thus these need
> to either be removed and allow python formatting to do the right thing, or 
> changed to unicode().  Since unicode() will change to str() in Py3, the 
> forward compatible six.text_type() is used instead.  This is done in: 
> https://review.openstack.org/#/c/78096 Remove use of str() on exceptions
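
(For illustration, the kind of change points (2) and (3) describe looks
roughly like the sketch below; the exception and format strings are
examples, not the actual hunks from the reviews above:)

    import six
    from nova.openstack.common.gettextutils import _

    def describe_failure(exc, device):
        # (3) str(exc) can raise UnicodeEncodeError when the translated
        # message contains non-ASCII; six.text_type is safe on Py2 and Py3
        reason = six.text_type(exc)
        # (2) gettextutils.Message does not support '+', so combine
        # translatable parts through a single format string instead
        return _('Failed on %(device)s: %(reason)s') % {'device': device,
                                                        'reason': reason}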

Since we're not supporting Py3 anytime soon, this patch does not seem
important enough to justify an FFE. Any time we do Py3 portability fixes
I'd also expect some hacking check that validates
we won't regress on that in the future.

>   (4) The addition of the call that enables the use of lazy messages. This 
> is in:
> https://review.openstack.org/#/c/73706 Re-enable lazy translation.
> 
> Lazy translation has been enabled in the other projects so it would be 
> beneficial to be consistent with the other projects with respect to 
> message translation.  I have tested that the changes in (2) and (3) work 
> when lazy translation is not enabled.  Thus if a problem is found, the two 
> line change in (4) could be removed to get to the previous behavior. 
> 
> I've been talking to Matt Riedemann and Dan Berrange about this.  Matt 
> has agreed to be a sponsor.

I'll be a sponsor once there is a clear solution proposed to the bug
in point 1.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we should introduce this feature?

2014-03-07 Thread Qin Zhao
Hi Joe,
Maybe my example is very rare. However, I think a new type of 'in-place'
snapshot will have other advantages. For instance, the hypervisor can
save the memory content in the snapshot file, so that the user can revert
his VM to a running state. In this way, the user does not need to start each
application again. Everything is there. The user can continue his work very
easily. If the user spawns and boots a new VM, he will need a lot of
time to resume his work. Does that make sense?


On Fri, Mar 7, 2014 at 2:20 PM, Joe Gordon  wrote:

> On Wed, Mar 5, 2014 at 11:45 AM, Qin Zhao  wrote:
> > Hi Joe,
> > For example, I used to use a private cloud system, which will calculate
> > charges bi-weekly, and its charging formula looks like "Total_charge =
> > Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
> > Volume_number*C4".  Those Instance/Image/Volume numbers are the numbers of
> > those objects that the user created within these two weeks. And it also has
> > quotas to limit total image size and total volume size. That formula is not
> > very exact, but you can see that it regards each of my 'create'
> > operations as a 'ticket', and will charge all those tickets, plus the
> > instance duration
>
> Charging for VM creation is not very cloud-like.  Cloud
> instances should be treated as ephemeral, as something that you can
> throw away and recreate at any time.  Additionally, clouds should charge
> for resources used (instance CPU hours, network load, etc.), and not for API
> calls (at least not in any meaningful amount).
>
> > fee. In order to reduce the expenses of my department, I am asked not to
> > create instances very frequently, and not to create too many images and
> > volumes. The image quota is not very big. And I would never be permitted to
> > exceed the quota, since that requires additional dollars.
> >
> >
> > On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon 
> wrote:
> >>
> >> On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao  wrote:
> >> > Hi Joe,
> >> > If we assume the user is willing to create a new instance, the
> workflow
> >> > you
> >> > are saying is exactly correct. However, what I am assuming is that the
> >> > user
> >> > is NOT willing to create a new instance. If Nova can revert the
> existing
> >> > instance, instead of creating a new one, it will become the
> alternative
> >> > way
> >> > utilized by those users who are not allowed to create a new instance.
> >> > Both paths lead to the target. I think we can not assume all the
> people
> >> > should walk through path one and should not walk through path two.
> Maybe
> >> > creating new instance or adjusting the quota is very easy in your
> point
> >> > of
> >> > view. However, the real use case is often limited by business process.
> >> > So I
> >> > think we may need to consider that some users can not or are not
> allowed
> >> > to
> >> > creating the new instance under specific circumstances.
> >> >
> >>
> >> What sort of circumstances would prevent someone from deleting and
> >> recreating an instance?
> >>
> >> >
> >> > On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon 
> >> > wrote:
> >> >>
> >> >> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao  wrote:
> >> >> > Hi Joe, my meaning is that cloud users may not hope to create new
> >> >> > instances
> >> >> > or new images, because those actions may require additional
> approval
> >> >> > and
> >> >> > additional charging. Or, due to instance/image quota limits, they
> can
> >> >> > not do
> >> >> > that. Anyway, from user's perspective, saving and reverting the
> >> >> > existing
> >> >> > instance will be preferred sometimes. Creating a new instance will
> be
> >> >> > another story.
> >> >> >
> >> >>
> >> >> Are you saying some users may not be able to create an instance at
> >> >> all? If so why not just control that via quotas.
> >> >>
> >> >> Assuming the user has the rights and quota to create one
> >> >> instance and one snapshot, your proposed idea is only slightly
> >> >> different then the current workflow.
> >> >>
> >> >> Currently one would:
> >> >> 1) Create instance
> >> >> 2) Snapshot instance
> >> >> 3) Use instance / break instance
> >> >> 4) delete instance
> >> >> 5) boot new instance from snapshot
> >> >> 6) goto step 3
> >> >>
> >> >> From what I gather you are saying that instead of 4/5 you want the
> >> >> user to be able to just reboot the instance. I don't think such a
> >> >> subtle change in behavior is worth a whole new API extension.
> >> >>
> >> >> >
> >> >> > On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon 
> >> >> > wrote:
> >> >> >>
> >> >> >> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao 
> wrote:
> >> >> >> > I think the current snapshot implementation can be a solution
> >> >> >> > sometimes,
> >> >> >> > but
> >> >> >> > it is NOT exact same as user's expectation. For example, a new
> >> >> >> > blueprint
> >> >> >> > is
> >> >> >> > created last week,
> >> >> >> >
> >> >> >> >
> https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
> >> >>

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-07 Thread Eugene Nikanorov
I think a mini summit is no worse than the summit itself.
Everyone who wants to participate can join.
In fact, what we really need is a certain time span of focused work.
The ML and IRC meetings are ok; it's just that dedicated in-person meetings
(design sessions) could be more productive.
I'm wondering: what if such a mini-summit were held in Atlanta 1-3 days prior
to the OS summit?
That could save attendees a lot of time/money.

Thanks,
Eugene.



On Fri, Mar 7, 2014 at 9:51 AM, Mark McClain  wrote:

>
> On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:
>
> > On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
> >> +1
> >>
> >> I think if we can have it before the Juno summit, we can take
> >> concrete, well thought-out proposals to the community at the summit.
> >
> > Unless something has changed starting at the Hong Kong design summit
> > (which unfortunately I was not able to attend), the design summits have
> > always been a place to gather to *discuss* and *debate* proposed
> > blueprints and design specs. It has never been about a gathering to
> > rubber-stamp proposals that have already been hashed out in private
> > somewhere else.
>
> You are correct that is the goal of the design summit.  While I do think
> it is wise to discuss the next steps with LBaaS at this point in time, I am
> not a proponent of in person mini-design summits.  Many contributors to
> LBaaS are distributed all over the globe, and scheduling a mini summit
> with short notice will exclude valuable contributors to the team.  I'd
> prefer to see an open process with discussions on the mailing list and
> specially scheduled IRC meetings to discuss the ideas.
>
> mark
>
>


Re: [openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-07 Thread Édouard Thuleau
+1
I think it should merge as experimental for Icehouse, to let the community
try it and stabilize it during the Juno cycle. And for the Juno
release, we will be able to announce it as stable.

Furthermore, the next work will be to distribute the l3 stuff at the edge
(compute) (called DVR), but this VRRP work will still be needed for that [1].
So if we merge L3 HA VRRP as experimental in I, to be stable in J, we
could also propose an experimental DVR solution for J and a stable one for K.

[1]
https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit

Regards,
Édouard.


On Thu, Mar 6, 2014 at 4:27 PM, Sylvain Afchain <
sylvain.afch...@enovance.com> wrote:

> Hi all,
>
> I would like to request a FFE for the following patches of the L3 HA VRRP
> BP :
>
> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>
> https://review.openstack.org/#/c/64553/
> https://review.openstack.org/#/c/66347/
> https://review.openstack.org/#/c/68142/
> https://review.openstack.org/#/c/70700/
>
> These should be low risk since HA is not enabled by default.
> The server side code has been developed as an extension which minimizes
> risk.
> The agent side code introduces a bit more changes but only to filter
> whether to apply the
> new HA behavior.
>
> I think it's a good idea to have this feature in Icehouse, perhaps even
> marked as experimental,
> especially considering the demand for HA in real world deployments.
>
> Here is a doc to test it :
>
>
> https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug
>
> -Sylvain
>
>


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Alexis Lee
Sean Dague said on Thu, Mar 06, 2014 at 01:05:15PM -0500:
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code
> shows up in the code review. So -1s and -2s on code review are a mix of
> design and code review. A big part of which is that design was never in
> any way sufficiently reviewed before the code started.

+1 to the idea.

Would it be useful to require a single bp for changes to contended
files, e.g. the ComputeNode class definition? If the fields to be added
had been agreed up-front, some considerable rebasing could have been
avoided.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.



Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-03-07 Thread Chmouel Boudjnah
If people like this, why don't we have it directly on the reviews?

Chmouel.


On Tue, Mar 4, 2014 at 10:00 PM, Carl Baldwin  wrote:

> Nachi,
>
> Great!  I'd been meaning to do something like this.  I took yours and
> tweaked it a bit to highlight failed Jenkins builds in red and grey
> other Jenkins messages.  Human reviews are left in blue.
>
> javascript:(function(){
> list = document.querySelectorAll('td.GJEA35ODGC');
> for(i in list) {
> title = list[i];
> if(! title.innerHTML) { continue; }
> text = title.nextSibling;
> if (text.innerHTML.search('Build failed') > 0) {
> title.style.color='red'
> } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >=
> 0) {
> title.style.color='#666666'
> } else {
> title.style.color='blue'
> }
> }
> })()
>
> Carl
>
> On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno  wrote:
> > Hi folks
> >
> > I wrote an bookmarklet for neutron gerrit review.
> > This bookmarklet make the comment title for 3rd party ci as gray.
> >
> > javascript:(function(){list =
> > document.querySelectorAll('td.GJEA35ODGC'); for(i in
> > list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
> > list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
> > 0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()
> >
> > enjoy :)
> > Nachi
> >


[openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-07 Thread Jiří Stránský

Hi,

there's one step in cloud initialization that is performed over SSH -- 
calling "keystone-manage pki_setup". Here's the relevant code in 
keystone-init [1], here's a review for moving the functionality to 
os-cloud-config [2].
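
(Roughly, that SSH step boils down to something like the sketch below; the
address is a placeholder, and the linked keystone-init script [1] shows the
real invocation:)

    import subprocess

    subprocess.check_call([
        'ssh', '-o', 'StrictHostKeyChecking=no', 'root@192.0.2.5',
        'keystone-manage', 'pki_setup',
        '--keystone-user', 'keystone', '--keystone-group', 'keystone',
    ])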


The consequence of this is that Tuskar will need a passwordless ssh key to
access the overcloud controller. I consider this suboptimal for two reasons:


* It creates another security concern.

* AFAIK nova is only capable of injecting one public SSH key into 
authorized_keys on the deployed machine, which means we can either give 
it Tuskar's public key and allow Tuskar to initialize overcloud, or we 
can give it admin's custom public key and allow admin to ssh into 
overcloud, but not both. (Please correct me if i'm mistaken.) We could 
probably work around this issue by having Tuskar do the user key 
injection as part of os-cloud-config, but it's a bit clumsy.



This goes outside the scope of my current knowledge; I'm hoping someone
knows the answer: Could pki_setup be run by combining the powers of Heat and
os-config-refresh? (I presume there's some reason why we're not doing
this already.) I think it would help us a good bit if we could avoid
having to SSH from Tuskar to the overcloud.



Thanks

Jirka


[1] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/init-keystone#L85-L86

[2] https://review.openstack.org/#/c/78148/



[openstack-dev] I am taking sick leave for today

2014-03-07 Thread Dmitry Mescheryakov
Colleagues,

Today I am again taking a day off due to illness. I have started to recover,
but I still feel shaky. I hope to be fully well by Tuesday.

Dmitry


Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-07 Thread Zhenguo Niu
+1

On Friday, March 7, 2014, Akihiro Motoki  wrote:

> +1 from me too.
> Having a core member who is familiar with horizon and tuskar-ui code
> base encourages the integration more!
>
> Akihiro
>
> On Fri, Mar 7, 2014 at 4:56 PM, Matthias Runge 
> >
> wrote:
> >
> > On Wed, Mar 05, 2014 at 10:36:22PM +, Lyle, David wrote:
> >> I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his
> reviews very insightful and more importantly have come to rely on their
> quality. He has contributed to several areas in Horizon and he understands
> the code base well.  Radomir is also very active in tuskar-ui both
> contributing and reviewing.
> >>
> > +1 from me, I fully support this. Radomir has done an impressive job
> > and his reviews and contributions have been good since he started.
> >
> > Matthias
> > - --
> > Matthias Runge >
> >
>


-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Sean Dague
On 03/06/2014 04:46 PM, James Carey wrote:
> Please consider a FFE for i18n Message improvements:  
> BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages
>  
> The base enablement for lazy translation has already been sync'd
> from oslo.   This patch was to enable lazy translation support in Nova.
>  It is titled re-enable lazy translation because this was enabled during
> Havana but was pulled due to issues that have since been resolved.
> 
> In order to enable lazy translation it is necessary to do the
> following things:
> 
>   (1) Fix a bug in oslo with respect to how keywords are extracted from
> the format strings when saving replacement text for use when the message
> translation is done.   This is
> https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
> on a fix for in oslo.  Once that is complete it will need to be sync'd
> into nova.
> 
>   (2) Remove concatenation (+) of translatable messages.  The current
> class that is used to hold the translatable message
> (gettextutils.Message) does not support concatenation.  There were a few
> cases in Nova where this was done and they are coverted to other means
> of combining the strings in:
> https://review.openstack.org/#/c/78095 Remove use of concatenation on
> messages
> 
>   (3) Remove the use of str() on exceptions.  The intent of this is to
> return the message contained in the exception, but these messages may
> contain unicode, so str cannot be used on them and gettextutils.Message
> enforces this.  Thus these need
> to either be removed and allow python formatting to do the right thing,
> or changed to unicode().  Since unicode() will change to str() in Py3,
> the forward compatible six.text_type() is used instead.  This is done in:  
> https://review.openstack.org/#/c/78096 Remove use of str() on exceptions
> 
>   (4) The addition of the call that enables the use of lazy messages.
>  This is in:
> https://review.openstack.org/#/c/73706 Re-enable lazy translation.
> 
> Lazy translation has been enabled in the other projects so it would
> be beneficial to be consistent with the other projects with respect to
> message translation.  

Unless it has landed in *every other* integrated project besides Nova, I
don't find this compelling.

> I have tested that the changes in (2) and (3) work
> when lazy translation is not enabled.  Thus if a problem is found, the
> two line change in (4) could be removed to get to the previous behavior.
> 
> I've been talking to Matt Riedemann and Dan Berrange about this.
>  Matt has agreed to be a sponsor.

If this is enabled in other projects, where is the Tempest scenario test
that actually demonstrates that this is working on real installs?

I get that everyone has features that didn't hit. However, now is not
the time for that; now is the time for people to get focused on bug
hunting. And especially if we are talking about *another* oslo sync.

-1

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] I am taking sick leave for today

2014-03-07 Thread Dmitry Mescheryakov
Oops, sorry, wrong recipient :-)

On March 7, 2014 at 14:13, Dmitry Mescheryakov
 wrote:
> Colleagues,
>
> Today I am again taking a day off due to illness. I have started to recover,
> but I still feel shaky. I hope to be fully well by Tuesday.
>
> Dmitry


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Daniel P. Berrange
On Thu, Mar 06, 2014 at 01:05:15PM -0500, Sean Dague wrote:
> One of the issues that the Nova team has definitely hit is Blueprint
> overload. At some point there were over 150 blueprints. Many of them
> were a single sentence.
> 
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code
> shows up in the code review. So -1s and -2s on code review are a mix of
> design and code review. A big part of which is that design was never in
> any way sufficiently reviewed before the code started.
> 
> In today's Nova meeting a new thought occurred. We already have Gerrit
> which is good for reviewing things. It gives you detailed commenting
> abilities, voting, and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an
> etherpad, or launchpad + a wiki page) we could do something like follows:
> 
> 1. create bad blueprint
> 2. create gerrit review with detailed proposal on the blueprint
> 3. iterate in gerrit working towards blueprint approval
> 4. once approved copy back the approved text into the blueprint (which
> should now be sufficiently detailed)
> 
> Basically blueprints would get design review, and we'd be pretty sure we
> liked the approach before the blueprint is approved. This would
> hopefully reduce the late design review in the code reviews that's
> happening a lot now.
> 
> There are plenty of niggly details that would be need to be worked out
> 
>  * what's the basic text / template format of the design to be reviewed
> (probably want a base template for folks to just keep things consistent).
>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> Enhancement Proposals), or is it happening in a separate gerrit tree.
>  * are there timelines for blueprint approval in a cycle? after which
> point, we don't review any new items.
> 
> Anyway, plenty of details to be sorted. However we should figure out if
> the big idea has support before we sort out the details on this one.
> 
> Launchpad blueprints will still be used for tracking once things are
> approved, but this will give us a standard way to iterate on that
> content and get to agreement on approach.

As someone who has complained about the awfulness of our blueprint
design docs & process many times, I'd welcome this effort to ensure
that we actually do detailed review. In concert with this I'd suggest
that we set up some kind of bot that would automatically add a -2
to any patch which is submitted where the linked blueprint is not
already approved. This would make it very clear to people who just
submit patches and create a blueprint just as a "tick box" for
process compliance, that they're doing it wrong. It would also make
it clear to reviewers that they shouldn't waste their time on patches
which are not approved.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [savanna] Feature Freeze and end of the I cycle

2014-03-07 Thread Sergey Lukjanov
I'd like to add that we should concentrate on testing and bug fixing
for the next several weeks. This period will include the project renaming
too...
The release-critical bugs will be tracked on the icehouse-rc1 [0]
milestone page. When all such issues are fixed, the first release
candidate will be cut and development (master branch) will be
open for Juno dev.

Final coordinated release is expected on April 17th.

You can find more details about the OpenStack development cycle in [1].

[0] https://launchpad.net/savanna/+milestone/icehouse-rc1
[1] https://wiki.openstack.org/wiki/Release_Cycle

Thanks.

On Fri, Mar 7, 2014 at 2:00 AM, Sergey Lukjanov  wrote:
> Hi Savanna folks,
>
> Feature Freeze (FF) for Savanna is now in effect. Feature Freeze
> Exceptions (FFE) are allowed and can be approved by me as the PTL. So,
> for now there are several things that we could land before the RC1:
>
> * project rename;
> * unit / integration tests addition;
> * docs addition / improvement;
> * fixes / improvements for Hadoop 2 support in all three plugins -
> Vanilla, HDP and IDH due to the fact that this changes are
> self-contained, it doesn't include any refactoring of code outside of
> the plugins.
>
> Re plans for the end of the cycle - we should rename our project before the
> first RC. Here is the schedule -
> https://wiki.openstack.org/wiki/Icehouse_Release_Schedule. Due to
> some potential issues with renaming, we'll probably postpone the first RC
> by one week.
>
> P.S. Note for the savanna-core team: please don't approve changes
> that don't fit the FFE'd features.
> P.P.S. There is an awesome explanation of why and how we're FF -
> http://fnords.wordpress.com/2014/03/06/why-we-do-feature-freeze/.
>
> Thanks.
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-07 Thread Chen CH Ji
I did a code review and found that the function _reclaim_queued_deletes does
the soft_delete reclaim.

If you set reclaim_instance_interval > 0, then delete will be a soft delete
and the instance will be reclaimed once it's old enough.
By default reclaim_instance_interval is 0, so delete will be a hard delete;
a user can trigger a force_delete action to delete the instance right away.
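
(For reference, the behavior above is driven by a single nova.conf option;
the value below is just an illustration:)

    [DEFAULT]
    # 0 (the default) means hard delete; any positive value enables soft
    # delete, and the periodic task _reclaim_queued_deletes purges
    # instances whose deletion is older than this many seconds
    reclaim_instance_interval = 7200

A user who wants an instance gone immediately can still run
"nova force-delete <server>".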

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   "zhangyu (AI)" 
To: "OpenStack Development Mailing List (not for usage questions)"
,
Date:   03/07/2014 09:09 AM
Subject:Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete  protection



After looking into Nova code base, I found there is surely a soft_delete()
method in the ComputeDriver() class. Furthermore,
Xenapi (and only Xenapi) has implemented this method, which finally applies
a hard_shutdown_vm() operation to the instance to be deleted.
If I understand it correctly, it means the instance is in fact shutdown,
instead of being deleted. Later, the user can decide whether to restore it
or not.

My question is that, when and how is the soft_deleted instance truly
deleted? A user needs to trigger a real delete operation on it explicitly,
doesn't he?

Not sure why other drivers, especially libvirt, did not implement such
a feature...

Thanks~

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: Thursday, March 06, 2014 8:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
protection

On 6 March 2014 08:50, zhangyu (AI)  wrote:
> It seems to be an interesting idea. In fact, a China-based public
> IaaS, QingCloud, has provided a similar feature to their virtual
> servers. Within 2 hours after a virtual server is deleted, the server
owner can decide whether or not to cancel this deletion and recycle that
"deleted" virtual server.
>
> People make mistakes, while such a feature helps in urgent cases. Any
idea here?

Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
> protection
>
> Hi all,
>
> Currently OpenStack provides the delete volume function to the user,
> but it seems there is no protection against accidental deletion.
>
> As we know, the data in a volume may be very important and valuable.
> So it's better to provide the user with a method to avoid accidental volume
deletion.
>
> Such as:
> We can provide a safe delete for the volume.
> The user can specify how long the volume's deletion will be delayed (before
it is actually deleted) when deleting the volume.
> Before the volume is actually deleted, the user can cancel the delete
operation and recover the volume.
> After the specified time, the volume is actually deleted by the
system.
>
> Any thoughts? Any advice is welcome.
>
> Best regards to you.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Session suggestions for the Juno Design Summit now open

2014-03-07 Thread Sergey Lukjanov
Hi Thierry,

thanks for opening session suggestions for the Design Summit. Could you
please rename the Savanna topic to "Sahara (ex. Savanna)"?

On Fri, Mar 7, 2014 at 1:17 PM, Thierry Carrez  wrote:
> Hi everyone,
>
> TL;DR:
> The session suggestion website for the Juno Design Summit (which will
> happen at the OpenStack Summit in Atlanta) is now open at:
> http://summit.openstack.org/
>
> Long version:
>
> The "Juno Design Summit" is a specific event part of the overall
> "OpenStack Summit" in Atlanta. It is different from classic tracks in
> a number of ways.
>
> * It starts on Tuesday morning and ends on Friday evening.
>
> * There are *no formal presentations or speakers*. The sessions at the
> design summit are open discussions between contributors on a specific
> development topic for the upcoming development cycle, generally
> moderated by the PTL or the person who proposed the session. While it is
> possible to prepare a few slides to introduce the current status and
> kick-off the discussion, these should never be formal
> speaker-to-audience presentations. If that's what you're after, the
> presentations in the other tracks of the OpenStack Summit are for you.
>
> * There is no community voting on the content. The Juno Design Summit is
> split into multiple topics (one for each official OpenStack Program),
> and the elected program PTL will be ultimately responsible for selecting
> the content he deems important for the upcoming cycle. If you want to be
> PTL in place of the PTL, we'll be holding elections for that in the
> coming weeks :)
>
> With all this in mind, please feel free to suggest topics of discussion
> for this event. The website to do this is open at:
>
> http://summit.openstack.org/
>
> You'll need to go through Launchpad SSO to log on that site (same auth
> we use for review.openstack.org and all our core development
> infrastructure). If you're lost, try the Help link at the bottom of the
> page. If all else fails, send me an email.
>
> Please take extra care when selecting the topic your suggestion belongs
> in. You can see the complete list of topics at:
>
> https://wiki.openstack.org/wiki/Summit/Juno
>
> We have two *new* categories this time around:
>
> "Cross-project workshops"
> Those will be used to discuss topics which affect all OpenStack
> projects, and therefore increase convergence and collaboration across
> program barriers.
>
> "Other projects"
> Those will let unofficial, OpenStack-related, open source projects
> have a design discussion within the Design Summit area. We'll limit this
> to one session per project to give room to as many projects as possible.
>
> You have until *April 20* to suggest sessions. Proposed session topics
> will be reviewed by PTLs afterwards, potentially merged with other
> suggestions before being scheduled.
>
> You can also comment on proposed sessions to suggest scheduling
> constraints or sessions it could be merged with.
>
> More information about the Juno Design Summit can be found at:
> https://wiki.openstack.org/wiki/Summit
>
> Cheers,
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 答复: [OSSN] Live migration instructions recommend unsecured libvirt remote access

2014-03-07 Thread Luohao (brian)
Nathan, 

I have another idea for allowing VM migration without the need for remote
access to the libvirt daemon between compute servers.

As far as I know, the VM migration data path can be independent of the libvirt
daemon. For example, libvirt supports delegating VM migration to the
hypervisor. When a VM migration is required, nova can prepare an ephemeral
migration service on the destination node, and then launch the connection from
the source node to the destination node to perform the migration. All of this
can be done by local libvirt calls on the individual compute nodes.

-Hao
 
-----Original Message-----
From: Nathan Kinder [mailto:nkin...@redhat.com]
Sent: March 7, 2014 3:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [OSSN] Live migration instructions recommend unsecured
libvirt remote access

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Live migration instructions recommend unsecured libvirt remote access
- ---

### Summary ###
When using the KVM hypervisor with libvirt on OpenStack Compute nodes, live 
migration of instances from one Compute server to another requires that the 
libvirt daemon is configured for remote network connectivity.
The libvirt daemon configuration recommended in the OpenStack Configuration 
Reference manual configures libvirtd to listen for incoming TCP connections on 
all network interfaces without requiring any authentication or using any 
encryption.  This insecure configuration allows for anyone with network access 
to the libvirt daemon TCP port on OpenStack Compute nodes to control the 
hypervisor through the libvirt API.

### Affected Services / Software ###
Nova, Compute, KVM, libvirt, Grizzly, Havana, Icehouse

### Discussion ###
The default configuration of the libvirt daemon is to not allow remote access.  
Live migration of running instances between OpenStack Compute nodes requires 
libvirt daemon remote access between OpenStack Compute nodes.

The libvirt daemon should not be configured to allow unauthenticated remote 
access.  The libvirt daemon  has a choice of 4 secure options for remote access 
over TCP.  These options are:

 - SSH tunnel to libvirtd's UNIX socket
 - libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption
 - libvirtd TCP socket, with TLS for encryption and x.509 client
   certificates for authentication
 - libvirtd TCP socket, with TLS for encryption and Kerberos for
   authentication

It is not necessary for the libvirt daemon to listen for remote TCP connections 
on all interfaces.  Remote network connectivity to the libvirt daemon should be 
restricted as much as possible.  Remote access is only needed between the 
OpenStack Compute nodes, so the libvirt daemon only needs to listen for remote 
TCP connections on the interface that is used for this communication.  A 
firewall can be configured to lock down access to the TCP port that the libvirt 
daemon listens on, but this does not sufficiently protect access to the libvirt 
API.  Other processes on a remote OpenStack Compute node might have network 
access, but should not be authorized to remotely control the hypervisor on 
another OpenStack Compute node.

### Recommended Actions ###
If you are using the KVM hypervisor with libvirt on OpenStack Compute nodes, 
you should review your libvirt daemon configuration to ensure that it is not 
allowing unauthenticated remote access.

Remote access to the libvirt daemon via TCP is configured by the "listen_tls", 
"listen_tcp", and "auth_tcp" configuration directives.  By default, these 
directives are all commented out.  This results in remote access via TCP being 
disabled.

If you do not need remote libvirt daemon access, you should ensure that the 
following configuration directives are set as follows in the 
/etc/libvirt/libvirtd.conf configuration file.  Commenting out these directives 
will have the same effect, as these values match the internal
defaults:

---- begin example libvirtd.conf snippet ----
listen_tls = 1
listen_tcp = 0
auth_tcp = "sasl"
---- end example libvirtd.conf snippet ----

If you need to allow remote access to the libvirt daemon between OpenStack 
Compute nodes for live migration, you should ensure that authentication is 
required.  Additionally, you should consider enabling TLS to allow remote 
connections to be encrypted.

The following libvirt daemon configuration directives will allow for 
unencrypted remote connections that use SASL for authentication:

---- begin example libvirtd.conf snippet ----
listen_tls = 0
listen_tcp = 1
auth_tcp = "sasl"
---- end example libvirtd.conf snippet ----

If you want to require TLS encrypted remote connections, you will have to 
obtain X.509 certificates and configure the libvirt daemon to use them to use 
TLS.  Details on this configuration are in the libvirt daemon documentation.  
Once the certificates are configured, you should set the following libvirt 
daemon configuration directives:

---- begin example libvirtd.conf snippet ----
listen_tls = 1

Re: [openstack-dev] Session suggestions for the Juno Design Summit now open

2014-03-07 Thread Thierry Carrez
Sergey Lukjanov wrote:
> thanks for opening sessions suggestions for Design Summit. Could you,
> please, rename Savanna topic to "Sahara (ex. Savanna)"?

Done!

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Design Summit session suggestions

2014-03-07 Thread Sergey Lukjanov
Hey Sahara (Savanna) folks,

Design Summit session suggestions are now open, so please add topics
you'd like to discuss at the summit. More info in [0].

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-March/029319.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Thierry Carrez
Sean Dague wrote:
> One of the issues that the Nova team has definitely hit is Blueprint
> overload. At some point there were over 150 blueprints. Many of them
> were a single sentence.
> 
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code
> shows up in the code review. So -1s and -2s on code review are a mix of
> design and code review. A big part of which is that design was never in
> any way sufficiently reviewed before the code started.
> 
> In today's Nova meeting a new thought occurred. We already have Gerrit
> which is good for reviewing things. It gives you detailed commenting
> abilities, voting, and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an
> etherpad, or launchpad + a wiki page) we could do something like follows:
> 
> 1. create bad blueprint
> 2. create gerrit review with detailed proposal on the blueprint
> 3. iterate in gerrit working towards blueprint approval
> 4. once approved copy back the approved text into the blueprint (which
> should now be sufficiently detailed)
> 
> Basically blueprints would get design review, and we'd be pretty sure we
> liked the approach before the blueprint is approved. This would
> hopefully reduce the late design review in the code reviews that's
> happening a lot now.
> 
> There are plenty of niggly details that would need to be worked out
> 
>  * what's the basic text / template format of the design to be reviewed
> (probably want a base template for folks to just keep things consistent).
>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> Enhancement Proposals), or is it happening in a separate gerrit tree.
>  * are there timelines for blueprint approval in a cycle? after which
> point, we don't review any new items.
> 
> Anyway, plenty of details to be sorted. However we should figure out if
> the big idea has support before we sort out the details on this one.
> 
> Launchpad blueprints will still be used for tracking once things are
> approved, but this will give us a standard way to iterate on that
> content and get to agreement on approach.

Sounds like an interesting experiment, and a timely one as we figure out
how to do blueprint approval in the future with StoryBoard.

I'm a bit skeptical that this can work without enforcing that changes
reference at least a bug or a blueprint, though. People who were too
lazy to create a single-sentence blueprint to cover for their feature
will definitely not go through a Gerrit-powered process, so the
temptation to fly your smallish features below the radar ("not worth
this whole blueprint approval thing") and just get them merged will be
high. I fear it will overall result in work being less tracked, rather
than more tracked.

FWIW we plan to enforce a bug reference / blueprint reference in changes
with StoryBoard, but it comes with autocreation of missing
bugs/blueprints (from the commit message) to lower the developer hassle.

That being said, don't let my skepticism go into the way of your
experimentation. We definitely need to improve in this area. I'd like to
have a cross-project session on feature planning/tracking at the Design
Summit, where we can brainstorm more ideas around this.

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] time.sleep is affected by eventlet.monkey_patch()

2014-03-07 Thread victor stinner
Hi,

Yes, a monotonic clock *must* be used to compute timeouts. I wrote the
time.monotonic() function of Python 3.3 and the PEP 418. For my Trollius 
project, I wrote this code for Python 2 using ctypes. It works on Linux, 
Windows, Mac OS X, OpenBSD, FreeBSD and Solaris:
https://bitbucket.org/enovance/trollius/src/5e45fc18fce0b34a47481916a274dace7f2964f4/asyncio/time_monotonic.py?at=trollius

There are existing Python modules, examples:
https://github.com/gavinbeatty/python-monotonic-time
https://github.com/ludios/Monoclock

Many other projects have their own implementation.

Read also PEP 418 for the background:
http://legacy.python.org/dev/peps/pep-0418/

FYI, on Linux since glibc 2.17, clock_gettime() is provided by libc. On
older glibc versions, it is provided by librt.
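
For reference, a minimal Python 2 sketch along those lines, using ctypes and
CLOCK_MONOTONIC on Linux (the constant and the library lookup are
platform-specific assumptions; see the links above for portable versions):

import ctypes
import ctypes.util
import os

CLOCK_MONOTONIC = 1  # Linux value; FreeBSD/Mac OS X use different constants

class timespec(ctypes.Structure):
    _fields_ = [('tv_sec', ctypes.c_long), ('tv_nsec', ctypes.c_long)]

# clock_gettime() lives in librt on older glibc, in libc since glibc 2.17
_lib = ctypes.CDLL(ctypes.util.find_library('rt') or 'libc.so.6',
                   use_errno=True)

def monotonic_time():
    t = timespec()
    if _lib.clock_gettime(CLOCK_MONOTONIC, ctypes.byref(t)) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return t.tv_sec + t.tv_nsec * 1e-9

A function like this can then be installed into eventlet's hub as Yuriy
described below.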

Victor

- Original Message -
> From: "Yuriy Taraday"
> To: "OpenStack Development Mailing List (not for usage questions)"
> Sent: Friday, March 7, 2014 08:52:56
> Subject: Re: [openstack-dev] time.sleep is affected by eventlet.monkey_patch()
> 
> On Fri, Mar 7, 2014 at 11:20 AM, Yuriy Taraday < yorik@gmail.com > wrote:
> 
> 
> 
> All in all it sounds like an eventlet bug. I'm not sure how it can be dealt
> with though.
> 
> Digging into it I found out that eventlet uses time.time() by default that is
> not monotonic. There's no clear way to replace it, but you can workaround
> this:
> 1. Get monotonic clock function here:
> http://stackoverflow.com/a/1205762/238308 (note that for FreeBSD or MacOS
> you'll have to use different constant).
> 2. Make eventlet's hub use it:
> eventlet.hubs._threadlocal.hub =
> eventlet.hubs.get_default_hub().Hub(monotonic_time)
> 
> --
> 
> Kind regards, Yuriy.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Daniel P. Berrange
On Fri, Mar 07, 2014 at 12:30:15PM +0100, Thierry Carrez wrote:
> Sean Dague wrote:
> > One of the issues that the Nova team has definitely hit is Blueprint
> > overload. At some point there were over 150 blueprints. Many of them
> > were a single sentence.
> > 
> > The results of this have been that design review today is typically not
> > happening on Blueprint approval, but is instead happening once the code
> > shows up in the code review. So -1s and -2s on code review are a mix of
> > design and code review. A big part of which is that design was never in
> > any way sufficiently reviewed before the code started.
> > 
> > In today's Nova meeting a new thought occurred. We already have Gerrit
> > which is good for reviewing things. It gives you detailed commenting
> > abilities, voting, and history. Instead of attempting (and usually
> > failing) on doing blueprint review in launchpad (or launchpad + an
> > etherpad, or launchpad + a wiki page) we could do something like follows:
> > 
> > 1. create bad blueprint
> > 2. create gerrit review with detailed proposal on the blueprint
> > 3. iterate in gerrit working towards blueprint approval
> > 4. once approved copy back the approved text into the blueprint (which
> > should now be sufficiently detailed)
> > 
> > Basically blueprints would get design review, and we'd be pretty sure we
> > liked the approach before the blueprint is approved. This would
> > hopefully reduce the late design review in the code reviews that's
> > happening a lot now.
> > 
> > There are plenty of niggly details that would be need to be worked out
> > 
> >  * what's the basic text / template format of the design to be reviewed
> > (probably want a base template for folks to just keep things consistent).
> >  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> > Enhancement Proposals), or is it happening in a separate gerrit tree.
> >  * are there timelines for blueprint approval in a cycle? after which
> > point, we don't review any new items.
> > 
> > Anyway, plenty of details to be sorted. However we should figure out if
> > the big idea has support before we sort out the details on this one.
> > 
> > Launchpad blueprints will still be used for tracking once things are
> > approved, but this will give us a standard way to iterate on that
> > content and get to agreement on approach.
> 
> Sounds like an interesting experiment, and a timely one as we figure out
> how to do blueprint approval in the future with StoryBoard.
> 
> I'm a bit skeptical that can work without enforcing that changes
> reference at least a bug or a blueprint, though. People who were too
> lazy to create a single-sentence blueprint to cover for their feature
> will definitely not go through a Gerrit-powered process, so the
> temptation to fly your smallish features below the radar ("not worth
> this whole blueprint approval thing") and just get them merged will be
> high. I fear it will overall result in work being less tracked, rather
> than more tracked.

It is fairly easy to spot when people submit things which are features
without a blueprint or bug listed. So as long as reviewers make a good
habit of rejecting such patches, I think people will get the message
fairly quickly.

> FWIW we plan to enforce a bug reference / blueprint reference in changes
> with StoryBoard, but it comes with autocreation of missing
> bugs/blueprints (from the commit message) to lower the developer hassle.
> 
> That being said, don't let my skepticism go into the way of your
> experimentation. We definitely need to improve in this area. I'd like to
> have a cross-project session on feature planning/tracking at the Design
> Summit, where we can brainstorm more ideas around this.

If nothing else, trying more formal review of blueprints in gerrit
for a cycle should teach us more about what we'll want storyboard
to be able to do in this area.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] asymmetric gating and stable vs unstable tests

2014-03-07 Thread Sergey Lukjanov
Hey Deva,

It really could be a huge issue for incubated projects.

I have no good ideas about how to solve it...

One (admittedly bad) idea is that we could extend the Jenkins success message
to include a note that is visible even if the comment isn't expanded,
or post a separate comment like "Please note that the following jobs are
marked as stable; please treat their failures accordingly"...

On Fri, Feb 28, 2014 at 12:52 AM, Devananda van der Veen
 wrote:
> Hi all,
>
> I'd like to point out how asymmetric gating is challenging for incubated
> projects, and propose that there may be a way to make it less so.
>
> For reference, incubated projects aren't allowed to have symmetric gating
> with integrated projects. This is why our devstack and tempest tests are
> "*-check-nv" in devstack and tempest, but "*-check" and "*-gate" in our
> pipeline. So, these jobs are stable from Ironic's point of view because
> we've been gating on them for the last month.
>
> Cut forward to this morning. A devstack patch [1] was merged and broke
> Ironic's gate because of a one-line issue in devstack/lib/ironic which I've
> since proposed a fix for [2]. This issue was visible in the non-voting check
> results before the patch was approved -- but those non-voting checks got
> ignored because of an assumption of instability (they must be non-voting for
> a reason, right?).
>
> I'm not suggesting we gate integrated projects on incubated projects, but I
> would like to point out that not all non-voting checks are non-voting
> *because they're unstable*. It would be great if there were a way to
> indicate that certain tests are voting for someone else and a failure
> actually matters to them.
>
> Thanks for listening,
> -Deva
>
>
> [1] https://review.openstack.org/#/c/71996/
>
> [2] https://review.openstack.org/#/c/76943/  -- It's been approved already,
> just waiting in the merge queue ...
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Sean Dague
On 03/07/2014 06:30 AM, Thierry Carrez wrote:
> Sean Dague wrote:
>> One of the issues that the Nova team has definitely hit is Blueprint
>> overload. At some point there were over 150 blueprints. Many of them
>> were a single sentence.
>>
>> The results of this have been that design review today is typically not
>> happening on Blueprint approval, but is instead happening once the code
>> shows up in the code review. So -1s and -2s on code review are a mix of
>> design and code review. A big part of which is that design was never in
>> any way sufficiently reviewed before the code started.
>>
>> In today's Nova meeting a new thought occurred. We already have Gerrit
>> which is good for reviewing things. It gives you detailed commenting
>> abilities, voting, and history. Instead of attempting (and usually
>> failing) on doing blueprint review in launchpad (or launchpad + an
>> etherpad, or launchpad + a wiki page) we could do something like follows:
>>
>> 1. create bad blueprint
>> 2. create gerrit review with detailed proposal on the blueprint
>> 3. iterate in gerrit working towards blueprint approval
>> 4. once approved copy back the approved text into the blueprint (which
>> should now be sufficiently detailed)
>>
>> Basically blueprints would get design review, and we'd be pretty sure we
>> liked the approach before the blueprint is approved. This would
>> hopefully reduce the late design review in the code reviews that's
>> happening a lot now.
>>
>> There are plenty of niggly details that would be need to be worked out
>>
>>  * what's the basic text / template format of the design to be reviewed
>> (probably want a base template for folks to just keep things consistent).
>>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
>> Enhancement Proposals), or is it happening in a separate gerrit tree.
>>  * are there timelines for blueprint approval in a cycle? after which
>> point, we don't review any new items.
>>
>> Anyway, plenty of details to be sorted. However we should figure out if
>> the big idea has support before we sort out the details on this one.
>>
>> Launchpad blueprints will still be used for tracking once things are
>> approved, but this will give us a standard way to iterate on that
>> content and get to agreement on approach.
> 
> Sounds like an interesting experiment, and a timely one as we figure out
> how to do blueprint approval in the future with StoryBoard.
> 
> I'm a bit skeptical that can work without enforcing that changes
> reference at least a bug or a blueprint, though. People who were too
> lazy to create a single-sentence blueprint to cover for their feature
> will definitely not go through a Gerrit-powered process, so the
> temptation to fly your smallish features below the radar ("not worth
> this whole blueprint approval thing") and just get them merged will be
> high. I fear it will overall result in work being less tracked, rather
> than more tracked.
> 
> FWIW we plan to enforce a bug reference / blueprint reference in changes
> with StoryBoard, but it comes with autocreation of missing
> bugs/blueprints (from the commit message) to lower the developer hassle.
> 
> That being said, don't let my skepticism go into the way of your
> experimentation. We definitely need to improve in this area. I'd like to
> have a cross-project session on feature planning/tracking at the Design
> Summit, where we can brainstorm more ideas around this.

Honestly, right now we're not trying to fix all things (or enforce all
things). We're trying to fix a very specific issue: because our tooling
fails us on blueprint approval (it's entirely impossible to have
a detailed conversation in launchpad), we're failing open with a bunch of
approved and targeted blueprints that no one understands.

I want StoryBoard more than anyone else. However future Puppies and
Unicorns don't fix real problems right now. With the tools already at
our disposal, just using them a different way, I think we can fix some
real problems. I think, more importantly, we're going to discover a
whole new class of problems because we're not blocked on launchpad.

And the fact that the Nova team and the Ops team came up with the same
idea, independently, within a week of each other, is a reasonable
indication that it's worth trying. Because it seriously can't be worse
than the current model. :)

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] FFE Request: monitoring-network-from-opendaylight

2014-03-07 Thread Julien Danjou
On Fri, Mar 07 2014, Yuuichi Fujioka wrote:

Hi,

> We would like to request FFE for monitoring-network-from-opendaylight.[1][2]
> Unfortunately, it was not merged by Icehouse-3.
>
> This is the first driver for bp/monitoring-network (which was merged). [3]
> We strongly believe this feature will enhance ceilometer's value.
>
> Many people are interested in SDN, and OpenDaylight is one of
> the open source SDN controllers.
> Information collected from OpenDaylight will enable valuable features,
> e.g. optimization of resource placement, testing routes of the virtual
> and physical networks, etc.
>
> Also, this feature is just a plugin and doesn't change core logic.
> We feel it is low risk.

Green light from me if our dear release manager is OK with it too.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] FFE Request: monitoring-network-from-opendaylight

2014-03-07 Thread Sean Dague
On 03/07/2014 07:22 AM, Julien Danjou wrote:
> On Fri, Mar 07 2014, Yuuichi Fujioka wrote:
> 
> Hi,
> 
>> We would like to request FFE for monitoring-network-from-opendaylight.[1][2]
>> Unfortunately, it was not merged by Icehouse-3.
>>
>> This is the first driver of the bp/monitoring-network.(It was merged)[3]
>> We strongly believe this feature will enhance ceilometer's value.
>>
>> Because many people are interested in the SDN, and the OpenDaylight is one 
>> of the Open Source SDN Controllers.
>> Collected information from the OpenDaylight will make something of value.
>> E.g. optimization of the resource location, testing route of the virtual
>> network and physical network and etc.
>>
>> And this feature is one of the plugins and doesn't change core logic.
>> We feel it is low risk.
> 
> Green light from me if our dear release manager is OK with it too.



Are the reviews already posted? And are there 2 ceilometer core members
signed up for review (names please)? Are we sure this is mergeable by
Tuesday - the pre-release meeting?

If the answer to all of those are yes, then I approve.

But remember, the project meeting on Tuesday is a hard deadline. If it's not
merged by then, it needs to be deferred.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Miguel Angel Ajo


I thought of this option but didn't consider it further, as it's somewhat
risky to expose an RPC endpoint executing privileged (even filtered) commands.

If I'm not wrong, once you have credentials for messaging, you can
send messages to any endpoint, even filtered ones, so I see this as a
higher-risk option.

And btw, if we add RPC in the middle, it's possible that all those
system call delays increase, or don't decrease as much as would be desirable.


On 03/07/2014 10:06 AM, Yuriy Taraday wrote:

Another option would be to allow rootwrap to run in daemon mode and
provide RPC interface. This way Neutron can spawn rootwrap (with its
CPython startup overhead) once and send new commands to be run later
over UNIX socket.



This way we won't need learn new language (C/C++), adopt new toolchain
(RPython, Cython, whatever else) and still get secure way to run
commands with root priviledges.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Stephen Gran

Hi,

Given that Yuriy says explicitly 'unix socket', I don't think he means 
'MQ' when he says 'RPC'.  I think he just means a daemon listening on a 
unix socket for execution requests.  This seems like a reasonably 
sensible idea to me.
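
For illustration, a minimal sketch of that shape: a long-lived process
receiving a NUL-separated argv over a UNIX socket and running it. Everything
here (the socket path, the wire format) is made up, and a real daemon would
have to validate each command against the rootwrap filters before executing
it:

import os
import socket
import subprocess

SOCK_PATH = '/var/run/rootwrap.sock'  # hypothetical path

def serve():
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    os.chmod(SOCK_PATH, 0o600)  # only the owning service user may connect
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        argv = conn.recv(65536).decode('utf-8').split('\0')
        # A real daemon must match argv against the rootwrap filters here
        # and reject anything that doesn't match.
        output = subprocess.check_output(argv)
        conn.sendall(output)
        conn.close()

The win is that the interpreter and the filter definitions are loaded once,
instead of once per spawned command.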


Cheers,

On 07/03/14 12:52, Miguel Angel Ajo wrote:


I thought of this option but didn't consider it further, as it's somewhat
risky to expose an RPC endpoint executing privileged (even filtered) commands.

If I'm not wrong, once you have credentials for messaging, you can
send messages to any endpoint, even filtered ones, so I see this as a
higher-risk option.

And btw, if we add RPC in the middle, it's possible that all those
system call delays increase, or don't decrease as much as would be desirable.


On 03/07/2014 10:06 AM, Yuriy Taraday wrote:

Another option would be to allow rootwrap to run in daemon mode and
provide RPC interface. This way Neutron can spawn rootwrap (with its
CPython startup overhead) once and send new commands to be run later
over UNIX socket.



This way we won't need learn new language (C/C++), adopt new toolchain
(RPython, Cython, whatever else) and still get secure way to run
commands with root priviledges.


--
Stephen Gran
Senior Systems Integrator - theguardian.com
Please consider the environment before printing this email.
--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-07 Thread Paul Marshall

On Mar 6, 2014, at 9:56 PM, Zhangleiqiang  wrote:

>> get them working. For example, in a devstack VM the only way I can get the
>> iSCSI target to show the new size (after an lvextend) is to delete and 
>> recreate
>> the target, something jgriffiths said he doesn't want to support ;-).
> 
> I know a method that can achieve it, but it may need the instance to pause
> first (during step 2 below), though without detaching/reattaching. The steps
> are as follows:
> 
> 1. Extend the LV
> 2.Refresh the size info in tgtd:
>  a) tgtadm --op show --mode target # get the "tid" and "lun_id" properties of 
> target related to the lv; the "size" property in output result is still the 
> old size before lvextend
>  b) tgtadm --op delete --mode logicalunit --tid={tid} --lun={lun_id}  # 
> delete lun mapping in tgtd
>  c) tgtadm --op new --mode logicalunit --tid={tid} --lun={lun_id} 
> --backing-store=/dev/cinder-volumes/{lv-name} # re-add lun mapping

Sure, this is my current workaround, but it's what I thought we *didn't* want 
to have to do.

>  d) tgtadm --op show --mode target #now the "size" property in output result 
> is the new size
> *PS*:  
> a) During the procedure, the corresponding device on the compute node won't 
> disappear. But I am not sure of the result if the instance has I/O on this
> volume, so the instance may need to be paused during this procedure.

Yeah, but pausing the instance isn't an online extend. As soon as the user 
can't interact with their instance, even briefly, it's an offline extend in my 
view.

> b) Maybe we can modify tgtadm to make it support an operation that just
> "refreshes" the size of the backing store.

Maybe. I'd be interested in any thoughts/patches you have to accomplish this. :)

> 
> 3. Rescan the lun info in compute node: iscsiadm -m node --targetname 
> {target_name} -R

Yeah, right now as part of this work I'm adding two extensions to Nova. One to 
issue this rescan on the compute host and another to get the size of the block 
device so Cinder can poll until the device is actually the new size (not an 
ideal solution, but so far I don't have a better one).

> 
>> I also
>> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
>> etc.).
> 
> Till now, we have focused on the "volume" which is based on a *block device*.
> Under this scenario, we must first try to "extend" the volume and notify the
> hypervisor; I think one of the preconditions is to make sure the extend
> operation will not affect the I/O in the instance.
> 
> However, there is another scenario which may be a little different. For
> *online-extend* virtual disks (qcow2, sparse, etc.) whose backend storage is
> a file system (ext3, nfs, glusterfs, etc.), the current implementation of
> QEMU is as follows:
> 1. QEMU drain all IO
> 2. *QEMU* extend the virtual disk
> 3. QEMU resume IO
> 
> The difference is that the *extend* work needs to be done by QEMU rather than
> the cinder driver.
> 
>> Feel free to ping me on IRC (pdmars).
> 
> I don't know your time zone, we can continue the discussion on IRC, :)

Good point. :) I'm in the US central time zone.

Paul

> 
> --
> zhangleiqiang
> 
> Best Regards
> 
> 
>> -Original Message-
>> From: Paul Marshall [mailto:paul.marsh...@rackspace.com]
>> Sent: Thursday, March 06, 2014 12:56 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Luohao (brian)
>> Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the
>> online-extend feature to cinder ?
>> 
>> Hey,
>> 
>> Sorry I missed this thread a couple of days ago. I am working on a 
>> first-pass of
>> this and hope to have something soon. So far I've mostly focused on getting
>> OpenVZ and the HP LH SAN driver working for online extend. I've had trouble
>> with libvirt+kvm+lvm so I'd love some help there if you have ideas about how 
>> to
>> get them working. For example, in a devstack VM the only way I can get the
>> iSCSI target to show the new size (after an lvextend) is to delete and 
>> recreate
>> the target, something jgriffiths said he doesn't want to support ;-). I also
>> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
>> etc.). Feel free to ping me on IRC (pdmars).
>> 
>> Paul
>> 
>> 
>> On Mar 3, 2014, at 8:50 PM, Zhangleiqiang 
>> wrote:
>> 
>>> @john.griffith. Thanks for your information.
>>> 
>>> I have read the BP you mentioned ([1]) and have some rough thoughts about
>> it.
>>> 
>>> As far as I know, the corresponding online-extend command for libvirt is
>> "blockresize", and for Qemu, the implement differs among disk formats.
>>> 
>>> For the regular qcow2/raw disk file, qemu will take charge of the 
>>> drain_all_io
>> and truncate_disk actions, but for raw block device, qemu will only check if 
>> the
>> *Actual* size of the device is larger than current size.
>>> 
>>> I think the former needs more consideration, because the extend work is done
>> by libvirt, Nova may need to do this first and then notify Cinder. But i

Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Murray, Paul (HP Cloud Services)
The principle is excellent, I think there are two points/objectives worth 
keeping in mind:

1. We need an effective way to make and record the design decisions
2. We should make the whole development process easier

In my mind the point of the design review part is to agree up front something 
that should not be over-turned (or is hard to over-turn) late in patch review. 
I agree with others that a patch should not be blocked (or should be hard to 
block) because the reviewer disagrees with an agreed design decision. Perhaps 
an author can ask for a -2 or -1 to be removed if they can point out the agreed 
design decision, without having to reopen the debate. 

I also think that blueprints tend to have parts that should be agreed up front, 
like changes to apis, database migrations, or integration points in general. 
They also have parts that don't need to be agreed up front; there is no point 
in a heavyweight process for everything. Some blueprints might not need any of 
this at all. For example, a new plugin for the filter scheduler might not need a 
lot of design review, or at least, adding the design review is unlikely to ease 
the development cycle.

So, we could use the blueprint template to identify things that need to be 
agreed in the design review. These could include anything the proposer wants 
agreed up front and possibly specifics of a defined set of integration points. 
Some blueprints might have nothing to be formally agreed in design review. 
Additionally, sometimes plans change, so it should be possible to return to 
design review. Possibly the notion of a design decision could be broken out 
from a blueprint in the same way as a patch-set? Maybe it only makes sense to 
do it as a whole? Certainly design decisions should be made in relation to 
other blueprints and so it should be easy to see that there are two blueprints 
making related design decisions.

The main point is that there should be an identifiable set of design decisions 
that have been reviewed and agreed, and that can easily be found.

**The reward for authors in doing this is that they can defend their patch-set 
against late objections to design decisions.**
**The reward for reviewers is that they get a way to know what has been agreed 
in relation to a blueprint.**

On another point...
...sometimes I fall foul of writing code using an approach I have seen in the 
code base, only to be told it was decided not to do it that way anymore. 
Sometimes I had no way of knowing that, and exactly what has been decided, when 
it was decided, and who did the deciding has been lost. Clearly the PTL and ML 
do help out here, but it would be helpful if such things were easy to find out. 
These kinds of design decision should be reviewed and recorded.

Again, I think it is excellent that this is being addressed.

Paul.



-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: 07 March 2014 12:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint 
review & approval

On 03/07/2014 06:30 AM, Thierry Carrez wrote:
> Sean Dague wrote:
>> One of the issues that the Nova team has definitely hit is Blueprint 
>> overload. At some point there were over 150 blueprints. Many of them 
>> were a single sentence.
>>
>> The results of this have been that design review today is typically 
>> not happening on Blueprint approval, but is instead happening once 
>> the code shows up in the code review. So -1s and -2s on code review 
>> are a mix of design and code review. A big part of which is that 
>> design was never in any way sufficiently reviewed before the code started.
>>
>> In today's Nova meeting a new thought occurred. We already have 
>> Gerrit which is good for reviewing things. It gives you detailed 
>> commenting abilities, voting, and history. Instead of attempting (and 
>> usually
>> failing) on doing blueprint review in launchpad (or launchpad + an 
>> etherpad, or launchpad + a wiki page) we could do something like follows:
>>
>> 1. create bad blueprint
>> 2. create gerrit review with detailed proposal on the blueprint 3. 
>> iterate in gerrit working towards blueprint approval 4. once approved 
>> copy back the approved text into the blueprint (which should now be 
>> sufficiently detailed)
>>
>> Basically blueprints would get design review, and we'd be pretty sure 
>> we liked the approach before the blueprint is approved. This would 
>> hopefully reduce the late design review in the code reviews that's 
>> happening a lot now.
>>
>> There are plenty of niggly details that would be need to be worked 
>> out
>>
>>  * what's the basic text / template format of the design to be 
>> reviewed (probably want a base template for folks to just keep things 
>> consistent).
>>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova 
>> Enhancement Proposals), or is it happening in a separate gerrit tree.
>>  * are there timelines for bluepri

Re: [openstack-dev] stored userdata

2014-03-07 Thread Steven Hardy
On Fri, Mar 07, 2014 at 05:05:46AM +, Hiroyuki Eguchi wrote:
> I'm envisioning a stored userdata feature.
> < https://blueprints.launchpad.net/nova/+spec/stored-userdata >
> 
> Currently, OpenStack allows a user to execute a script or send a configuration
> file when creating an instance by using the --user-data /path/to/filename option.
> 
> But, in order to use this option, all users must input the userdata every time,
> so we need to store the userdata in the database so that users can manage
> userdata more easily.
> 
> I'm planning to develop these Nova-APIs.
>  - nova userdata-create
>  - nova userdata-update
>  - nova userdata-delete
>  - nova userdata-show
>  - nova userdata-list
> 
> Users can specify a userdata_name managed by the Nova DB or /path/to/filename
> in the --user-data option.
> 
>  - nova boot --user-data  ...
> 
> 
> If you have any comments or suggestion, please let me know.
> And please let me know if there's any discussion about this.

Have you considered instead using a heat template to specify the userdata?

Arguably this could provide a simpler and more flexible interface, because
the userdata can be easily version controlled, for example via git.

https://github.com/openstack/heat-templates/blob/master/hot/F20/WordPress_Native.yaml#L81

Also I'd like to know more about the use-case requiring update of userdata -
my understanding is that it cannot be updated, which makes sense
considering the most likely consumer of the userdata is cloud-init, which
is a run-once tool.

Heat works around this by providing access to resource Metadata (this is in
addition to the metadata key/value pairs made available via the nova
metadata API), which can be defined in the template separately from the
user_data and changes can be polled for (e.g. via the heat-cfntools cfn-hup
agent, or the tripleo os-collect-config tool).

We use this method to enable configuration changes to an existing instance
(either an OS::Nova::Server or AWS::EC2::Instance heat resource), so it
would be interesting to understand if Heat may satisfy your use-case (and if
not, why).

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-07 Thread Paul Marshall

On Mar 7, 2014, at 7:55 AM, Paul Marshall 
 wrote:

> 
> On Mar 6, 2014, at 9:56 PM, Zhangleiqiang  wrote:
> 
>>> get them working. For example, in a devstack VM the only way I can get the
>>> iSCSI target to show the new size (after an lvextend) is to delete and 
>>> recreate
>>> the target, something jgriffiths said he doesn't want to support ;-).
>> 
>> I know a method can achieve it, but it maybe need the instance to pause 
>> first (during the step2 below), but without detaching/reattaching. The steps 
>> as follows:
>> 
>> 1. Extend the LV
>> 2.Refresh the size info in tgtd:
>> a) tgtadm --op show --mode target # get the "tid" and "lun_id" properties of 
>> target related to the lv; the "size" property in output result is still the 
>> old size before lvextend
>> b) tgtadm --op delete --mode logicalunit --tid={tid} --lun={lun_id}  # 
>> delete lun mapping in tgtd
>> c) tgtadm --op new --mode logicalunit --tid={tid} --lun={lun_id} 
>> --backing-store=/dev/cinder-volumes/{lv-name} # re-add lun mapping
> 
> Sure, this is my current workaround, but it's what I thought we *didn't* want 
> to have to do.
> 
>> d) tgtadm --op show --mode target #now the "size" property in output result 
>> is the new size
>> *PS*:  
>> a) During the procedure, the corresponding device on the compute node won't 
>> disappear. But I am not sure the result if Instance has IO on this volume, 
>> so maybe the instance may be paused during this procedure.
> 
> Yeah, but pausing the instance isn't an online extend. As soon as the user 
> can't interact with their instance, even briefly, it's an offline extend in 
> my view.
> 
>> b) Maybe we can modify tgtadm, and make it support the operation which is 
>> just "refresh" the size of backing store.
> 
> Maybe. I'd be interested in any thoughts/patches you have to accomplish this. 
> :)
> 
>> 
>> 3. Rescan the lun info in compute node: iscsiadm -m node --targetname 
>> {target_name} -R
> 
> Yeah, right now as part of this work I'm adding two extensions to Nova. One 
> to issue this rescan on the compute host and another to get the size of the 
> block device so Cinder can poll until the device is actually the new size 
> (not an ideal solution, but so far I don't have a better one).

Sorry, I should correct myself here: I'm adding one extension with two calls. 
One to issue the rescan on the compute host and one to get the blockdev size so 
Cinder can wait until it's actually the new size.
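
For illustration, a rough sketch of the waiting side. blockdev is the standard
Linux utility; the function names and the polling approach are illustrative
only, not the actual extension API:

import subprocess
import time

def blockdev_size(dev):
    # 'blockdev --getsize64' prints the device size in bytes
    return int(subprocess.check_output(['blockdev', '--getsize64', dev]))

def wait_for_new_size(dev, expected_bytes, timeout=60, interval=2):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if blockdev_size(dev) >= expected_bytes:
            return True   # the rescan has propagated the new size
        time.sleep(interval)
    return False  # caller should treat this as an extend failure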

> 
>> 
>>> I also
>>> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
>>> etc.).
>> 
>> Till now, we focused on the "volume" which is based on *block device*. Under 
>> this scenario, we must first try to "extend" the volume and notify the 
>> hypervisor, I think one of the preconditions is to make sure the extend 
>> operation will not affect the IO in Instance.
>> 
>> However, there is another scenario which maybe a little different. For 
>> *online-extend" virtual disks (qcow2, sparse, etc) whose backend storage is 
>> file system (ext3, nfs, glusterfs, etc), the current implementation of QEMU 
>> is as follows:
>> 1. QEMU drain all IO
>> 2. *QEMU* extend the virtual disk
>> 3. QEMU resume IO
>> 
>> The difference is the *extend* work need be done by QEMU other than cinder 
>> driver. 
>> 
>>> Feel free to ping me on IRC (pdmars).
>> 
>> I don't know your time zone, we can continue the discussion on IRC, :)
> 
> Good point. :) I'm in the US central time zone.
> 
> Paul
> 
>> 
>> --
>> zhangleiqiang
>> 
>> Best Regards
>> 
>> 
>>> -Original Message-
>>> From: Paul Marshall [mailto:paul.marsh...@rackspace.com]
>>> Sent: Thursday, March 06, 2014 12:56 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Cc: Luohao (brian)
>>> Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the
>>> online-extend feature to cinder ?
>>> 
>>> Hey,
>>> 
>>> Sorry I missed this thread a couple of days ago. I am working on a 
>>> first-pass of
>>> this and hope to have something soon. So far I've mostly focused on getting
>>> OpenVZ and the HP LH SAN driver working for online extend. I've had trouble
>>> with libvirt+kvm+lvm so I'd love some help there if you have ideas about 
>>> how to
>>> get them working. For example, in a devstack VM the only way I can get the
>>> iSCSI target to show the new size (after an lvextend) is to delete and 
>>> recreate
>>> the target, something jgriffiths said he doesn't want to support ;-). I also
>>> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
>>> etc.). Feel free to ping me on IRC (pdmars).
>>> 
>>> Paul
>>> 
>>> 
>>> On Mar 3, 2014, at 8:50 PM, Zhangleiqiang 
>>> wrote:
>>> 
 @john.griffith. Thanks for your information.
 
 I have read the BP you mentioned ([1]) and have some rough thoughts about
>>> it.
 
 As far as I know, the corresponding online-extend command for libvirt is
>>> "blockresize", and for Qemu, the implement differs among dis

Re: [openstack-dev] [GSoC 2014] Proposal Template

2014-03-07 Thread Masaru Nomura
>Sai,
>
>There may be more than one person on a topic, so it would make sense
>to have additional questions per person. Yes, link to project idea is
>definitely needed.
>
>-- dims
>
>On Thu, Mar 6, 2014 at 11:41 AM, saikrishna sripada
>(krishna1256 at gmail.com) wrote:

>> Hi Masaru,
>>
>> I tried creating the project template following your suggestions. That's
>> really helpful. Only one suggestion:
>>
>> Under the project description, we can give the link to the actual project
>> idea. The remaining details like these can be removed here since they can
>> be redundant:
>>
>> What is the goal?
>> How will you achieve your goal?
>> What would be your milestone?
>> At which time will you complete a sub-task of your project?
>>
>> These details we will be filling in anyway in the project template link,
>> which will be just below on the page. Please confirm.
>>
>> Thanks,
>> --sai krishna.

Hi Sai, dims

Thank you for your replies!

>> Under the project description, we can give the link to the actual project
>> idea. The remaining details like these can be removed here since they can
>> be redundant.

I could've got you wrong, but I guess you mean the 4 examples are trivial,
right? Firstly, the point I wanted to make in the section is to give some
examples to students who haven't had experience of GSoC or other sorts of
projects. I'd say these help them understand what they have to provide.
Secondly, in addition to what dims mentioned, some students would like to
propose their own project. If this is the case, it makes sense as they may
not have a link to a project idea page (though I'm quite sure these people
know exactly what to do).

>Yes, link to project idea is definitely needed.

So, can we add one thing to Project Description?

e.g. Link to a project idea page (if any)

I have no objection to this as it can help reviewers access the page
easily.


Thanks,

Masaru
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stored userdata

2014-03-07 Thread Stephen Gran

On 07/03/14 05:05, Hiroyuki Eguchi wrote:

I'm envisioning a stored userdata feature.
< https://blueprints.launchpad.net/nova/+spec/stored-userdata >

Currently, OpenStack allows a user to execute a script or send a configuration
file when creating an instance by using the --user-data /path/to/filename
option.

But, in order to use this option, all users must input the userdata every time,
so we need to store the userdata in the database so that users can manage
userdata more easily.

I'm planning to develop these Nova-APIs.
  - nova userdata-create
  - nova userdata-update
  - nova userdata-delete
  - nova userdata-show
  - nova userdata-list

Users can specify a userdata_name managed by Nova DB or /path/to/filename in 
--user-data option.

  - nova boot --user-data  ...


If you have any comments or suggestion, please let me know.
And please let me know if there's any discussion about this.
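
For illustration only, the proposed flow might look roughly like this from
the client side (a hedged sketch -- the userdata manager and every
userdata-* call below are hypothetical, mirroring the command list above;
nothing like this exists in python-novaclient today):

    from novaclient.v1_1 import client

    nova = client.Client('user', 'password', 'tenant',
                         'http://keystone:5000/v2.0')

    # Store a named userdata blob once (hypothetical manager)...
    nova.userdata.create(name='web-bootstrap',
                         content=open('/path/to/setup.sh').read())

    # ...then reference it by name on every boot instead of re-sending
    # the file each time:
    image = nova.images.find(name='cirros')
    flavor = nova.flavors.find(name='m1.tiny')
    nova.servers.create('my-vm', image, flavor, userdata='web-bootstrap')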


In general, I think userdata should be WORM.  It certainly is in every 
other cloud setup I am familiar with.  This is the information fed to 
the instance when it boots the first time - having userdata change over 
time means you lose access to the original when you want to go back and 
retrieve it.


I think this would be a regression, and be unexpected behavior.

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com
--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-07 Thread Robert Kukura


On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
Yes, that sounds good to be able to load extensions from a mechanism 
driver.


But another problem I think we have with the ML2 plugin is the list of
extensions supported by default [1].
The extensions should only be loaded by MDs, and the ML2 plugin should only
implement the Neutron core API.


Keep in mind that ML2 supports multiple MDs simultaneously, so no single 
MD can really control what set of extensions are active. Drivers need to 
be able to load private extensions that only pertain to that driver, but 
we also need to be able to share common extensions across subsets of 
drivers. Furthermore, the semantics of the extensions need to be correct 
in the face of multiple co-existing drivers, some of which know about 
the extension, and some of which don't. Getting this properly defined 
and implemented seems like a good goal for juno.
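
As a rough illustration of the driver-loaded-extension idea (a hedged
sketch only -- the extension_alias hook and all names here are
hypothetical, not an agreed ML2 interface):

    from neutron.plugins.ml2 import driver_api as api

    class ProfileMechanismDriver(api.MechanismDriver):
        """Sketch of a driver carrying a private extension."""

        def initialize(self):
            pass

        @property
        def extension_alias(self):
            # The plugin would aggregate these across all loaded drivers
            # instead of hardcoding _supported_extension_aliases.
            return 'network-profile'

        def create_network_precommit(self, context):
            # Drivers that don't know about the attribute simply ignore
            # it, which matches the co-existence semantics above.
            profile = context.current.get('profile')
            if profile:
                self._validate_profile(profile)

        def _validate_profile(self, profile):
            pass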


-Bob



Any thoughts?
Édouard.

[1] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87




On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki wrote:


Hi,

I think it is better to continue the discussion here. It is a good
log :-)

Eugene and I talked about the related topic (allowing drivers to load
extensions) at the Icehouse Summit,
but I could not find enough time to work on it during Icehouse.
I am still interested in implementing it and will register a
blueprint on it.

etherpad in icehouse summit has baseline thought on how to achieve it.
https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
I hope it is a good start point of the discussion.

Thanks,
Akihiro

On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti
<nader.laho...@gmail.com> wrote:
> Hi Kyle,
>
> Just wanted to clarify: Should I continue using this mailing
list to post my
> question/concerns about ML2? Please advise.
>
> Thanks,
> Nader.
>
>
>
> On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery
> <mest...@noironetworks.com>
> wrote:
>>
>> Thanks Edgar, I think this is the appropriate place to continue
this
>> discussion.
>>
>>
>> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana
>> <emag...@plumgrid.com> wrote:
>>>
>>> Nader,
>>>
>>> I would encourage you to first discuss the possible extension
with the
>>> ML2 team. Rober and Kyle are leading this effort and they have
a IRC meeting
>>> every week:
>>>
https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>>>
>>> Bring your concerns on this meeting and get the right feedback.
>>>
>>> Thanks,
>>>
>>> Edgar
>>>
>>> From: Nader Lahouti <nader.laho...@gmail.com>
>>> Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
>>> Date: Thursday, March 6, 2014 12:14 PM
>>> To: OpenStack List <openstack-dev@lists.openstack.org>
>>> Subject: Re: [openstack-dev] [Neutron][ML2]
>>>
>>> Hi Aaron,
>>>
>>> I appreciate your reply.
>>>
>>> Here are some more details on what I'm trying to do:
>>> I need to add a new attribute to the network resource using extensions
>>> (i.e. a network config profile) and use it in the mechanism driver (in
>>> the create_network_precommit/postcommit).
>>> If I use the current implementation of Ml2Plugin, when a call is made to
>>> the mechanism driver's create_network_precommit/postcommit, the new
>>> attribute is not included in the 'mech_context'.
>>> Here is the code from Ml2Plugin:
>>> class Ml2Plugin(...):
>>>     ...
>>>     def create_network(self, context, network):
>>>         net_data = network['network']
>>>         ...
>>>         with session.begin(subtransactions=True):
>>>             self._ensure_default_security_group(context, tenant_id)
>>>             result = super(Ml2Plugin, self).create_network(context,
>>>                                                            network)
>>>             network_id = result['id']
>>>             ...
>>>             mech_context = driver_context.NetworkContext(self, context,
>>>                                                          result)
>>>             self.mechanism_manager.create_network_precommit(mech_context)
>>>
>>> Also, I need to include the new extension in
>>> _supported_extension_aliases.
>>>
>>> So to avoid changes in the existing code, I was going to create my own
>>> plugin (which will be very similar to Ml2Plugin) and use it as the
>>> core_plugin.
>>>
>>> Please advise on the right solution for implementing that.
>>>
>>> Regards,
>>> Nader.
>>>
>>>
>>> On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen
>>> <aaronoro...@gmail.com>
>>> wrote:

 Hi Nader,

 Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one
 plugin in another. I'm guessing you probably wire a driver that ML2 can
 use, though it's hard to tell from the inf

Re: [openstack-dev] Incubation Request: Murano

2014-03-07 Thread Anne Gentle
On Fri, Mar 7, 2014 at 2:53 AM, Thierry Carrez wrote:

> Steven Dake wrote:
> > I'm a bit confused as well as to how a incubated project would be
> > differentiated from a integrated project in one program.  This may have
> > already been discussed by the TC.  For example, Red Hat doesn't
> > officially support incubated projects, but we officially support (with
> > our full sales/training/documentation/support/ plus a whole bunch of
> > other Red Hat internalisms) Integrated projects.  OpenStack vendors need
> > a way to let customers know (through an upstream page?) what the status
> > of a project in a specific program is, so we can appropriately set
> > expectations with the community and customers.
>
> The reference list lives in the governance git repository:
>
>
> http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
>
>
A bit of metadata I'd like added to the programs.yaml file is the release in
which each project held each status, integrated or incubated. Shall I propose
a patch?

Currently you have to look at italicizations on
https://wiki.openstack.org/wiki/Programs to determine integrated/incubated
status, but you can't really tell in which release that applied.

Anne


> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-07 Thread Imre Farkas

On 03/07/2014 10:30 AM, Jiří Stránský wrote:

Hi,

there's one step in cloud initialization that is performed over SSH --
calling "keystone-manage pki_setup". Here's the relevant code in
keystone-init [1], here's a review for moving the functionality to
os-cloud-config [2].

The consequence of this is that Tuskar will need a passwordless ssh key to
access the overcloud controller. I consider this suboptimal for two reasons:

* It creates another security concern.

* AFAIK nova is only capable of injecting one public SSH key into
authorized_keys on the deployed machine, which means we can either give
it Tuskar's public key and allow Tuskar to initialize overcloud, or we
can give it admin's custom public key and allow admin to ssh into
overcloud, but not both. (Please correct me if i'm mistaken.) We could
probably work around this issue by having Tuskar do the user key
injection as part of os-cloud-config, but it's a bit clumsy.


This goes outside the scope of my current knowledge; I'm hoping someone
knows the answer: could pki_setup be run by combining the powers of Heat and
os-config-refresh? (I presume there's some reason why we're not doing
this already.) I think it would help us a good bit if we could avoid
having to SSH from Tuskar to overcloud.


Yeah, it came up a couple of times on the list. The current solution exists
because in an HA setup the nodes can't decide on their own which one
should run pki_setup.
Robert described this topic, and why it needs to be initialized
externally, during a weekly meeting last December. Check the topic
'After heat stack-create init operations (lsmola)': 
http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html


Imre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-07 Thread Uma Goring
+1 to the idea of having the mini-summit in Atlanta a few days prior to the OS 
summit.



On Mar 7, 2014, at 3:37 AM, "Eugene Nikanorov" 
<enikano...@mirantis.com> wrote:

I think a mini summit is no worse than the summit itself.
Everyone who wants to participate can join.
In fact what we really need is a certain time span of focused work.
ML and meetings are ok, it's just that dedicated in-person meetings (design
sessions) could be more productive.
I'm thinking: what if such a mini-summit is held in Atlanta 1-3 days prior to
the OS summit?
That could save attendees a lot of time/money.

Thanks,
Eugene.



On Fri, Mar 7, 2014 at 9:51 AM, Mark McClain 
<mmccl...@yahoo-inc.com> wrote:

On Mar 6, 2014, at 4:31 PM, Jay Pipes 
<jaypi...@gmail.com> wrote:

> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
>> +1
>>
>> I think if we can have it before the Juno summit, we can take
>> concrete, well thought-out proposals to the community at the summit.
>
> Unless something has changed starting at the Hong Kong design summit
> (which unfortunately I was not able to attend), the design summits have
> always been a place to gather to *discuss* and *debate* proposed
> blueprints and design specs. It has never been about a gathering to
> rubber-stamp proposals that have already been hashed out in private
> somewhere else.

You are correct that is the goal of the design summit.  While I do think it is 
wise to discuss the next steps with LBaaS at this point in time, I am not a 
proponent of in person mini-design summits.  Many contributors to LBaaS are 
distributed all over the globe, and scheduling a mini summit with short notice 
will exclude valuable contributors to the team.  I’d prefer to see an open 
process with discussions on the mailing list and specially scheduled IRC 
meetings to discuss the ideas.

mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] FFE Request: monitoring-network-from-opendaylight

2014-03-07 Thread Julien Danjou
On Fri, Mar 07 2014, Sean Dague wrote:

> Are the reviews already posted?

Yes.

> And are there 2 ceilometer core members signed up for review (names
> please)?

I can be one, and I'm sure Lianhao Lu and Gordon Chung will continue to
look at it.

> Are we sure this is mergable by Tuesday - pre release meeting?

Yep.

> If the answer to all of those are yes, then I approve.

Thanks!

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] API weekly meeting

2014-03-07 Thread Andrew Laski

On 03/07/14 at 11:15am, Christopher Yeoh wrote:

Hi,

I'd like to start a weekly IRC meeting for those interested in
discussing Nova API issues. I think it would be a useful forum for:

- People to keep up with what work is going on in the API and where it's
 headed.
- Cloud providers, SDK maintainers and users of the REST API to provide
 feedback about the API and what they want out of it.
- Help coordinate the development work on the API (both v2 and v3)

If you're interested in attending please respond and include what time
zone you're in so we can work out the best time to meet.


I'm interested as someone looking to make large changes to the v3 API 
(tasks).  I'm in EST/EDT.




Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-07 Thread Stephen Wong
+1 - that is a good idea! Having it several days before the J-Summit in
Atlanta would be great.

- Stephen


On Fri, Mar 7, 2014 at 1:33 AM, Eugene Nikanorov wrote:

> I think a mini summit is no worse than the summit itself.
> Everyone who wants to participate can join.
> In fact what we really need is a certain time span of focused work.
> ML and meetings are ok, it's just that dedicated in-person meetings (design
> sessions) could be more productive.
> I'm thinking: what if such a mini-summit is held in Atlanta 1-3 days prior
> to the OS summit?
> That could save attendees a lot of time/money.
>
> Thanks,
> Eugene.
>
>
>
> On Fri, Mar 7, 2014 at 9:51 AM, Mark McClain wrote:
>
>>
>> On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:
>>
>> > On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
>> >> +1
>> >>
>> >> I think if we can have it before the Juno summit, we can take
>> >> concrete, well thought-out proposals to the community at the summit.
>> >
>> > Unless something has changed starting at the Hong Kong design summit
>> > (which unfortunately I was not able to attend), the design summits have
>> > always been a place to gather to *discuss* and *debate* proposed
>> > blueprints and design specs. It has never been about a gathering to
>> > rubber-stamp proposals that have already been hashed out in private
>> > somewhere else.
>>
>> You are correct that is the goal of the design summit.  While I do think
>> it is wise to discuss the next steps with LBaaS at this point in time, I am
>> not a proponent of in person mini-design summits.  Many contributors to
>> LBaaS are distributed all over the globe, and scheduling a mini summit
>> with short notice will exclude valuable contributors to the team.  I'd
>> prefer to see an open process with discussions on the mailing list and
>> specially scheduled IRC meetings to discuss the ideas.
>>
>> mark
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Nathanael Burton
On Mar 6, 2014 1:11 PM, "Sean Dague"  wrote:
>
> One of the issues that the Nova team has definitely hit is Blueprint
> overload. At some point there were over 150 blueprints. Many of them
> were a single sentence.
>
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code
> shows up in the code review. So -1s and -2s on code review are a mix of
> design and code review. A big part of which is that design was never in
> any way sufficiently reviewed before the code started.
>
> In today's Nova meeting a new thought occurred. We already have Gerrit
> which is good for reviewing things. It gives you detailed commenting
> abilities, voting, and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an
> etherpad, or launchpad + a wiki page) we could do something like follows:
>
> 1. create bad blueprint
> 2. create gerrit review with detailed proposal on the blueprint
> 3. iterate in gerrit working towards blueprint approval
> 4. once approved copy back the approved text into the blueprint (which
> should now be sufficiently detailed)
>
> Basically blueprints would get design review, and we'd be pretty sure we
> liked the approach before the blueprint is approved. This would
> hopefully reduce the late design review in the code reviews that's
> happening a lot now.
>
> There are plenty of niggly details that would be need to be worked out
>
>  * what's the basic text / template format of the design to be reviewed
> (probably want a base template for folks to just keep things consistent).
>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> Enhancement Proposals), or is it happening in a separate gerrit tree.
>  * are there timelines for blueprint approval in a cycle? after which
> point, we don't review any new items.
>
> Anyway, plenty of details to be sorted. However we should figure out if
> the big idea has support before we sort out the details on this one.
>
> Launchpad blueprints will still be used for tracking once things are
> approved, but this will give us a standard way to iterate on that
> content and get to agreement on approach.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I'm very much in favor of this idea. One of the topics we discussed at the
OpenStack Operator day this week was just this problem: the lack of
alerts/notifications for new blueprint creation, and the lack of insight into
the design, feedback, and approval of blueprints. There are many cases where
Operators would like to provide feedback on the design of new features early
on to address upgrade compatibility, scale, security, HA, and other issues of
concern to Operators.  This would be a great way to get them more involved in
the process.

Nate
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Matt Riedemann



On 3/7/2014 3:23 AM, Daniel P. Berrange wrote:

On Thu, Mar 06, 2014 at 03:46:24PM -0600, James Carey wrote:

 Please consider a FFE for i18n Message improvements:
BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages

 The base enablement for lazy translation has already been sync'd from
oslo.   This patch was to enable lazy translation support in Nova.  It is
titled re-enable lazy translation because this was enabled during Havana
but was pulled due to issues that have since been resolved.

 In order to enable lazy translation it is necessary to do the
following things:

   (1) Fix a bug in oslo with respect to how keywords are extracted from
the format strings when saving replacement text for use when the message
translation is done.   This is
https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
on a fix for in oslo.  Once that is complete it will need to be sync'd
into nova.

   (2) Remove concatenation (+) of translatable messages.  The current
class that is used to hold the translatable message (gettextutils.Message)
does not support concatenation.  There were a few cases in Nova where this
was done and they are converted to other means of combining the strings in:
https://review.openstack.org/#/c/78095 Remove use of concatenation on
messages

   (3) Remove the use of str() on exceptions.  The intent of this is to
return the message contained in the exception, but these messages may
contain unicode, so str cannot be used on them and gettextutils.Message
enforces this.  Thus these need
to either be removed and allow python formatting to do the right thing, or
changed to unicode().  Since unicode() will change to str() in Py3, the
forward compatible six.text_type() is used instead.  This is done in:
https://review.openstack.org/#/c/78096 Remove use of str() on exceptions


Since we're not supporting Py3 anytime soon, this patch does not seem
important enough to justify an FFE. Anytime we do Py3 portability fixes
I'd also expect there to be some hacking check that validates
we won't regress on that in the future too.


   (4) The addition of the call that enables the use of lazy messages. This
is in:
https://review.openstack.org/#/c/73706 Re-enable lazy translation.

 Lazy translation has been enabled in the other projects so it would be
beneficial to be consistent with the other projects with respect to
message translation.  I have tested that the changes in (2) and (3) work
when lazy translation is not enabled.  Thus if a problem is found, the two
line change in (4) could be removed to get to the previous behavior.

 I've been talking to Matt Riedemann and Dan Berrange about this.  Matt
has agreed to be a sponsor.


I'll be a sponsor once there is a clear solution proposed to the bug
in point 1.


Regards,
Daniel



As far as I understand this is not a py3 issue: six.text_type was used
because the alternative was converting usage of str() to unicode(),
and unicode() doesn't exist in py3, so six.text_type is used instead.


I agree that if all instances of str() in Nova need to be changed to 
six.text_type for this to work we need a hacking check for it.
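
To make points (2) and (3) above concrete, here is a small illustration of
the failure mode and the fix (illustrative strings only, not code from the
patches under review):

    # -*- coding: utf-8 -*-
    import six

    class VolumeError(Exception):
        def __init__(self, msg):
            self.msg = msg
            super(VolumeError, self).__init__(msg)

    try:
        raise VolumeError(u'volume non trouv\xe9')  # non-ASCII message
    except VolumeError as e:
        # Point (3): str(e) raises UnicodeEncodeError on Python 2 for
        # this message, and gettextutils.Message forbids str() outright;
        # the safe, Py3-forward spelling is:
        detail = six.text_type(e)

    # Point (2): lazy Message objects don't support '+', so instead of
    #     msg = _('Failed: ') + detail
    # combine the strings via formatting:
    #     msg = _('Failed: %s') % detail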


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-03-07 Thread Dina Belova
Hello, OS folks!

Thanks everyone for taking part in our weekly meeting :)

Meeting minutes are:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-07-15.01.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-07-15.01.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-07-15.01.log.html

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Matt Riedemann



On 3/7/2014 4:15 AM, Sean Dague wrote:

On 03/06/2014 04:46 PM, James Carey wrote:

 Please consider a FFE for i18n Message improvements:
BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages

 The base enablement for lazy translation has already been sync'd
from oslo.   This patch was to enable lazy translation support in Nova.
  It is titled re-enable lazy translation because this was enabled during
Havana but was pulled due to issues that have since been resolved.

 In order to enable lazy translation it is necessary to do the
following things:

   (1) Fix a bug in oslo with respect to how keywords are extracted from
the format strings when saving replacement text for use when the message
translation is done.   This is
https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
on a fix for in oslo.  Once that is complete it will need to be sync'd
into nova.

   (2) Remove concatenation (+) of translatable messages.  The current
class that is used to hold the translatable message
(gettextutils.Message) does not support concatenation.  There were a few
cases in Nova where this was done and they are converted to other means
of combining the strings in:
https://review.openstack.org/#/c/78095 Remove use of concatenation on
messages

   (3) Remove the use of str() on exceptions.  The intent of this is to
return the message contained in the exception, but these messages may
contain unicode, so str cannot be used on them and gettextutils.Message
enforces this.  Thus these need
to either be removed and allow python formatting to do the right thing,
or changed to unicode().  Since unicode() will change to str() in Py3,
the forward compatible six.text_type() is used instead.  This is done in:
https://review.openstack.org/#/c/78096 Remove use of str() on exceptions

   (4) The addition of the call that enables the use of lazy messages.
  This is in:
https://review.openstack.org/#/c/73706 Re-enable lazy translation.

 Lazy translation has been enabled in the other projects so it would
be beneficial to be consistent with the other projects with respect to
message translation.


Unless it has landed in *every other* integrated project besides Nova, I
don't find this compelling.

I have tested that the changes in (2) and (3) work

when lazy translation is not enabled.  Thus if a problem is found, the
two line change in (4) could be removed to get to the previous behavior.

 I've been talking to Matt Riedemann and Dan Berrange about this.
  Matt has agreed to be a sponsor.


If this is enabled in other projects, where is the Tempest scenario test
that actually demonstrates that this is working on real installs?

I get that everyone has features that didn't hit. However, now is not
the time for that; now is the time for people to get focused on bug
hunting. And especially if we are talking about *another* oslo sync.

-1

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The Tempest requirement just came up yesterday.  FWIW, this i18n stuff
has been working its way in since Grizzly, and the Tempest requirement
is new.  I'm not saying it's not valid, but the timing sucks -
but that's life.


Also, the oslo sync would be to one module, gettextutils, which I don't 
think pulls in anything else from oslo.


Anyway, this is in Keystone, Glance, Cinder, Neutron and Ceilometer at 
least.  Patches are working their way through Heat as I understand it.


I'm not trying to turn this into a crusade, just trying to get out what 
I know about the current state of things.  I'll let Jim Carey or Jay 
Bryant discuss it more since they've been more involved in the 
blueprints across all the projects.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Steve Gordon
- Original Message -
> On Thu, Mar 6, 2014 at 12:05 PM, Sean Dague  wrote:

> 
> I think this is really worthwhile to try -- and it might offer an
> interesting, readable history of decisions made. Plus how funny it was also
> brought up at the Ops Summit. Convergence, cool.
> 
> It also goes along with our hope to move API design docs into the repo.
> 
> Other projects up to try it? The only possible addition is that we might
> need to work out is cross-project blueprints and which repo should those
> live in? We're doing well on integration, be careful about siloing.
> 
> Anne

TBH tracking cross-project blueprint impact is a problem *today*: typically we
end up with either only one of the involved projects having a blueprint for the
feature, or all of them having one (if you are lucky they might at least link to
the same design on the wiki, but often not ;)). I am hoping that is something
that can ultimately be addressed in storyboard, but I am unsure how we would
resolve it as part of this proposal, unless instead you had a central
blueprint repository and used a tag in the review to indicate which projects
are impacted/involved?

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-07 Thread Robert Li (baoli)
Hi Akihiro,

In the case of IPv6 RA, its source IP is a Link Local Address from the
router's RA advertising interface. This LLA address is automatically
generated and not saved in the neutron port DB. We are exploring the idea
of retrieving this LLA if a native openstack RA service is running on the
subnet.

Would SG rules be needed with a provider net in which the RA service is
running externally to openstack?

In the case of IPv4 DHCP, the dhcp port is created by the dhcp service,
and the dhcp server ip address is retrieved from this dhcp port. If the
dhcp server is running outside of openstack, and if we'd only allow dhcp
packets from this server, how is it done now?

thanks,
Robert

On 3/7/14 12:00 AM, "Akihiro Motoki"  wrote:

>I wonder why RA needs to be exposed by the security group API.
>Does a user need to configure a security group to allow IPv6 RA, or
>should it be allowed on the infra side?
>
>In the current implementation DHCP packets are allowed by a provider
>rule (which is hardcoded in neutron code now).
>I think the role of IPv6 RA is similar to DHCP in IPv4. If so, we
>don't need to expose RA in security group API.
>Am I missing something?
>
>Thanks,
>Akihiro
>
>On Mon, Mar 3, 2014 at 10:39 PM, Xuhan Peng  wrote:
>> I created a new blueprint [1] which is triggered by the requirement to
>> allow IPv6 Router Advertisement security group rules on compute nodes
>> in my on-going code review [2].
>>
>> Currently, only security group rule direction, protocol, ethertype and
>> port range are supported by the neutron security group rule data
>> structure. To allow Router Advertisements coming from the network node
>> or a provider network to VMs on compute nodes, we need to specify the
>> ICMP type to only allow RA from known hosts (the network node's dnsmasq
>> bound IP or a known provider gateway).
>>
>> To implement this and make the implementation extensible, maybe we can
>> add an additional table named "SecurityGroupRuleData" with Key, Value
>> and ID in it. For an ICMP type RA filter, we can add key="icmp-type",
>> value="134", and the security group rule to the table. When other ICMP
>> type filters are needed, similar records can be stored. This table can
>> also be used for other firewall rule key values.
>> An API change is also needed.
>> Please let me know your comments about this blueprint.
>>
>> [1] https://blueprints.launchpad.net/neutron/+spec/security-group-icmp-type-filter
>> [2] https://review.openstack.org/#/c/72252/
>>
>> Thank you!
>> Xuhan Peng
>> Xuhan Peng
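
As a concrete illustration of the quoted proposal, the key/value table
might look roughly like this in Neutron's SQLAlchemy style (a hedged
sketch; the class name, table wiring and columns are assumptions, not the
blueprint's final schema):

    import sqlalchemy as sa

    from neutron.db import model_base
    from neutron.db import models_v2

    class SecurityGroupRuleData(model_base.BASEV2, models_v2.HasId):
        """Auxiliary key/value data attached to a security group rule."""
        rule_id = sa.Column(sa.String(36),
                            sa.ForeignKey('securitygrouprules.id',
                                          ondelete='CASCADE'),
                            nullable=False)
        key = sa.Column(sa.String(255), nullable=False)    # e.g. "icmp-type"
        value = sa.Column(sa.String(255), nullable=False)  # e.g. "134"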
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Sandy Walsh
Yep, great idea. Do it.

On 03/07/2014 02:53 AM, Chris Behrens wrote:
> 
> On Mar 6, 2014, at 11:09 AM, Russell Bryant  wrote:
> […]
>> I think a dedicated git repo for this makes sense.
>> openstack/nova-blueprints or something, or openstack/nova-proposals if
>> we want to be a bit less tied to launchpad terminology.
> 
> +1 to this whole idea.. and we definitely should have a dedicated repo for 
> this. I’m indifferent to its name. :)  Either one of those works for me.
> 
> - Chris
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Jay S Bryant
From:   Sean Dague 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   03/07/2014 04:25 AM
Subject:Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message 
improvements



On 03/06/2014 04:46 PM, James Carey wrote:
> Please consider a FFE for i18n Message improvements: 
> BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages
> 
> The base enablement for lazy translation has already been sync'd
> from oslo.   This patch was to enable lazy translation support in Nova.
>  It is titled re-enable lazy translation because this was enabled during
> Havana but was pulled due to issues that have since been resolved.
> 
> In order to enable lazy translation it is necessary to do the
> following things:
> 
>   (1) Fix a bug in oslo with respect to how keywords are extracted from
> the format strings when saving replacement text for use when the message
> translation is done.   This is
> https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
> on a fix for in oslo.  Once that is complete it will need to be sync'd
> into nova.
> 
>   (2) Remove concatenation (+) of translatable messages.  The current
> class that is used to hold the translatable message
> (gettextutils.Message) does not support concatenation.  There were a few
> cases in Nova where this was done and they are converted to other means
> of combining the strings in:
> https://review.openstack.org/#/c/78095 Remove use of concatenation on
> messages
> 
>   (3) Remove the use of str() on exceptions.  The intent of this is to
> return the message contained in the exception, but these messages may
> contain unicode, so str cannot be used on them and gettextutils.Message
> enforces this.  Thus these need
> to either be removed and allow python formatting to do the right thing,
> or changed to unicode().  Since unicode() will change to str() in Py3,
> the forward compatible six.text_type() is used instead.  This is done 
in: 
> https://review.openstack.org/#/c/78096 Remove use of str() on exceptions
> 
>   (4) The addition of the call that enables the use of lazy messages.
>  This is in:
> https://review.openstack.org/#/c/73706 Re-enable lazy translation.
> 
> Lazy translation has been enabled in the other projects so it would
> be beneficial to be consistent with the other projects with respect to
> message translation. 

Unless it has landed in *every other* integrated project besides Nova, I
don't find this compelling.

I have tested that the changes in (2) and (3) work
> when lazy translation is not enabled.  Thus if a problem is found, the
> two line change in (4) could be removed to get to the previous behavior.
> 
> I've been talking to Matt Riedemann and Dan Berrange about this.
>  Matt has agreed to be a sponsor.

If this is enabled in other projects, where is the Tempest scenario test
that actually demonstrates that this is working on real installs?

I get that everyone has features that didn't hit. However, now is not
the time for that; now is the time for people to get focused on bug
hunting. And especially if we are talking about *another* oslo sync.

-1

 -Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

[attachment "signature.asc" deleted by Jay S Bryant/Rochester/IBM] 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Sean,

I really feel like this is being made into a bigger deal than it needs to 
be.

The only other project that hasn't accepted this change is Heat and we are 
currently working to resolve that issue.

We were not aware of a requirement for Tempest tests for a change like 
this.  We would be willing to work on getting those added to resolve that 
issue.
If we were to get these added ASAP would you be willing to reconsider?

As far as the Oslo sync is concerned, it is a sync of gettextutils, which is 
an isolated module that doesn't have other dependencies and is pulling in 
a number of good bug fixes.
The removal of str() from LOGs and exceptions was a ticking time bomb that 
needed to be addressed anyway.

The change brings good value for users and fixes issues that have been 
lying around for quite some time.  We apologize that this is coming in so 
late in the game.
There were extraneous challenges that lead to the timing on this.

Thank you for your consideration on this issue.

Jay S. Bryant
   IBM Cinder Subject Matter Expert  &  Cinder Core Member
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Doug Hellmann
On Fri, Mar 7, 2014 at 10:47 AM, Steve Gordon  wrote:

> - Original Message -
> > On Thu, Mar 6, 2014 at 12:05 PM, Sean Dague  wrote:
>
> >
> > I think this is really worthwhile to try -- and it might offer an
> > interesting, readable history of decisions made. Plus how funny it was
> also
> > brought up at the Ops Summit. Convergence, cool.
> >
> > It also goes along with our hope to move API design docs into the repo.
> >
> > Other projects up to try it? The only possible addition is that we might
> > need to work out is cross-project blueprints and which repo should those
> > live in? We're doing well on integration, be careful about siloing.
> >
> > Anne
>
> TBH tracking cross-project blueprint impact is a problem *today*,
> typically we end up with either only one of the involved projects having a
> blueprint for the feature or all of them having one (if you are lucky they
> might at least link to the same design on the wiki but often not ;)). I am
> hoping that is something that can ultimately be addressed in storyboard but
> am unsure of how we would resolve that as part of this proposal, unless
> instead you had a central blueprint repository and used a tag in the review
> to indicate which projects are impacted/involved?
>

A central repository does have a certain appeal, especially from my
perspective in Oslo where the work that we do will have an increasing
impact on the projects that consume the libraries. It makes review
permissions on the designs a little tricky, but I think we can work that
out with agreements rather than having to enforce it in gerrit.

Doug



>
> Thanks,
>
> Steve
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-07 Thread Russell Bryant
On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
> On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
>> I'd Like to request A FFE for the remaining patches in the Ephemeral
>> RBD image support chain
>>
>> https://review.openstack.org/#/c/59148/
>> https://review.openstack.org/#/c/59149/
>>
>> are still open after their dependency
>> https://review.openstack.org/#/c/33409/ was merged.
>>
>> These should be low risk as:
>> 1. We have been testing with this code in place.
>> 2. It's nearly all contained within the RBD driver.
>>
>> This is needed as it implements an essential functionality that has
>> been missing in the RBD driver and this will become the second release
>> it's been attempted to be merged into.
> 
> Add me as a sponsor.

OK, great.  That's two.

We have a hard deadline of Tuesday to get these FFEs merged (regardless
of gate status).

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Ben Nemec

On 2014-03-06 18:16, Christopher Yeoh wrote:

On Thu, 06 Mar 2014 13:05:15 -0500
Sean Dague  wrote:

In today's Nova meeting a new thought occurred. We already have Gerrit
which is good for reviewing things. It gives you detailed commenting
abilities, voting, and history. Instead of attempting (and usually
failing) on doing blueprint review in launchpad (or launchpad + an
etherpad, or launchpad + a wiki page) we could do something like
follows:

1. create bad blueprint
2. create gerrit review with detailed proposal on the blueprint
3. iterate in gerrit working towards blueprint approval
4. once approved copy back the approved text into the blueprint (which
should now be sufficiently detailed)



+1. I think this could really help avoid wasted work for API related
changes in particular.

Just wondering if we need step 4 - or if the blueprint text should
always just link to either the unapproved patch for the text in
gerrit, or the text in repository once it's approved. Updates to
proposal would be proposed through the same process.

Chris


It makes sense to me to have whatever was approved in Gerrit be the 
canonical version.  Copy-pasting back to launchpad seems error prone.  
IIRC, blueprints have a field for a link to the spec, so maybe we should 
just link to the Gerrit content with that?


It would be nice to have a bot that can update bp status and such when a 
change is approved in Gerrit, but that's something that can happen in 
the future.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Mark McClain

On Mar 6, 2014, at 3:31 AM, Miguel Angel Ajo  wrote:

> 
> Yes, one option could be to coalesce all calls that go into
> a namespace into a shell script and run this in the
> rootwrap > ip netns exec
> 
> But we might find a mechanism to determine if some of the steps failed, and 
> what was the result / output, something like failing line + result code. I'm 
> not sure if we rely on stdout/stderr results at any time.
> 

This is exactly one of the items Carl Baldwin has been investigating.  Have you 
checked out his early work? [1]

mark

[1] https://review.openstack.org/#/c/67490/
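
For reference, the coalescing idea quoted above could look roughly like
this (a hedged sketch with a hypothetical helper; Carl's patch takes its
own approach, and error reporting is exactly the open question Miguel
raises):

    import subprocess

    def run_in_namespace(namespace, commands):
        """Run several commands under one 'ip netns exec' invocation.

        One sudo/rootwrap round trip instead of one per command; on
        failure, the index and exit code of the failing step are echoed
        so the caller can tell which step broke.
        """
        lines = ['%s || { echo "step %d failed: $?" >&2; exit 1; }'
                 % (cmd, i) for i, cmd in enumerate(commands)]
        return subprocess.check_output(
            ['sudo', 'ip', 'netns', 'exec', namespace, 'sh', '-c',
             '\n'.join(lines)],
            stderr=subprocess.STDOUT)

    # e.g.:
    # run_in_namespace('qrouter-1234',
    #                  ['ip link set lo up', 'ip addr show'])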



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-07 Thread Vladimir Kozhukalov
As far as I understand, there are 4 projects which are connected with this
topic. Another two projects which were not mentioned by Devananda are
https://github.com/rackerlabs/teeth-rest
https://github.com/rackerlabs/teeth-overlord

Vladimir Kozhukalov


On Fri, Mar 7, 2014 at 4:41 AM, Devananda van der Veen <
devananda@gmail.com> wrote:

> All,
>
> The Ironic team has been discussing the need for a "deploy agent" since
> well before the last summit -- we even laid out a few blueprints along
> those lines. That work was deferred  and we have been using the same deploy
> ramdisk that nova-baremetal used, and we will continue to use that ramdisk
> for the PXE driver in the Icehouse release.
>
> That being the case, at the sprint this week, a team from Rackspace shared
> work they have been doing to create a more featureful hardware agent and an
> Ironic driver which utilizes that agent. Early drafts of that work can be
> found here:
>
> https://github.com/rackerlabs/teeth-agent
> https://github.com/rackerlabs/ironic-teeth-driver
>
> I've updated the original blueprint and assigned it to Josh. For reference:
>
> https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk
>
> I believe this agent falls within the scope of the baremetal provisioning
> program, and welcome their contributions and collaboration on this. To that
> effect, I have suggested that the code be moved to a new OpenStack project
> named "openstack/ironic-python-agent". This would follow an independent
> release cycle, and reuse some components of tripleo (os-*-config). To keep
> the collaborative momentum up, I would like this work to be done now (after
> all, it's not part of the Ironic repo or release). The new driver which
> will interface with that agent will need to stay on github -- or in a
> gerrit feature branch -- until Juno opens, at which point it should be
> proposed to Ironic.
>
> The agent architecture we discussed is roughly:
> - a pluggable JSON transport layer by which the Ironic driver will pass
> information to the ramdisk. Their initial implementation is a REST API.
> - a collection of hardware-specific utilities (python modules, bash
> scripts, whatever) which take JSON as input and perform specific actions
> (whether gathering data about the hardware or applying changes to it).
> - and an agent which routes the incoming JSON to the appropriate utility,
> and routes the response back via the transport layer.
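
For illustration, the routing described above might reduce to something
like this (a hedged sketch; the command names and registry are invented,
not the actual teeth-agent API):

    HARDWARE_UTILS = {}

    def register(name):
        def wrapper(func):
            HARDWARE_UTILS[name] = func
            return func
        return wrapper

    @register('get_disks')
    def list_disks(params):
        # A real utility would inspect sysfs or shell out to tools here.
        return {'disks': ['/dev/sda']}

    def dispatch(message):
        """Route an incoming JSON command to the matching utility."""
        func = HARDWARE_UTILS[message['command']]
        return {'result': func(message.get('params', {}))}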
>
>
> -Devananda
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-07 Thread Chmouel Boudjnah
On Fri, Mar 7, 2014 at 12:30 AM, Matt Riedemann
wrote:

> What would be awesome in Juno is some CI around RBD/Ceph.  I'd feel a lot
> more comfortable with this code if we had CI running Tempest



Seb has been working to add ceph support into devstack which could be a
start, https://review.openstack.org/#/c/65113/ (which reminds me that I need
to review it :)

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-07 Thread Carl Baldwin
+1

On Fri, Mar 7, 2014 at 2:42 AM, Édouard Thuleau  wrote:
> +1
> I though it must merge as experimental for IceHouse, to let the community
> tries it and stabilizes it during the Juno release. And for the Juno
> release, we will be able to announce it as stable.
>
> Furthermore, the next work, will be to distribute the l3 stuff at the edge
> (compute) (called DVR) but this VRRP work will still needed for that [1].
> So if we merge L3 HA VRRP as experimental in I to be stable in J, will could
> also propose an experimental DVR solution for J and a stable for K.
>
> [1]
> https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit
>
> Regards,
> Édouard.
>
>
> On Thu, Mar 6, 2014 at 4:27 PM, Sylvain Afchain
>  wrote:
>>
>> Hi all,
>>
>> I would like to request a FFE for the following patches of the L3 HA VRRP
>> BP :
>>
>> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>>
>> https://review.openstack.org/#/c/64553/
>> https://review.openstack.org/#/c/66347/
>> https://review.openstack.org/#/c/68142/
>> https://review.openstack.org/#/c/70700/
>>
>> These should be low risk since HA is not enabled by default.
>> The server side code has been developed as an extension which minimizes
>> risk.
>> The agent side code introduces a few more changes, but only to filter
>> whether to apply the new HA behavior.
>>
>> I think it's a good idea to have this feature in Icehouse, perhaps even
>> marked as experimental,
>> especially considering the demand for HA in real world deployments.
>>
>> Here is a doc to test it :
>>
>>
>> https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug
>>
>> -Sylvain
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes March 6

2014-03-07 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-03-06-18.02.html
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-03-06-18.02.log.html

It was decided to rename project to Sahara due to the potential
trademark issues.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] Locking: A can of worms

2014-03-07 Thread Matthew Booth
We need locking in the VMware driver. There are 2 questions:

1. How much locking do we need?
2. Do we need single-node or multi-node locking?

I believe these are quite separate issues, so I'm going to try not to
confuse them. I'm going to deal with the first question first.

In reviewing the image cache ageing patch, I came across a race
condition between cache ageing and spawn(). One example of the race is:

Cache Ageing                          spawn()
* Check timestamps
* Delete timestamp
                                      * Check for image cache directory
* Delete directory
                                      * Use image cache directory

This will cause spawn() to explode. There are various permutations of
this. For example, the following are all possible:

* A simple failure of spawn() with no additional consequences.

* Calling volumeops.attach_disk_to_vm() with a vmdk_path that doesn't
exist. It's not 100% clear to me that ReconfigVM_Task will throw an
error in this case, btw, which would probably be bad.

* Leaving a partially deleted image cache directory which doesn't
contain the base image. This would be really bad.

The last comes about because recursive directory delete isn't atomic,
and may partially succeed, which is a tricky problem. However, in
discussion, Gary also pointed out that directory moves are not atomic
(see MoveDatastoreFile_Task). This is positively nasty. We already knew
that spawn() races with itself to create an image cache directory, and
we've hit this problem in practice. We haven't fixed the race, but we do
manage it. The management relies on the atomicity of a directory move.
Unfortunately it isn't atomic, which presents the potential problem of
spawn() attempting to use an incomplete image cache directory. We also
have the problem of 2 spawns using a linked clone image racing to create
the same resized copy.

We could go through all of the above very carefully to assure ourselves
that we've found all the possible failure paths, and that in every case
the problems are manageable and documented. However, I would place a
good bet that the above is far from a complete list, and we would have
to revisit it in its entirety every time we touched any affected code.
And that would be a lot of code.

We need something to manage concurrent access to resources. In all of
the above cases, if we make the rule that everything which uses an image
cache directory must hold a lock on it whilst using it, all of the above
problems go away. Reasoning about their correctness becomes the
comparatively simple matter of ensuring that the lock is used correctly.
Note that we need locking in both the single and multi node cases,
because even single node is multi-threaded.
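
For the single-node case, a minimal sketch of that rule using the lock
utilities nova already carries (assuming the oslo-incubator lockutils
module; the lock-name scheme and helper below are illustrative only):

    from nova.openstack.common import lockutils

    def with_image_cache_lock(image_id, func, *args, **kwargs):
        # Serialize every user of one image cache directory: ageing
        # checks/deletes, spawn()'s fetch, resized-copy creation, etc.
        # external=True takes a file lock, covering all workers on a node.
        @lockutils.synchronized(image_id, 'vmware-image-cache-',
                                external=True)
        def _locked():
            return func(*args, **kwargs)
        return _locked()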

The next question is whether that locking needs to be single node or
multi node. Specifically, do we currently, or do we plan to, allow an
architecture where multiple Nova nodes access the same datastore
concurrently? If we do, then we need to find a distributed locking
solution. Ideally this would use the datastore itself for lock
mediation. Failing that, apparently this tool is used elsewhere within
the project:

http://zookeeper.apache.org/doc/trunk/zookeeperOver.html

That would be an added layer of architecture and deployment complexity,
but if we need it, it's there.

If we can confidently say that 2 Nova instances should never be
accessing the same datastore (how about hot/warm/cold failover?), we can
use Nova's internal synchronisation tools. This would simplify matters
greatly!

I think this is one of those areas which is going to improve both the
quality of the driver, and the confidence of reviewers to merge changes.
Right now it takes a lot of brain cycles to work through all the various
paths of a race to work out if any of them are really bad, and it has to
be repeated every time you touch the code. A little up-front effort will
make a whole class of problems go away.

Matt
-- 
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-07 Thread Russell Haering
Vladimir,

Hey, I'm on the team working on this agent, let me offer a little history.
We were working on a system of our own for managing bare metal gear which
we were calling "Teeth". The project was mostly composed of:

1. teeth-agent: an on-host provisioning agent
2. teeth-overlord: a centralized automation mechanism

Plus a few other libraries (including teeth-rest, which contains some
common code we factored out of the agent/overlord).

A few weeks back we decided to shift our focus to using Ironic. At this
point we have effectively abandoned teeth-overlord, and are instead
focusing on upstream Ironic development, continued agent development and
building an Ironic driver capable of talking to our agent.

Over the last few days we've been removing non-OS-approved dependencies
from our agent: I think teeth-rest (and werkzeug, which it depends on) will
be the last to go when we replace it with Pecan+WSME sometime in the next
few days.

Thanks,
Russell


On Fri, Mar 7, 2014 at 8:26 AM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> As far as I understand, there are 4 projects which are connected with this
> topic. Another two projects which were not mentioned by Devananda are
> https://github.com/rackerlabs/teeth-rest
> https://github.com/rackerlabs/teeth-overlord
>
> Vladimir Kozhukalov
>
>
> On Fri, Mar 7, 2014 at 4:41 AM, Devananda van der Veen <
> devananda@gmail.com> wrote:
>
>> All,
>>
>> The Ironic team has been discussing the need for a "deploy agent" since
>> well before the last summit -- we even laid out a few blueprints along
>> those lines. That work was deferred  and we have been using the same deploy
>> ramdisk that nova-baremetal used, and we will continue to use that ramdisk
>> for the PXE driver in the Icehouse release.
>>
>> That being the case, at the sprint this week, a team from Rackspace
>> shared work they have been doing to create a more featureful hardware agent
>> and an Ironic driver which utilizes that agent. Early drafts of that work
>> can be found here:
>>
>> https://github.com/rackerlabs/teeth-agent
>> https://github.com/rackerlabs/ironic-teeth-driver
>>
>> I've updated the original blueprint and assigned it to Josh. For
>> reference:
>>
>> https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk
>>
>> I believe this agent falls within the scope of the baremetal provisioning
>> program, and welcome their contributions and collaboration on this. To that
>> effect, I have suggested that the code be moved to a new OpenStack project
>> named "openstack/ironic-python-agent". This would follow an independent
>> release cycle, and reuse some components of tripleo (os-*-config). To keep
>> the collaborative momentum up, I would like this work to be done now (after
>> all, it's not part of the Ironic repo or release). The new driver which
>> will interface with that agent will need to stay on github -- or in a
>> gerrit feature branch -- until Juno opens, at which point it should be
>> proposed to Ironic.
>>
>> The agent architecture we discussed is roughly:
>> - a pluggable JSON transport layer by which the Ironic driver will pass
>> information to the ramdisk. Their initial implementation is a REST API.
>> - a collection of hardware-specific utilities (python modules, bash
>> scripts, what ever) which take JSON as input and perform specific actions
>> (whether gathering data about the hardware or applying changes to it).
>> - and an agent which routes the incoming JSON to the appropriate utility,
>> and routes the response back via the transport layer.
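
As a rough sketch of what that routing layer amounts to (names here are
hypothetical, not the actual teeth-agent API):

    import json

    # Command name -> hardware utility. Both names and payloads are
    # illustrative.
    COMMANDS = {}

    def command(name):
        def register(func):
            COMMANDS[name] = func
            return func
        return register

    @command('get_hardware_info')
    def get_hardware_info(params):
        # A real utility would shell out to dmidecode, lshw, etc.
        return {'cpus': 8, 'memory_mb': 32768}

    def dispatch(payload):
        # Route incoming JSON to the matching utility and hand the JSON
        # response back to the transport layer (REST or otherwise).
        request = json.loads(payload)
        handler = COMMANDS[request['command']]
        return json.dumps(handler(request.get('params', {})))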
>>
>>
>> -Devananda
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-07 Thread Sean Dague
On 03/07/2014 11:16 AM, Russell Bryant wrote:
> On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
>> On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
>>> I'd Like to request A FFE for the remaining patches in the Ephemeral
>>> RBD image support chain
>>>
>>> https://review.openstack.org/#/c/59148/
>>> https://review.openstack.org/#/c/59149/
>>>
>>> are still open after their dependency
>>> https://review.openstack.org/#/c/33409/ was merged.
>>>
>>> These should be low risk as:
>>> 1. We have been testing with this code in place.
>>> 2. It's nearly all contained within the RBD driver.
>>>
>>> This is needed as it implements an essential functionality that has
>>> been missing in the RBD driver and this will become the second release
>>> it's been attempted to be merged into.
>>
>> Add me as a sponsor.
> 
> OK, great.  That's two.
> 
> We have a hard deadline of Tuesday to get these FFEs merged (regardless
> of gate status).
> 

As alt release manager, FFE approved based on Russell's approval.

The merge deadline for Tuesday is the release meeting, not end of day.
If it's not merged by the release meeting, it's dead, no exceptions.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Matt Riedemann



On 3/7/2014 9:39 AM, Matt Riedemann wrote:



On 3/7/2014 4:15 AM, Sean Dague wrote:

On 03/06/2014 04:46 PM, James Carey wrote:

 Please consider a FFE for i18n Message improvements:
BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages

 The base enablement for lazy translation has already been sync'd
from oslo.   This patch was to enable lazy translation support in Nova.
  It is titled re-enable lazy translation because this was enabled
during
Havana but was pulled due to issues that have since been resolved.

 In order to enable lazy translation it is necessary to do the
following things:

   (1) Fix a bug in oslo with respect to how keywords are extracted from
the format strings when saving replacement text for use when the message
translation is done.   This is
https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
on a fix for in oslo.  Once that is complete it will need to be sync'd
into nova.

   (2) Remove concatenation (+) of translatable messages.  The current
class that is used to hold the translatable message
(gettextutils.Message) does not support concatenation.  There were a few
cases in Nova where this was done and they are coverted to other means
of combining the strings in:
https://review.openstack.org/#/c/78095 - Remove use of concatenation on
messages

   (3) Remove the use of str() on exceptions.  The intent of this is to
return the message contained in the exception, but these messages may
contain unicode, so str cannot be used on them and gettextutils.Message
enforces this.  Thus these need
to either be removed and allow python formatting to do the right thing,
or changed to unicode().  Since unicode() will change to str() in Py3,
the forward compatible six.text_type() is used instead.  This is done
in:
https://review.openstack.org/#/c/78096 - Remove use of str() on exceptions

   (4) The addition of the call that enables the use of lazy messages.
  This is in:
https://review.openstack.org/#/c/73706 - Re-enable lazy translation.
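
For reference, the str() change in (3) boils down to the following pattern
(a minimal sketch; the exception class is made up):

    # -*- coding: utf-8 -*-
    import six

    class FakeError(Exception):
        pass

    err = FakeError(u'quota d\xe9pass\xe9')  # message contains non-ASCII
    # On Python 2, str(err) raises UnicodeEncodeError for a non-ASCII
    # message, and gettextutils.Message forbids str() outright.
    # six.text_type() works on both Python 2 and 3.
    message = six.text_type(err)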

 Lazy translation has been enabled in the other projects so it would
be beneficial to be consistent with the other projects with respect to
message translation.


Unless it has landed in *every other* integrated project besides Nova, I
don't find this compelling.

I have tested that the changes in (2) and (3) work

when lazy translation is not enabled.  Thus if a problem is found, the
two line change in (4) could be removed to get to the previous behavior.

 I've been talking to Matt Riedemann and Dan Berrange about this.
  Matt has agreed to be a sponsor.


If this is enabled in other projects, where is the Tempest scenario test
that actually demonstrates that this is working on real installs?

I get that everyone has features that didn't hit. However now is not
the time for that, now is the time for people to get focussed on bug
hunting. And especially if we are talking about *another* oslo sync.

-1

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The Tempest requirement just came up yesterday.  FWIW, this i18n stuff
has been working its way in since Grizzly, and the new requirement for
Tempest is new.  I'm not saying it's not valid, but the timing sucks -
but that's life.

Also, the oslo sync would be to one module, gettextutils, which I don't
think pulls in anything else from oslo.

Anyway, this is in Keystone, Glance, Cinder, Neutron and Ceilometer at
least.  Patches are working their way through Heat as I understand it.

I'm not trying to turn this into a crusade, just trying to get out what
I know about the current state of things.  I'll let Jim Carey or Jay
Bryant discuss it more since they've been more involved in the
blueprints across all the projects.



Dan (Smith), Sean, Jim, Jay and myself talked about this in IRC this 
morning and I didn't realize the recent oslo bug was uncovered due to 
something changing in how the audit logs were showing up after the 
initial nova patch was pushed up on 2/14.  So given that, given the 
number of things that would still need to land (oslo patch/sync plus 
possibly a hacking rule for str->six.text_type) AND the (despite being 
late) Tempest test requirement, I'm dropping sponsorship for this.


It simply came in too late and is too risky at this point, so let's 
revisit in Juno and get it turned on early so everyone can be 
comfortable with it.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Russell Bryant
On 03/07/2014 12:09 PM, Matt Riedemann wrote:
> Dan (Smith), Sean, Jim, Jay and myself talked about this in IRC this
> morning and I didn't realize the recent oslo bug was uncovered due to
> something changing in how the audit logs were showing up after the
> initial nova patch was pushed up on 2/14.  So given that, given the
> number of things that would still need to land (oslo patch/sync plus
> possibly a hacking rule for str->six.text_type) AND the (despite being
> late) Tempest test requirement, I'm dropping sponsorship for this.
> 
> It simply came in too late and is too risky at this point, so let's
> revisit in Juno and get it turned on early so everyone can be
> comfortable with it.

OK.  Thanks a lot to everyone who helped evaluate this.  Hopefully we
can get it sorted early in Juno, then.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy-migrate release impending

2014-03-07 Thread David Ripton

On 02/26/2014 11:24 AM, David Ripton wrote:

I'd like to release a new version of sqlalchemy-migrate in the next
couple of days.  The only major new feature is DB2 support.  If anyone
thinks this is a bad time, please let me know.


sqlalchemy-migrate 0.9 is now available via pip.  Notable new features 
include DB2 support and compatibility with SQLAlchemy-0.9.
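
For anyone picking up 0.9, the migration script convention is unchanged; a
minimal script still looks roughly like this (table and column names are
illustrative):

    from sqlalchemy import Column, Integer, MetaData, String, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        widgets = Table('widgets', meta,
                        Column('id', Integer, primary_key=True),
                        Column('note', String(255)))
        widgets.create()

    def downgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        widgets = Table('widgets', meta, autoload=True)
        widgets.drop()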


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-07 Thread Stan Lagun
Hello everyone!

Actually it is possible to construct a YAML-based DSL that has all the
constructs of a regular OOP language like Python and at the same time is
safe enough to be used for execution of untrusted code on a shared server.

Take a look at Murano DSL.
For example, the code below defines the class "Instance":
https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.services.Instance/manifest.yaml
The part that may be useful for Mistral is under Workflow key.
Here is some doc on the language:
https://wiki.openstack.org/wiki/Murano/DSL/Blueprint
Technically you can code any workflow in such a language that you could in
Python (just set aside all the OOP-related stuff), and it will look very
similar to Python but be safe, as you can only call APIs that are explicitly
provided to the DSL.

Hope this might be helpful for Mistral



On Fri, Mar 7, 2014 at 10:38 AM, Dmitri Zimine  wrote:

> I just moved the sample to Git; let's leverage git review for specific
> comments on the syntax.
>
>
> https://github.com/dzimine/mistral-workflows/commit/d8c4a8c845e9ca49f6ea94362cef60489f2a46a3
>
> DZ>
>
> On Mar 6, 2014, at 10:36 PM, Dmitri Zimine  wrote:
>
> Folks, thanks for the input!
>
> @Joe:
>
> Hopefully Renat covered the differences.  Yet I am interested in how the
> same workflow can be expressed as Salt state(s) or Ansible playbooks. Can
> you (or someone else who knows them well) take a stub?
>
>
> @Joshua
> I am still new to Mistral and learning, but I think it _is_ relevant to
> taskflow. Should we meet, and you help me catch up? Thanks!
>
> @Sandy:
> Aaahr, I used the "D" word?!  :) I keep on arguing that YAML workflow
> representation doesn't make DSL.
>
> And YES to the object model first to define the workflow, with
> YAML/JSON/PythonDSL/what-else as a syntax to build it. We are having these
> discussions on another thread and reviews.
>
> Basically, in order to make a grammar expressive enough to work across a
> web interface, we essentially end up writing a crappy language. Instead,
> we should focus on the callback hooks to something higher level to deal
> with these issues. Minstral should just say "I'm done this task, what
> should I do next?" and the callback service can make decisions on where
> in the graph to go next.
>
>
> There must be some misunderstanding. Mistral _does follow AWS / BPEL
> engines approach, it is both doing "I'm done this task, what should I do
> next?" (executor) and "callback service" (engine that coordinates the flow
> and keeps the state). Like decider and activity workers in AWS Simple
> Workflow.
>
> Engine maintains the state. Executors run tasks. Object model describes
> workflow as a graph of tasks with transitions, conditions, etc. YAML is one
> way to define a workflow. Nothing controversial :)
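
A toy rendering of that split, just to fix the terms (all names are
hypothetical, nothing to do with Mistral's real API):

    # Engine: owns workflow state and decides what runs next.
    class Engine(object):
        def __init__(self, graph):
            self.graph = graph          # task -> list of next tasks
            self.queue = ['start']

        def task_done(self, task, result):
            # "I'm done this task, what should I do next?"
            self.queue.extend(self.graph.get(task, []))

        def next_task(self):
            return self.queue.pop(0) if self.queue else None

    # Executor: runs one task at a time and reports back.
    def run(engine, actions):
        task = engine.next_task()
        while task is not None:
            engine.task_done(task, actions[task]())
            task = engine.next_task()

    actions = {'start': lambda: 'ok', 'finish': lambda: 'done'}
    run(Engine({'start': ['finish']}), actions)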
>
> @all:
>
> Wether one writes Python code or uses yaml? Depends on the user. There are
> good arguments for YAML. But if it's crappy, it looses. We want to see how
> it feels to write it. To me, mixed feelings so far, but promising. What do
> you guys think?
>
> Comments welcome here:
>
> https://github.com/dzimine/mistral-workflows/commit/d8c4a8c845e9ca49f6ea94362cef60489f2a46a3
>
>
> DZ>
>
>
> On Mar 6, 2014, at 10:41 AM, Sandy Walsh 
> wrote:
>
>
>
> On 03/06/2014 02:16 PM, Renat Akhmerov wrote:
>
> IMO, it looks not bad (sorry, I'm biased too) even now. Keep in mind this
> is not the final version, we keep making it more expressive and concise.
>
> As for killer object model it's not 100% clear what you mean. As always,
> devil in the details. This is a web service with all the consequences. I
> assume what you call "object model" here is nothing else but a python
> binding for the web service which we're also working on. Custom python
> logic you mentioned will also be possible to easily integrate. Like I said,
> it's still a pilot stage of the project.
>
>
> Yeah, the REST aspect is where the "tricky" part comes in :)
>
> Basically, in order to make a grammar expressive enough to work across a
> web interface, we essentially end up writing a crappy language. Instead,
> we should focus on the callback hooks to something higher level to deal
> with these issues. Minstral should just say "I'm done this task, what
> should I do next?" and the callback service can make decisions on where
> in the graph to go next.
>
> Likewise with things like sending emails from the backend. Minstral
> should just call a webhook and let the receiver deal with "active
> states" as they choose.
>
> Which is why modelling this stuff in code is usually always better and
> why I'd lean towards the TaskFlow approach to the problem. They're
> tackling this from a library perspective first and then (possibly)
> turning it into a service. Just seems like a better fit. It's also the
> approach taken by Amazon Simple Workflow and many BPEL engines.
>
> -S
>
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 06 Mar 2014, at 22:26, Joshua Harlow  wrote:
>
> That sounds a little similar to what

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Carl Baldwin
I had a reply drafted up to Miguel's original post and now I realize
that I never actually sent it.  :(  So, I'll clean up and update my
draft and send it.  This is a huge impediment to scaling Neutron and I
believe this needs some attention before Icehouse releases.

I believe this problem needs to be tackled on multiple fronts.  I have
been focusing mostly on the L3 agent because routers seem to take a
lot more commands to create and maintain than DHCP namespaces, in
general.  I've created a few patches to address the issues that I've
found.  The patch that Mark mentioned [1] is one potential part of the
solution but it turns out to be one of the more complicated patches to
work out and it keeps falling lower in priority for me.  I have come
back to it this week and will work on it through next week as a higher
priority task.

There are some other recent improvements that have merged to Icehouse
3:  I have changed the iptables lock to avoid contention [2], avoided
an unnecessary RPC call for each router processed [3], and avoided
some unnecessary ip netns calls to check existence of a device [4].  I
feel like I'm just slowly whittling away at the problem.

I'm also throwing around the idea of refactoring the L3 agent to give
precedence to RPC calls on a restart [5].  There is a very rough
preview up that I put up yesterday evening to get feedback on the
approach that I'm thinking of taking.  This should make the agent more
responsive to changes that come in through RPC.  This is less of a win
on reboot than on a simple agent process restart.

Another thing that we've found to help is to delete namespaces when a
router or dhcp server namespace is no longer needed [6].  We've
learned that having vestigial namespaces hanging around and
accumulating when they are no longer needed adversely affects the
performance of all "ip netns exec" commands.  There are some sticky
kernel issues related to using this patch.  That is why the default
configuration is to not delete namespaces.  See the "Related-Bug"
referenced by that commit message.

I'm intrigued by the idea of writing a rootwrap compatible alternative
in C.  It might even be possible to replace sudo + rootwrap
combination with a single, stand-alone executable with setuid
capability of elevating permissions on its own.  I know it breaks the
everything-in-python pattern that has been established but this sort
of thing is sensitive enough to start-up time that it may be worth it.
 I think we've shown that some of the OpenStack projects, namely Nova
and Neutron, run enough commands at scale that this performance really
matters.  My plate is full enough that I cannot imagine taking on this
kind of task at this time.  Does anyone have any interest in making
this a reality?

A C version of rootwrap could do some of the more common and simple
command verification and punt anything that fails to the python
version of rootwrap with an exec.  That would ease the burden of
keeping it in sync and feature compatible with the python version and
allow python developers to continue developing root wrap in python.
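
To make the split concrete, the fast path only needs something like the
following check, sketched here in python for brevity (this is not oslo
rootwrap's actual filter code; paths and the fallback invocation are
illustrative):

    import os
    import sys

    # Commands simple enough for the fast path to verify directly.
    SIMPLE_WHITELIST = {
        'ip': '/sbin/ip',
        'ovs-vsctl': '/usr/bin/ovs-vsctl',
    }

    def fast_path(argv):
        prog = SIMPLE_WHITELIST.get(argv[0]) if argv else None
        if prog is None:
            return False
        os.execv(prog, [prog] + argv[1:])   # does not return

    if not fast_path(sys.argv[1:]):
        # Punt anything complex to the full python rootwrap for real
        # filter matching (invocation simplified).
        os.execvp('neutron-rootwrap', ['neutron-rootwrap'] + sys.argv[1:])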

Carl

[1] https://review.openstack.org/#/c/67490/
[2] https://review.openstack.org/#/c/67558/
[3] https://review.openstack.org/#/c/66928/
[4] https://review.openstack.org/#/c/67475/
[5] https://review.openstack.org/#/c/78819/
[6] https://review.openstack.org/#/c/56114/

On Fri, Mar 7, 2014 at 9:22 AM, Mark McClain  wrote:
>
> On Mar 6, 2014, at 3:31 AM, Miguel Angel Ajo  wrote:
>
>>
>> Yes, one option could be to coalesce all calls that go into
>> a namespace into a shell script and run this in the
>> rootwrap > ip netns exec
>>
>> But we might find a mechanism to determine if some of the steps failed, and 
>> what was the result / output, something like failing line + result code. I'm 
>> not sure if we rely on stdout/stderr results at any time.
>>
>
> This is exactly one of the items Carl Baldwin has been investigating.  Have 
> you checked out his early work? [1]
>
> mark
>
> [1] https://review.openstack.org/#/c/67490/
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Locking: A can of worms

2014-03-07 Thread Joshua Harlow
The tooz folks have been thinking about this problem (as have I) for a
little while.

I've started something like: https://review.openstack.org/#/c/71167/

Also: https://wiki.openstack.org/wiki/StructuredWorkflowLocks

Perhaps we can get more movement on that (sorry I haven't had tons of time
to move forward on that review).

Something generic (aka a lock provider that can use different locking
backends to satisfy different desired lock 'requirements') might be useful
for everyone to help avoid these problems? Or at least allow individual
requirements for locks to be managed by a well supported library.

-Original Message-
From: Matthew Booth 
Organization: Red Hat
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Friday, March 7, 2014 at 8:53 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [nova][vmware] Locking: A can of worms

>We need locking in the VMware driver. There are 2 questions:
>
>1. How much locking do we need?
>2. Do we need single-node or multi-node locking?
>
>I believe these are quite separate issues, so I'm going to try not to
>confuse them. I'm going to deal with the first question first.
>
>In reviewing the image cache ageing patch, I came across a race
>condition between cache ageing and spawn(). One example of the race is:
>
>Cache Ageingspawn()
>* Check timestamps
>* Delete timestamp
>* Check for image cache directory
>* Delete directory
>* Use image cache directory
>
>This will cause spawn() to explode. There are various permutations of
>this. For example, the following are all possible:
>
>* A simple failure of spawn() with no additional consequences.
>
>* Calling volumeops.attach_disk_to_vm() with a vmdk_path that doesn't
>exist. It's not 100% clear to me that ReconfigVM_Task will throw an
>error in this case, btw, which would probably be bad.
>
>* Leaving a partially deleted image cache directory which doesn't
>contain the base image. This would be really bad.
>
>The last comes about because recursive directory delete isn't atomic,
>and may partially succeed, which is a tricky problem. However, in
>discussion, Gary also pointed out that directory moves are not atomic
>(see MoveDatastoreFile_Task). This is positively nasty. We already knew
>that spawn() races with itself to create an image cache directory, and
>we've hit this problem in practise. We haven't fixed the race, but we do
>manage it. The management relies on the atomicity of a directory move.
>Unfortunately it isn't atomic, which presents the potential problem of
>spawn() attempting to use an incomplete image cache directory. We also
>have the problem of 2 spawns using a linked clone image racing to create
>the same resized copy.
>
>We could go through all of the above very carefully to assure ourselves
>that we've found all the possible failure paths, and that in every case
>the problems are manageable and documented. However, I would place a
>good bet that the above is far from a complete list, and we would have
>to revisit it in its entirety every time we touched any affected code.
>And that would be a lot of code.
>
>We need something to manage concurrent access to resources. In all of
>the above cases, if we make the rule that everything which uses an image
>cache directory must hold a lock on it whilst using it, all of the above
>problems go away. Reasoning about their correctness becomes the
>comparatively simple matter of ensuring that the lock is used correctly.
>Note that we need locking in both the single and multi node cases,
>because even single node is multi-threaded.
>
>The next question is whether that locking needs to be single node or
>multi node. Specifically, do we currently, or do we plan to, allow an
>architecture where multiple Nova nodes access the same datastore
>concurrently. If we do, then we need to find a distributed locking
>solution. Ideally this would use the datastore itself for lock
>mediation. Failing that, apparently this tool is used elsewhere within
>the project:
>
>http://zookeeper.apache.org/doc/trunk/zookeeperOver.html
>
>That would be an added layer of architecture and deployment complexity,
>but if we need it, it's there.
>
>If we can confidently say that 2 Nova instances should never be
>accessing the same datastore (how about hot/warm/cold failover?), we can
>use Nova's internal synchronisation tools. This would simplify matters
>greatly!
>
>I think this is one of those areas which is going to improve both the
>quality of the driver, and the confidence of reviewers to merge changes.
>Right now it takes a lot of brain cycles to work through all the various
>paths of a race to work out if any of them are really bad, and it has to
>be repeated every time you touch the code. A little up-front effort will
>make a whole class of problems go away.
>
>Matt
>-- 
>Matthew Booth, RHCA, RHCSS
>Red Hat

Re: [openstack-dev] [nova][vmware] Locking: A can of worms

2014-03-07 Thread Doug Hellmann
On Fri, Mar 7, 2014 at 11:53 AM, Matthew Booth  wrote:

> We need locking in the VMware driver. There are 2 questions:
>
> 1. How much locking do we need?
> 2. Do we need single-node or multi-node locking?
>
> I believe these are quite separate issues, so I'm going to try not to
> confuse them. I'm going to deal with the first question first.
>
> In reviewing the image cache ageing patch, I came across a race
> condition between cache ageing and spawn(). One example of the race is:
>
> Cache Ageingspawn()
> * Check timestamps
> * Delete timestamp
> * Check for image cache directory
> * Delete directory
> * Use image cache directory
>
> This will cause spawn() to explode. There are various permutations of
> this. For example, the following are all possible:
>
> * A simple failure of spawn() with no additional consequences.
>
> * Calling volumeops.attach_disk_to_vm() with a vmdk_path that doesn't
> exist. It's not 100% clear to me that ReconfigVM_Task will throw an
> error in this case, btw, which would probably be bad.
>
> * Leaving a partially deleted image cache directory which doesn't
> contain the base image. This would be really bad.
>
> The last comes about because recursive directory delete isn't atomic,
> and may partially succeed, which is a tricky problem. However, in
> discussion, Gary also pointed out that directory moves are not atomic
> (see MoveDatastoreFile_Task). This is positively nasty. We already knew
> that spawn() races with itself to create an image cache directory, and
> we've hit this problem in practise. We haven't fixed the race, but we do
> manage it. The management relies on the atomicity of a directory move.
> Unfortunately it isn't atomic, which presents the potential problem of
> spawn() attempting to use an incomplete image cache directory. We also
> have the problem of 2 spawns using a linked clone image racing to create
> the same resized copy.
>
> We could go through all of the above very carefully to assure ourselves
> that we've found all the possible failure paths, and that in every case
> the problems are manageable and documented. However, I would place a
> good bet that the above is far from a complete list, and we would have
> to revisit it in its entirety every time we touched any affected code.
> And that would be a lot of code.
>
> We need something to manage concurrent access to resources. In all of
> the above cases, if we make the rule that everything which uses an image
> cache directory must hold a lock on it whilst using it, all of the above
> problems go away. Reasoning about their correctness becomes the
> comparatively simple matter of ensuring that the lock is used correctly.
> Note that we need locking in both the single and multi node cases,
> because even single node is multi-threaded.
>
> The next question is whether that locking needs to be single node or
> multi node. Specifically, do we currently, or do we plan to, allow an
> architecture where multiple Nova nodes access the same datastore
> concurrently. If we do, then we need to find a distributed locking
> solution. Ideally this would use the datastore itself for lock
> mediation. Failing that, apparently this tool is used elsewhere within
> the project


We do have such a case in nova (
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L179),
which led to us restoring file locking in lockutils (
https://review.openstack.org/#/c/77297/).
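
For the single-node case the pattern is roughly the following (a sketch;
it assumes the oslo-incubator lockutils module synced into nova, and the
helper is hypothetical):

    from nova.openstack.common import lockutils

    def age_cache_entry(cache_dir, image_id):
        # external=True takes a file-based lock, so all workers on the
        # node serialize on the same image cache entry.
        with lockutils.lock('image-cache-%s' % image_id,
                            lock_file_prefix='nova-', external=True):
            prune_if_unused(cache_dir)  # hypothetical helper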

There is also work being done to create a library called "tooz" (
http://git.openstack.org/cgit/stackforge/tooz) for abstracting some of the
distributed coordination patterns to allow deployers to choose between
platforms like zookeeper and etcd, just as we do with virt drivers and
message queues. It would be good if any work in this area coordinated with
the tooz team -- I anticipate the library being brought into Oslo in the
future.

Doug



> :
>
> http://zookeeper.apache.org/doc/trunk/zookeeperOver.html
>
> That would be an added layer of architecture and deployment complexity,
> but if we need it, it's there.
>
> If we can confidently say that 2 Nova instances should never be
> accessing the same datastore (how about hot/warm/cold failover?), we can
> use Nova's internal synchronisation tools. This would simplify matters
> greatly!
>
> I think this is one of those areas which is going to improve both the
> quality of the driver, and the confidence of reviewers to merge changes.
> Right now it takes a lot of brain cycles to work through all the various
> paths of a race to work out if any of them are really bad, and it has to
> be repeated every time you touch the code. A little up-front effort will
> make a whole class of problems go away.
>
> Matt
> --
> Matthew Booth, RHCA, RHCSS
> Red Hat Engineering, Virtualisation Team
>
> GPG ID:  D33C3490
> GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
>
> 

Re: [openstack-dev] [nova] bugs that needs to be reviewed

2014-03-07 Thread Ben Nemec
Please see Thierry's post on review requests: 
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html


-Ben

On 2014-03-07 02:41, sahid wrote:

Greetings,

There are two fixes for bugs that need to be reviewed. One for
the feature shelve instance and the other one for the API to get
the list of migrations in progress.

These two bugs are marked as high and medium because they break
features. The code was pushed several months ago, so it would be great
if some cores could take a look.

Fix: Unshelving an instance uses original image
https://review.openstack.org/#/c/72407/

Fix: Fix unicode error in os-migrations
https://review.openstack.org/#/c/61717/

Thanks,
s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-07 Thread Collins, Sean
Hi,

I know everyone is very busy working on the I-3 release, but I wanted
to solicit feedback on my developer documentation review.

https://review.openstack.org/#/c/72428/

Currently, we have a couple +1's and a +2 from Nachi - Akihirio Motoki
and Henry Gessau brought up valid points about the TESTING file, but I
felt that they can be addressed in other patchsets - since I don't feel
confident in making the changes that they suggested, and some of the
comments refer to issues that exist in the 

In addition, we're almost to the goal line - I'd rather get some docs
merged and update in another patchset, rather than update this review
with another patchset, that clears out all the +2's and +1's and then
wait for everyone to circle back and re-approve.

I understand this is an unusual request and I am asking for special
treatment, but Neutron is severely lacking in documentation for new
developers and I think this patch moves us closer to fixing it.

Thoughts?

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-07 Thread Jay S Bryant
From:   Russell Bryant 
To: openstack-dev@lists.openstack.org, 
Date:   03/07/2014 11:31 AM
Subject:Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message 
improvements



On 03/07/2014 12:09 PM, Matt Riedemann wrote:
> Dan (Smith), Sean, Jim, Jay and myself talked about this in IRC this
> morning and I didn't realize the recent oslo bug was uncovered due to
> something changing in how the audit logs were showing up after the
> initial nova patch was pushed up on 2/14.  So given that, given the
> number of things that would still need to land (oslo patch/sync plus
> possibly a hacking rule for str->six.text_type) AND the (despite being
> late) Tempest test requirement, I'm dropping sponsorship for this.
> 
> It simply came in too late and is too risky at this point, so let's
> revisit in Juno and get it turned on early so everyone can be
> comfortable with it.

OK.  Thanks a lot to everyone who helped evaluate this.  Hopefully we
can get it sorted early in Juno, then.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


All,

Thanks for taking the time to consider/discuss this.

Some good lessons learned on this end.

Look forward to getting this in early in Juno.

Jay S. Bryant
   IBM Cinder Subject Matter Expert  &  Cinder Core Member
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-07 Thread Nader Lahouti
1) Does it mean an interim solution is to have our own plugin (and have all
the changes in it) and declare it as core_plugin instead of Ml2Plugin?

2) The other issue, as I mentioned before, is that the extension(s) are not
showing up in the result, for instance when create_network is called
[result = super(Ml2Plugin, self).create_network(context, network)], and
as a result they cannot be used in the mechanism drivers when needed.
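
For context, the driver-side code this breaks looks roughly like the
following (a sketch; the attribute name and helper are hypothetical):

    from neutron.plugins.ml2 import driver_api as api

    class ProfileMechanismDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def create_network_precommit(self, context):
            # context.current is the dict built by _make_network_dict();
            # with process_extensions=False the extension attribute
            # never appears here.
            profile = context.current.get('config_profile')
            if profile:
                self._bind_profile(profile)

        def _bind_profile(self, profile):
            pass  # hypothetical vendor-specific handling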

It looks like process_extensions was disabled when the fix for Bug 1201957
was committed; here is the change.
Any idea why it is disabled?

--
Avoid performing extra query for fetching port security binding

Bug 1201957

Add a relationship performing eager load in Port and Network
models, thus preventing the 'extend' function from performing
an extra database query.

Also fixes a comment in securitygroups_db.py

Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (1 parent ca421e7; master, 2013.2)
Salvatore Orlando (salv-orlando) authored 8 months ago

neutron/db/db_base_plugin_v2.py:

@@ -995,7 +995,7 @@ def create_network(self, context, network):
 995               'status': constants.NET_STATUS_ACTIVE}
 996           network = models_v2.Network(**args)
 997           context.session.add(network)
 998 -         return self._make_network_dict(network)
 998 +         return self._make_network_dict(network,
                                              process_extensions=False)
 999
 1000      def update_network(self, context, id, network):
 1001          n = network['network']
---


Regards,
Nader.





On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura wrote:

>
> On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
>
> Yes, that sounds good to be able to load extensions from a mechanism
> driver.
>
> But another problem I think we have with the ML2 plugin is the list of
> extensions supported by default [1].
> The extensions should only be loaded by MDs and the ML2 plugin should only
> implement the Neutron core API.
>
>
> Keep in mind that ML2 supports multiple MDs simultaneously, so no single
> MD can really control what set of extensions are active. Drivers need to be
> able to load private extensions that only pertain to that driver, but we
> also need to be able to share common extensions across subsets of drivers.
> Furthermore, the semantics of the extensions need to be correct in the face
> of multiple co-existing drivers, some of which know about the extension,
> and some of which don't. Getting this properly defined and implemented
> seems like a good goal for juno.
>
> -Bob
>
>
>
>  Any though ?
> Édouard.
>
>  [1]
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87
>
>
>
> On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki  wrote:
>
>> Hi,
>>
>> I think it is better to continue the discussion here. It is a good log :-)
>>
>> Eugine and I talked the related topic to allow drivers to load
>> extensions)  in Icehouse Summit
>> but I could not have enough time to work on it during Icehouse.
>> I am still interested in implementing it and will register a blueprint on
>> it.
>>
>> etherpad in icehouse summit has baseline thought on how to achieve it.
>> https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
>> I hope it is a good start point of the discussion.
>>
>> Thanks,
>> Akihiro
>>
>> On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti 
>> wrote:
>> > Hi Kyle,
>> >
>> > Just wanted to clarify: Should I continue using this mailing list to
>> post my
>> > question/concerns about ML2? Please advise.
>> >
>> > Thanks,
>> > Nader.
>> >
>> >
>> >
>> > On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery > >
>> > wrote:
>> >>
>> >> Thanks Edgar, I think this is the appropriate place to continue this
>> >> discussion.
>> >>
>> >>
>> >> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana 
>> wrote:
>> >>>
>> >>> Nader,
>> >>>
>> >>> I would encourage you to first discuss the possible extension with the
>> >>> ML2 team. Rober and Kyle are leading this effort and they have a IRC
>> meeting
>> >>> every week:
>> >>> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>> >>>
>> >>> Bring your concerns on this meeting and get the right feedback.
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Edgar
>> >>>
>> >>> From: Nader Lahouti 
>> >>> Reply-To: OpenStack List 
>> >>> Date: Thursday, March 6, 2014 12:14 PM
>> >>> To: OpenStack List 
>> >>> Subject: Re: [openstack-dev] [Neutron][ML2]
>> >>>
>> >>> Hi Aaron,
>> >>>
>> >>> I appreciate your reply.
>> >>>
>> >>> Here is some more details on what I'm trying to do:
>> >>> I need to add new attribute to the network resource using extensions
>> >>> (i.e. network config profile) and use it in the mechanism driver (in
>> the
>> >>> create_network_precommit/postcommit).
>> >>> If I use current implementation of Ml2Plugin, when a call is made to
>> >>> mechanism driver's create_network_precommit/postcommit the new
>> attribute is
>> >>> not included in the 'mech_context'
>> >>> Here is code from Ml2Plugin:
>>

Re: [openstack-dev] [Neutron] Developer documentation

2014-03-07 Thread Edgar Magana
I agree this is a valid request. As a team we should consider this special
request and help if possible.
Especially on documentation!!  :-)

Edgar

On 3/7/14 10:12 AM, "Collins, Sean" 
wrote:

>Hi,
>
>I know everyone is very busy working on the I-3 release, but I wanted
>to solicit feedback on my developer documentation review.
>
>https://review.openstack.org/#/c/72428/
>
>Currently, we have a couple +1's and a +2 from Nachi - Akihirio Motoki
>and Henry Gessau brought up valid points about the TESTING file, but I
>felt that they can be addressed in other patchsets - since I don't feel
>confident in making the changes that they suggested, and some of the
>comments refer to issues that exist in the
>
>In addition, we're almost to the goal line - I'd rather get some docs
>merged and update in another patchset, rather than update this review
>with another patchset, that clears out all the +2's and +1's and then
>wait for everyone to circle back and re-approve.
>
>I understand this is an unusual request and I am asking for special
>treatment, but Neutron is severely lacking in documentation for new
>developers and I think this patch moves us closer to fixing it.
>
>Thoughts?
>
>-- 
>Sean M. Collins
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Locking: A can of worms

2014-03-07 Thread Shawn Hartsock
On Fri, Mar 7, 2014 at 12:55 PM, Doug Hellmann
 wrote:
> http://git.openstack.org/cgit/stackforge/tooz


This is good stuff. The only solution I could come up with for image
cache management was to have yet another service that you would set up
to manage the shared image cache resource. The current implementation
that I've seen relies on the file system resource
(under the image caches) to handle the semaphores between the Nova
nodes.

This is complicated, but not impossible to do correctly. The current
design reminds me of a distributed fault tolerant system I worked on
in the 1990's ... it was designed in the 1980s and used almost
identical locking semantics relying on directory names to determine
which process held the lock. So this problem set isn't anything terribly
new and the proposed solutions aren't anything terribly strange or
clever.

What I like about the tooz solution is we've found a way to address
that same old coordination problem without building yet another
service AND without monkeying about with distributed file system locks
read by NFS or some other file system sharing mechanism. Instead,
we've got this back-channel for coordinating cooperative services.

I like this solution a lot because it steps outside the initial
problem definition and comes up with something that truly solves the
problem, the issue of "whose turn is it?"

If someone could point me at some nice primer code for each system I
think I might be able to render a more informed opinion.
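
In lieu of a primer, the lock API in the stackforge repo looks roughly like
this (a sketch from a quick read; treat the names and the in-memory 'zake'
test backend as assumptions):

    from tooz import coordination

    # A real deployment would point at zookeeper, memcached, etc.
    coord = coordination.get_coordinator('zake://', b'nova-compute-1')
    coord.start()

    lock = coord.get_lock(b'image-cache-entry')
    if lock.acquire(blocking=True):
        try:
            pass  # work on the shared resource
        finally:
            lock.release()

    coord.stop()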

Thanks.



-- 
# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-07 Thread Akihiro Motoki
Hi Carl,

I really appreciate you writing this up!
I have no problem removing my -1 on the review.
I just thought it was an easy item to fix.

It is a great starting line towards useful developer documentation.

Thanks,
Akihiro


On Sat, Mar 8, 2014 at 3:12 AM, Collins, Sean
 wrote:
> Hi,
>
> I know everyone is very busy working on the I-3 release, but I wanted
> to solicit feedback on my developer documentation review.
>
> https://review.openstack.org/#/c/72428/
>
> Currently, we have a couple +1's and a +2 from Nachi - Akihirio Motoki
> and Henry Gessau brought up valid points about the TESTING file, but I
> felt that they can be addressed in other patchsets - since I don't feel
> confident in making the changes that they suggested, and some of the
> comments refer to issues that exist in the
>
> In addition, we're almost to the goal line - I'd rather get some docs
> merged and update in another patchset, rather than update this review
> with another patchset, that clears out all the +2's and +1's and then
> wait for everyone to circle back and re-approve.
>
> I understand this is an unusual request and I am asking for special
> treatment, but Neutron is severely lacking in documentation for new
> developers and I think this patch moves us closer to fixing it.
>
> Thoughts?
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TROVE] Trove capabilities

2014-03-07 Thread Michael Basnight
Denis Makogon  writes:

> Let me elaborate a bit.
>
> [...]
>
> Without storing static API contract capabilities gonna be 100% corrupted,
> because there's no API description inside Trove.

Im not sure i understand what you mean here.

> About reloading, file should not be reloaded, file is a static unmodifiable
> template. As the response, users see the combination of the described
> contract (from file) and all capabilities inside the database table (as I
> described in my first email). To simplify file usage we could deserialize
> it into a dict object and make it a singleton to use less memory.

Im not a fan, at all, of a file based config for this. It will be
exposed to users, and can be modeled in the database just fine.
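
Something along these lines, for instance (a rough sketch, not the actual
schema from Kaleb's patch):

    from sqlalchemy import Boolean, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Capability(Base):
        __tablename__ = 'capabilities'
        id = Column(Integer, primary_key=True)
        name = Column(String(255), unique=True)
        enabled = Column(Boolean, default=True)

    class CapabilityOverride(Base):
        # Per-datastore-version exceptions to the defaults above.
        __tablename__ = 'capability_overrides'
        id = Column(Integer, primary_key=True)
        capability_id = Column(Integer)
        datastore_version_id = Column(String(64))
        enabled = Column(Boolean)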

> About Kaleb's design and the whole approach. Without blocking/allowing certain
> capabilities for datastores, the whole approach is nothing more than a "help
> document" and this is not acceptable.

Incorrect. Its a great first approach. We can use this, and the entries
in the db, to conditionally expose things in the extensions.

> Approach of capabilities is: "In runtime block/allow tasks to be executed
> over certain objects". That why i stand on re-reviewing whole design from
> scratch with all contributors.

We can do this _using_ kalebs work in the future, when we refactor
extensions. Weve had this conversation already, a few months ago when
the redis guys first came aboard. re-review is not needed. Kalebs stuff
is completely valid, and anything you are talking about can be used by
the user facing capabilities.

ON A SEPARATE NOTE

This is one of the hardest message threads ive ever read.  If you guys
(everyone on the thread, im talking to you) dont use bottom posting, and
quote the sections you are answering, then its nearly impossible to
follow. Plz follow the conventions the rest of the OpenStack community
uses. So if there is more to discuss, lets continue, using above as a
proper way to do it.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-07 Thread Akihiro Motoki
Hi Robert,

Thanks for the clarification. I understand the motivation.

I think the problem can be split into two categories:
(a) user configurable rules vs infra enforced rule, and
(b) DHCP/RA service exists inside or outside of Neutron

Regarding (a), I believe DHCP or RA related rules is better to be handled
by the infra side because it is required to ensure DHCP/RA works well.
I don't think it is a good idea to delegate users to configure rule to
allow them.
It works as long as DHCP/RA service works inside OpenStack.
This is the main motivation of my previous question.

On the other hand, there is no way to cooperate with DHCP/RA
services outside of OpenStack at now. This blocks the usecase in your mind.
It is true that the current Neutron cannot works with dhcp server
outside of neutron.

I agree that adding a security group rule to allow RA is reasonable as
a workaround.
However, for a long time solution, it is better to explore a way to configure
infra-required rules.

Thanks,
Akihiro


On Sat, Mar 8, 2014 at 12:50 AM, Robert Li (baoli)  wrote:
> Hi Akihiro,
>
> In the case of IPv6 RA, its source IP is a Link Local Address from the
> router's RA advertising interface. This LLA address is automatically
> generated and not saved in the neutron port DB. We are exploring the idea
> of retrieving this LLA if a native openstack RA service is running on the
> subnet.
>
> Would SG be needed with a provider net in which the RA service is running
> external to openstack?
>
> In the case of IPv4 DHCP, the dhcp port is created by the dhcp service,
> and the dhcp server ip address is retrieved from this dhcp port. If the
> dhcp server is running outside of openstack, and if we'd only allow dhcp
> packets from this server, how is it done now?
>
> thanks,
> Robert
>
> On 3/7/14 12:00 AM, "Akihiro Motoki"  wrote:
>
>>I wonder why RA needs to be exposed by security group API.
>>Does a user need to configure security group to allow IPv6 RA? or
>>should it be allowed in infra side?
>>
>>In the current implementation DHCP packets are allowed by provider
>>rule (which is hardcoded in neutron code now).
>>I think the role of IPv6 RA is similar to DHCP in IPv4. If so, we
>>don't need to expose RA in security group API.
>>Am I missing something?
>>
>>Thanks,
>>Akihiro
>>
>>On Mon, Mar 3, 2014 at 10:39 PM, Xuhan Peng  wrote:
>>> I created a new blueprint [1] which is triggered by the requirement to
>>>allow
>>> IPv6 Router Advertisement security group rule on compute node in my
>>>on-going
>>> code review [2].
>>>
>>> Currently, only security group rule direction, protocol, ethertype and
>>>port
>>> range are supported by neutron security group rule data structure. To
>>>allow
>>> Router Advertisement coming from network node or provider network to VM
>>>on
>>> compute node, we need to specify ICMP type to only allow RA from known
>>>hosts
>>> (network node dnsmasq binded IP or known provider gateway).
>>>
>>> To implement this and make the implementation extensible, maybe we can
>>>add
>>> an additional table named "SecurityGroupRuleData" with Key, Value and ID
>>>in
>>> it. For ICMP type RA filter, we can add key="icmp-type" value="134", and
>>> security group rule to the table. When other ICMP type filters are
>>>needed,
>>> similar records can be stored. This table can also be used for other
>>> firewall rule key values.
>>> API change is also needed.
>>>
>>> Please let me know your comments about this blueprint.
>>>
>>> [1]
>>>
>>>https://blueprints.launchpad.net/neutron/+spec/security-group-icmp-type-f
>>>ilter
>>> [2] https://review.openstack.org/#/c/72252/
>>>
>>> Thank you!
>>> Xuhan Peng
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] datastore version-specific disk-image-builder element

2014-03-07 Thread Michael Basnight
"Lowery, Mathew"  writes:

> We have a need to support multiple versions of a single datastore with
> a single set of disk-image-builder elements. Let's take MySQL on
> Ubuntu as an example. 5.5 is already present in
> trove-integration. Let's say we want to add 5.6 to coexist. Let's also
> assume that simply asking for the existing apt repository for MySQL
> 5.6 is not possible (i.e. another apt repo is required). In
> trove-integration/scripts/files/elements, two approaches come to mind.
>
> Approach #1: modify ubuntu-mysql/install.d/10-mysql with a case statement
> Approach #2: add ubuntu-mysql-5.6/install.d/10-mysql

At present I much prefer Approach 2. If there is anything we need thats
a common element between both mysql, we can use ubuntu-mysql for
that. Then ubuntu-mysql-X is just the special bits around the indiv
version.
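
Concretely, that might look like the following layout (illustrative;
element-deps is the standard way for one element to pull in a shared one):

    elements/
      ubuntu-mysql/                 # bits shared by every version
        install.d/05-mysql-common
      ubuntu-mysql-5.6/
        element-deps                # contains the line "ubuntu-mysql"
        install.d/10-mysql          # 5.6-specific apt repo + packages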


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] New name voting results (round #2)

2014-03-07 Thread Dina Belova
Hello everyone!

I've closed round #2 of new name choosing voting for Climate:

http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_e2f53ce1b331590f

*Blazar* is the most popular variant, heh :)

1. *Blazar* (not defeated in any contest vs. another choice)
2. Forecast, loses to Blazar by 6-5
3. Prophecy, loses to Forecast by 6-3
4. Reserva, loses to Prophecy by 5-4
5. Cast, loses to Reserva by 7-4
I remind you that we needed a new name because of some PyPI and readthedocs
repo issues plus possible trademark issues :)

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-07 Thread Miguel Angel Ajo


Good work on the documentation, it's something that will really help
new neutron developers.



On 03/07/2014 07:44 PM, Akihiro Motoki wrote:

Hi Carl,

I really appreciate you writing this up!
I have no problem removing my -1 on the review.
I just thought it was an easy item to fix.

It is a great starting line towards useful developer documentation.

Thanks,
Akihiro


On Sat, Mar 8, 2014 at 3:12 AM, Collins, Sean
 wrote:

Hi,

I know everyone is very busy working on the I-3 release, but I wanted
to solicit feedback on my developer documentation review.

https://review.openstack.org/#/c/72428/

Currently, we have a couple +1's and a +2 from Nachi - Akihirio Motoki
and Henry Gessau brought up valid points about the TESTING file, but I
felt that they can be addressed in other patchsets - since I don't feel
confident in making the changes that they suggested, and some of the
comments refer to issues that exist in the

In addition, we're almost to the goal line - I'd rather get some docs
merged and update in another patchset, rather than update this review
with another patchset, that clears out all the +2's and +1's and then
wait for everyone to circle back and re-approve.

I understand this is an unusual request and I am asking for special
treatment, but Neutron is severely lacking in documentation for new
developers and I think this patch moves us closer to fixing it.

Thoughts?

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Blueprint cinder-rbd-driver-qos

2014-03-07 Thread Josh Durgin

On 03/05/2014 07:24 AM, git harry wrote:

Hi,

https://blueprints.launchpad.net/cinder/+spec/cinder-rbd-driver-qos

I've been looking at this blueprint with a view to contributing on it, assuming 
I can take it. I am unclear as to whether or not it is still valid. I can see 
that it was registered around a year ago and it appears the functionality is 
essentially already supported by using multiple backends.

Looking at the existing drivers that have qos support it appears IOPS etc are 
available for control/customisation. As I understand it, Ceph has no QoS-type
control built-in and creating pools using different hardware is as granular as 
it gets. The two don't quite seem comparable to me so I was hoping to get some 
feedback, as to whether or not this is still useful/appropriate, before 
attempting to do any work.


Ceph does not currently have any qos support, but relies on QEMU's io
throttling, which Cinder and Nova can configure.
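
For example, the QEMU-side throttling is typically driven through Cinder
QoS specs with a front-end consumer, roughly like this (keys and values
are illustrative):

    cinder qos-create rbd-limits consumer=front-end \
        read_iops_sec=500 write_iops_sec=500
    cinder qos-associate <qos-id> <volume-type-id>

Nova then applies those limits to the guest's virtual disk when the volume
is attached.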

There is interest in adding better throttling to Ceph itself though,
since writes from QEMU may be combined before writing to Ceph when
caching is used. There was a session on this at the Ceph developer
summit earlier this week:

https://wiki.ceph.com/Planning/CDS/CDS_Giant_%28Mar_2014%29#rbd_qos

Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-07 Thread Tim Bell

The recent operator gathering 
(https://etherpad.openstack.org/p/operators-feedback-mar14) arrived at a similar
proposal, based on Blueprint-on-Blueprints (BoB for short).

The aim was that operators of production OpenStack clouds should engage to give 
input at an early stage

- Raising concerns at the specification stage will be much more productive than 
after the code is written
- Operators may not have the in-depth python skills for productive 
participation in the later stages of the review
- Appropriate credit can also be granted to those people refining the 
requirements/specification process. Spotting an issue before coding starts is a 
major contribution.

The sort of items we were worrying about were:

Monitoring
Metrics
Alerts
Health check/http ping
Logging
Lifecycle
 HA
 test
 upgrade
 roll-back
 restart
Documentation
object state/flow
usage
debug

We also need to review the weighting. A blueprint should not be indefinitely 
postponed due to a differing view on requirements between new function and the 
ability to run the existing environments.

Tim

> -Original Message-
> From: Murray, Paul (HP Cloud Services) [mailto:pmur...@hp.com]
> Sent: 07 March 2014 14:57
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint 
> review & approval
> 
> The principle is excellent, I think there are two points/objectives worth 
> keeping in mind:
> 
> 1. We need an effective way to make and record the design decisions 2. We 
> should make the whole development process easier
> 
> In my mind the point of the design review part is to agree up front something 
> that should not be over-turned (or is hard to over-turn)
> late in patch review. I agree with others that a patch should not be blocked 
> (or should be hard to block) because the reviewer
> disagrees with an agreed design decision. Perhaps an author can ask for a -2 
> or -1 to be removed if they can point out the agreed
> design decision, without having to reopen the debate.
> 
> I also think that blueprints tend to have parts that should be agreed up 
> front, like changes to apis, database migrations, or
> integration points in general. They also have parts that don't need to be 
> agreed up front; there is no point in a heavyweight process 
> for everything. Some blueprints might not need any of this at all. For 
> example, a new plugin for the filter scheduler might not need a 
> lot of design review, or at least, adding the design review is unlikely to 
> ease the development cycle.
> 
> So, we could use the blueprint template to identify things that need to be 
> agreed in the design review. These could include anything
> the proposer wants agreed up front and possibly specifics of a defined set of 
> integration points. Some blueprints might have nothing
> to be formally agreed in design review. Additionally, sometimes plans change, 
> so it should be possible to return to design review.
> Possibly the notion of a design decision could be broken out from a blueprint 
> in the same way as a patch-set? Maybe it only makes 
> sense to do it as a whole? Certainly design decisions should be made in 
> relation to other blueprints and so it should be easy to see
> that there are two blueprints making related design decisions.
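
(A hypothetical sketch of what such a template section could look like --
the field names here are invented for illustration, not an existing
template:

    Blueprint: example-feature
    Agreed in design review:
      - REST API changes: POST /servers/{id}/action, new "revert" action
      - Database migrations: none
      - Upgrade/roll-back impact: none
    Left to patch review:
      - internal refactoring, naming, test structure

Anything under "Agreed in design review" is what an author could point back
to when a late -1/-2 re-opens a settled question.)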
> 
> The main point is that there should be an identifiable set of design 
> decisions that have been reviewed and agreed, and that can easily be found.
> 
> **The reward for authors in doing this is the author can defend their 
> patch-set against late objections to design decisions.** **The
> reward for reviewers is they get a way to know what has been agreed in 
> relation to a blueprint.**
> 
> On another point...
> ...sometimes I fall foul of writing code using an approach I have seen in the 
> code base, only to be told it was decided not to do it that
> way anymore. Sometimes I had no way of knowing that, and exactly what has 
> been decided, when it was decided, and who did the
> deciding has been lost. Clearly the PTL and ML do help out here, but it would 
> be helpful if such things were easy to find out. These
> kinds of design decision should be reviewed and recorded.
> 
> Again, I think it is excellent that this is being addressed.
> 
> Paul.
> 
> 
> 
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 07 March 2014 12:01
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint 
> review & approval
> 
> On 03/07/2014 06:30 AM, Thierry Carrez wrote:
> > Sean Dague wrote:
> >> One of the issues that the Nova team has definitely hit is Blueprint
> >> overload. At some point there were over 150 blueprints. Many of them
> >> were a single sentence.
> >>
> >> The results of this have been that design review today is typically
> >> not happening

[openstack-dev] Constructive Conversations

2014-03-07 Thread Kurt Griffiths
Folks,

I’m sure that I’m not the first person to bring this up, but I’d like to get 
everyone’s thoughts on what concrete actions we, as a community, can take to 
improve the status quo.

There have been a variety of instances where community members have expressed 
their ideas and concerns via email or at a summit, or simply submitted a patch 
that perhaps challenges someone’s opinion of The Right Way to Do It, and 
responses to that person have been far less constructive than they could have 
been[1]. In an open community, I don’t expect every person who comments on a ML 
post or a patch to be congenial, but I do expect community leaders to lead by 
example when it comes to creating an environment where every person’s voice is 
valued and respected.

What if every time someone shared an idea, they could do so without fear of 
backlash and bullying? What if people could raise their concerns without being 
summarily dismissed? What if “seeking first to understand”[2] were a core value 
in our culture? It would not only accelerate our pace of innovation, but also 
help us better understand the needs of our cloud users, helping ensure we 
aren’t just building OpenStack in the right way, but also building the right 
OpenStack.

We need open minds to build an open cloud.

Many times, we do have wonderful, constructive discussions, but the times we 
don’t cause wounds in the community that take a long time to heal. 
Psychologists tell us that it takes a lot of good experiences to make up for 
one bad one. I will be the first to admit I’m not perfect. Communication is 
hard. But I’m convinced we can do better. We must do better.

How can we build on what is already working, and make the bad experiences as 
rare as possible?

A few ideas to seed the discussion:

  *   Identify a set of core values that the community already embraces for the 
most part, and put them down “on paper.”[3] Leaders can keep these values fresh 
in everyone’s minds by (1) leading by example, and (2) referring to them 
regularly in conversations and talks.
  *   PTLs can add mentoring skills and a mindset of "seeking first to 
understand” to their list of criteria for evaluating proposals to add a 
community member to a core team.
  *   Get people together in person, early and often. Mid-cycle meetups and 
mini-summits provide much higher-resolution communication channels than email 
and IRC, and are great ways to clear up misunderstandings, build relationships 
of trust, and generally get everyone pulling in the same direction.

What else can we do?

Kurt

[1] There are plenty of examples, going back years. Anyone who has been in the 
community very long will be able to recall some to mind. Recent ones I thought 
of include Barbican’s initial request for incubation on the ML, dismissive and 
disrespectful exchanges in some of the design sessions in Hong Kong (bordering 
on personal attacks), and the occasional “WTF?! This is the dumbest idea ever!” 
patch comment.
[2] https://www.stephencovey.com/7habits/7habits-habit5.php
[3] We already have a code of conduct, but I think 
a list of core values would be easier to remember and allude to in day-to-day 
discussions. I’m trying to think of ways to make this idea practical. We need 
to stand up for our values, not just say we have them.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-07 Thread Collins, Sean
On Fri, Mar 07, 2014 at 02:20:24PM EST, Miguel Angel Ajo wrote:
> 
> Good work on the documentation, it's something that will really help
> new neutron developers.
> 

Thank you everyone! I am very happy!

I'll start cranking out some patchsets to address some of the comments
that Henry Gessau, Akihiro Motoki and others highlighted.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Joe Gordon
On Fri, Mar 7, 2014 at 12:27 AM, Miguel Angel Ajo wrote:

>
> I'm really happy to see that I'm not the only one concerned about
> performance.
>
>
> I'm reviewing the thread, and summarizing / replying to multiple people on
> the thread:
>
>
> Ben Nemec,
>
> * Thanks for pointing us to the previous thread about this topic:
> http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html
>
>
> Rick Jones,
>
> * iproute commit  f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8  upstream,
> I have to check if it's on my system.
>
> * Very interesting investigation about sudo:
>
> http://www.sudo.ws/repos/sudo/rev/e9dc28c7db60 -- this is as important
> as the bottleneck in rootwrap when you start having lots of interfaces.
> Good catch!
>
> * To your question: my times are only from neutron-dhcp-agent &
> neutron-l3-agent start to completion; system boot times are excluded from
> the measurement (that's <1min).
>
> * About the Linux networking folks not exposing API interfaces to avoid
> lock-in: in the end they're already locked in with the cmd api interface;
> if they made an API at the same level, it shouldn't be that bad... but of
> course, it's not free...
>
>
> Joe Gordon,
>
> * yes, pypy start time is too slow, and I must definitely investigate
> the RPython toolchain.
>
> * Ideally, I agree that an automated py->C solution would be
> the best from the OpenStack project's point of view. Have you had any
> experience using such a toolchain? Could you point me to some
> example?
>

Sorry I am afraid I don't have experience with this or any examples.


>
> * shedskin seems to do this kind of translation, for a limited python
> subset, which would mean rewriting rootwrap's python to accommodate
> such limitations.
>
>
RPython is a subset of Python, so rewriting will be needed for pypy as well.


>
> If no tool offers the translation we need, or if the result is slow:
>
> I'm not against a rewrite of rootwrap in C/C++, if we have developers
> on the project with C/C++ experience, especially related to security.
> I have such experience, and I'm sure there are more around (even if
> not all openstack developers talk C). But, that doesn't exclude that
> we maintain a rootwrap in python to foster innovation around the tool.
> (here I agree with Vishvananda Ishaya)
>
>
> Solly Ross,
>  I haven't tried cython, but I will check it in a few minutes.
>
>
> Iwamoto Toshihiro,
>
>  Thanks for pointing us to "ip netns exec" too, I wonder if that's
> related to the iproute upstream change Rick Jones was talking about.
>
>
> Cheers,
> Miguel Ángel.
>
>
>
>
>
>
>
> On 03/06/2014 09:31 AM, Miguel Angel Ajo wrote:
>
>>
>> On 03/06/2014 07:57 AM, IWAMOTO Toshihiro wrote:
>>
>>> At Wed, 05 Mar 2014 15:42:54 +0100,
>>> Miguel Angel Ajo wrote:
>>>
 3) I also find 10 minutes a long time to setup 192 networks/basic tenant
 structures, I wonder if that time could be reduced by conversion
 of system process calls into system library calls (I know we don't have
 libraries for iproute, iptables?, and many other things... but it's a
 problem that's probably worth looking at.)

>>>
>>> Try benchmarking
>>>
>>> $ sudo ip netns exec qfoobar /bin/echo
>>>
>>
>> You're totally right, that takes the same time as rootwrap itself. It's
>> another point to think about from the performance point of view.
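
(A rough way to measure this overhead directly -- purely illustrative:

    $ time sudo ip netns exec qfoobar /bin/echo
    $ time sudo /bin/echo

The difference between the two is roughly the per-invocation cost of the
namespace setup that "ip netns exec" performs.)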
>>
>> An interesting read:
>> http://man7.org/linux/man-pages/man8/ip-netns.8.html
>>
>> ip netns does a lot of mounts around to simulate a normal environment,
>> whereas a netns-aware application could avoid all this.
>>
>>
>>> Network namespace switching costs almost as much as a rootwrap
>>> execution, IIRC.
>>>
>>> Execution coalesceing is not enough in this case and we would need to
>>> change how Neutron issues commands, IMO.
>>>
>>
>> Yes, one option could be to coalesce all calls that go into
>> a namespace into a shell script, and run this through a single
>> rootwrap -> ip netns exec invocation.
>>
>> But we might find a mechanism to determine if some of the steps failed,
>> and what was the result / output, something like failing line + result
>> code. I'm not sure if we rely on stdout/stderr results at any time.
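
(An illustrative sketch of that idea in Python, with invented helper names
-- not Neutron code: batch the per-namespace commands into one script that
reports the failing step, so only a single rootwrap/"ip netns exec" call is
needed:

    import subprocess

    def run_in_namespace(ns, commands):
        # Each command aborts the batch and reports its index on failure.
        script = "\n".join(
            '%s || { echo "FAILED_STEP %d" >&2; exit 1; }' % (cmd, i)
            for i, cmd in enumerate(commands))
        proc = subprocess.Popen(
            ["sudo", "ip", "netns", "exec", ns, "sh", "-c", script],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        return proc.returncode, out, err

    # One namespace switch for the whole batch:
    rc, out, err = run_in_namespace(
        "qfoobar", ["ip link set lo up", "ip addr show lo"])

Parsing FAILED_STEP out of stderr gives the "failing line + result code"
mentioned above.)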
>>
>>
>>
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-07 Thread Joe Gordon
On Fri, Mar 7, 2014 at 1:26 AM, Qin Zhao  wrote:

> Hi Joe,
> Maybe my example is very rare. However, I think a new type of 'in-place'
> snapshot will have other advantages. For instance, the hypervisor can
> support saving memory content in the snapshot file, so that the user can
> revert his VM to a running state. In this way, the user does not need to
> start each application again. Everything is there. The user can continue
> his work very easily. If the user spawns and boots a new VM, he will need
> a lot of time to resume his work. Does that make sense?
>

I am not sure I follow. I think the use case you have brought up can be
solved inside of the VM with something like unionfs
(http://unionfs.filesystems.org/), a filesystem that supports
snapshotting.
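
(To make the in-guest approach concrete -- a rough sketch, assuming a guest
kernel with overlayfs; aufs/unionfs offer equivalent options. Writes land in
tmpfs and are discarded on every reboot:

    # inside the guest; /srv/data stays pristine
    mount -t tmpfs tmpfs /tmp/overlay
    mkdir /tmp/overlay/upper /tmp/overlay/work
    mount -t overlay overlay \
        -o lowerdir=/srv/data,upperdir=/tmp/overlay/upper,workdir=/tmp/overlay/work \
        /mnt/scratch

Applications work in /mnt/scratch; rebooting the instance reverts it to the
pristine /srv/data contents with no Nova or Cinder involvement.)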



>
>
> On Fri, Mar 7, 2014 at 2:20 PM, Joe Gordon  wrote:
>
>> On Wed, Mar 5, 2014 at 11:45 AM, Qin Zhao  wrote:
>> > Hi Joe,
>> > For example, I used to use a private cloud system, which will calculate
>> > charges bi-weekly, and its charging formula looks like "Total_charge =
>> > Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
>> > Volume_number*C4". Those Instance/Image/Volume numbers are the counts of
>> > those objects that the user created within these two weeks. And it also has
>> > quota to limit total image size and total volume size. That formula is
>> > not very exact, but you can see that it regards each of my 'create'
>> > operations as a 'ticket', and will charge all those tickets, plus the
>> > instance duration
>>
>> Charging for VM creation is not very cloud-like. Cloud
>> instances should be treated as ephemeral, something that you can
>> throw away and recreate at any time. Additionally, a cloud should charge
>> for resources used (instance CPU hours, network load, etc.), not API
>> calls (at least not in any meaningful amount).
>>
>> > fee. In order to reduce the expense of my department, I am asked not to
>> > create instances very frequently, and not to create too many images and
>> > volumes. The image quota is not very big. And I would never be permitted
>> > to exceed the quota, since it requires additional dollars.
>> >
>> >
>> > On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon 
>> wrote:
>> >>
>> >> On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao  wrote:
>> >> > Hi Joe,
>> >> > If we assume the user is willing to create a new instance, the
>> workflow
>> >> > you
>> >> > are saying is exactly correct. However, what I am assuming is that
>> the
>> >> > user
>> >> > is NOT willing to create a new instance. If Nova can revert the
>> existing
>> >> > instance, instead of creating a new one, it will become the
>> alternative
>> >> > way
>> >> > utilized by those users who are not allowed to create a new instance.
>> >> > Both paths lead to the target. I think we can not assume all the
>> people
>> >> > should walk through path one and should not walk through path two.
>> Maybe
>> >> > creating new instance or adjusting the quota is very easy in your
>> point
>> >> > of
>> >> > view. However, the real use case is often limited by business
>> process.
>> >> > So I
>> >> > think we may need to consider that some users can not or are not
>> allowed
>> >> > to
>> >> > creating the new instance under specific circumstances.
>> >> >
>> >>
>> >> What sort of circumstances would prevent someone from deleting and
>> >> recreating an instance?
>> >>
>> >> >
>> >> > On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon 
>> >> > wrote:
>> >> >>
>> >> >> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao 
>> wrote:
>> >> >> > Hi Joe, my meaning is that cloud users may not hope to create new
>> >> >> > instances
>> >> >> > or new images, because those actions may require additional
>> approval
>> >> >> > and
>> >> >> > additional charging. Or, due to instance/image quota limits, they
>> can
>> >> >> > not do
>> >> >> > that. Anyway, from user's perspective, saving and reverting the
>> >> >> > existing
>> >> >> > instance will be preferred sometimes. Creating a new instance
>> will be
>> >> >> > another story.
>> >> >> >
>> >> >>
>> >> >> Are you saying some users may not be able to create an instance at
>> >> >> all? If so why not just control that via quotas.
>> >> >>
>> >> >> Assuming the user has the power to rights and quota to create one
>> >> >> instance and one snapshot, your proposed idea is only slightly
>> >> >> different then the current workflow.
>> >> >>
>> >> >> Currently one would:
>> >> >> 1) Create instance
>> >> >> 2) Snapshot instance
>> >> >> 3) Use instance / break instance
>> >> >> 4) delete instance
>> >> >> 5) boot new instance from snapshot
>> >> >> 6) goto step 3
>> >> >>
>> >> >> From what I gather you are saying that instead of 4/5 you want the
>> >> >> user to be able to just reboot the instance. I don't think such a
>> >> >> subtle change in behavior is worth a whole new API extension.
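
(The current workflow in CLI terms, for concreteness -- a sketch using
standard novaclient commands with made-up names:

    $ nova boot --image base-img --flavor m1.small myvm    # 1) create
    $ nova image-create myvm myvm-snap                     # 2) snapshot
    # 3) use / break the instance
    $ nova delete myvm                                     # 4) delete
    $ nova boot --image myvm-snap --flavor m1.small myvm   # 5) boot from snapshot

The proposal effectively collapses steps 4 and 5 into a single "revert".)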
>> >> >>
>> >> >> >
>> >> >> > On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon > >
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> On Tue, Mar 4, 2014 at 1:06 AM,

Re: [openstack-dev] [Climate] New name voting results (round #2)

2014-03-07 Thread Sylvain Bauza
Galactic epic win!
Let's start calling our project Blazar (Climate) now. Very impressive word
:-)


2014-03-07 20:13 GMT+01:00 Dina Belova :

> Hello everyone!
>
> I've closed round #2 of new name choosing voting for Climate:
>
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_e2f53ce1b331590f
>
> *Blazar* is the most popular variant, heh :)
>
> 1. *Blazar* (Not defeated in any contest vs. another choice)
> 2. Forecast, loses to Blazar by 6-5
> 3. Prophecy, loses to Forecast by 6-3
> 4. Reserva, loses to Prophecy by 5-4
> 5. Cast, loses to Reserva by 7-4
> I remind you that we needed a new name because of some PyPI and readthedocs
> repo issues + possible trademark issues :)
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-07 Thread Vladimir Kozhukalov
Russell,

Great to hear you are going to move towards Pecan+WSME. Yesterday I had a
look at the teeth projects, and in the next few days I am going to start
contributing. First of all, I think we need to settle the questions around
the pluggable architecture. I've created a wiki page about the Ironic python
agent: https://wiki.openstack.org/wiki/Ironic-python-agent.

And a question about contributing: have you managed to send a pull request
to openstack-infra to move this project into github.com/stackforge? Or are
we supposed to settle everything (werkzeug -> Pecan/WSME, architectural
questions) before we move this agent to stackforge?





Vladimir Kozhukalov


On Fri, Mar 7, 2014 at 8:53 PM, Russell Haering wrote:

> Vladimir,
>
> Hey, I'm on the team working on this agent, let me offer a little history.
> We were working on a system of our own for managing bare metal gear which
> we were calling "Teeth". The project was mostly composed of:
>
> 1. teeth-agent: an on-host provisioning agent
> 2. teeth-overlord: a centralized automation mechanism
>
> Plus a few other libraries (including teeth-rest, which contains some
> common code we factored out of the agent/overlord).
>
> A few weeks back we decided to shift our focus to using Ironic. At this
> point we have effectively abandoned teeth-overlord, and are instead
> focusing on upstream Ironic development, continued agent development and
> building an Ironic driver capable of talking to our agent.
>
> Over the last few days we've been removing non-OS-approved dependencies
> from our agent: I think teeth-rest (and werkzeug, which it depends on) will
> be the last to go when we replace it with Pecan+WSME sometime in the next
> few days.
>
> Thanks,
> Russell
>
>
> On Fri, Mar 7, 2014 at 8:26 AM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> As far as I understand, there are 4 projects which are connected with
>> this topic. Another two projects which were not mentioned by Devananda are
>> https://github.com/rackerlabs/teeth-rest
>> https://github.com/rackerlabs/teeth-overlord
>>
>> Vladimir Kozhukalov
>>
>>
>> On Fri, Mar 7, 2014 at 4:41 AM, Devananda van der Veen <
>> devananda@gmail.com> wrote:
>>
>>> All,
>>>
>>> The Ironic team has been discussing the need for a "deploy agent" since
>>> well before the last summit -- we even laid out a few blueprints along
>>> those lines. That work was deferred, and we have been using the same deploy
>>> ramdisk that nova-baremetal used, and we will continue to use that ramdisk
>>> for the PXE driver in the Icehouse release.
>>>
>>> That being the case, at the sprint this week, a team from Rackspace
>>> shared work they have been doing to create a more featureful hardware agent
>>> and an Ironic driver which utilizes that agent. Early drafts of that work
>>> can be found here:
>>>
>>> https://github.com/rackerlabs/teeth-agent
>>> https://github.com/rackerlabs/ironic-teeth-driver
>>>
>>> I've updated the original blueprint and assigned it to Josh. For
>>> reference:
>>>
>>> https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk
>>>
>>> I believe this agent falls within the scope of the baremetal
>>> provisioning program, and welcome their contributions and collaboration on
>>> this. To that effect, I have suggested that the code be moved to a new
>>> OpenStack project named "openstack/ironic-python-agent". This would follow
>>> an independent release cycle, and reuse some components of tripleo
>>> (os-*-config). To keep the collaborative momentum up, I would like this
>>> work to be done now (after all, it's not part of the Ironic repo or
>>> release). The new driver which will interface with that agent will need to
>>> stay on github -- or in a gerrit feature branch -- until Juno opens, at
>>> which point it should be proposed to Ironic.
>>>
>>> The agent architecture we discussed is roughly:
>>> - a pluggable JSON transport layer by which the Ironic driver will pass
>>> information to the ramdisk. Their initial implementation is a REST API.
>>> - a collection of hardware-specific utilities (python modules, bash
>>> scripts, whatever) which take JSON as input and perform specific actions
>>> (whether gathering data about the hardware or applying changes to it).
>>> - and an agent which routes the incoming JSON to the appropriate
>>> utility, and routes the response back via the transport layer.
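
(A rough sketch of that routing layer -- the module layout and names here
are invented for illustration, not taken from the actual agent code:

    import json

    HANDLERS = {}

    def handler(name):
        # Register a hardware-specific utility under a command name.
        def register(func):
            HANDLERS[name] = func
            return func
        return register

    @handler("get_disks")
    def get_disks(params):
        # Would shell out to e.g. lsblk/smartctl and parse the output.
        return {"disks": []}

    def dispatch(payload):
        # The transport layer (REST in the initial implementation) hands
        # the agent JSON; route it and return a JSON response.
        msg = json.loads(payload)
        result = HANDLERS[msg["name"]](msg.get("params", {}))
        return json.dumps({"name": msg["name"], "result": result})

Swapping the REST transport for another implementation then only changes
how "payload" arrives and where the response is written.)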
>>>
>>>
>>> -Devananda
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>