Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Wang, Shane
Lianhao Lu, Shuangtai Tian, and I are also willing to join the team and 
contribute, since we are also working on the scheduler, but it seems the team 
is full. You can put us on the backup list.

Thanks.
--
Shane

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Friday, November 22, 2013 4:59 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest 
proposal for an external scheduler in our lifetime

https://etherpad.openstack.org/p/icehouse-external-scheduler

I'm looking for 4-5 folk who have:
 - modest Nova skills
 - time to follow a fairly mechanical (but careful and detailed work
needed) plan to break the status quo around scheduler extraction

And of course, discussion galore about the idea :)

Cheers,
Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cloning vs copying images

2013-12-02 Thread Avishay Traeger
Dmitry,
You are correct.  I made the same comment on the review before seeing this 
thread.  Let's see how both patches turn out and we'll choose one. :)

Thanks,
Avishay



From:   Dmitry Borodaenko 
To: openstack-dev@lists.openstack.org, 
Date:   12/02/2013 09:32 PM
Subject:[openstack-dev] [Cinder] Cloning vs copying images



Hi OpenStack, particularly Cinder backend developers,

Please consider the following two competing fixes for the same problem:

https://review.openstack.org/#/c/58870/
https://review.openstack.org/#/c/58893/

The problem being fixed is that some backends, specifically Ceph RBD,
can only boot from volumes created from images in a certain format,
which in RBD's case is RAW. When an image in a different format gets
cloned into a volume, the volume cannot be booted from. The obvious
solution is to refuse the clone operation and copy/convert the image
instead.

And now the principal question: is it safe to assume that this
restriction applies to all backends? Should the fix enforce copy of
non-RAW images for all backends? Or should the decision whether to
clone or copy the image be made in each backend?

The first fix puts this logic into the RBD backend, and makes the changes
necessary for all other backends to have enough information to make a
similar decision if necessary. The problem with this approach is that
it's relatively intrusive, because the driver clone_image() method
signature has to be changed.

The second fix has significantly fewer code changes, but it does
prevent cloning non-RAW images for all backends. I am not sure whether
this is a real problem or not.

Can anyone point at a backend that can boot from a volume cloned from
a non-RAW image? I can think of one candidate: GPFS is a file-based
backend, and GPFS has a file clone operation. Is the GPFS backend able
to boot from, say, a QCOW2 volume?
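
For illustration only, here is a minimal, self-contained sketch of the
"decide per backend" idea (the helper and the format list below are
hypothetical, not either of the actual patches): a driver advertises which
image formats it can boot from after a clone, and the caller copies and
converts everything else.

    def should_clone(image_meta, bootable_formats=('raw',)):
        # Return True only if cloning this image would yield a bootable
        # volume on the backend; otherwise the image must be copied and
        # converted instead.
        return image_meta.get('disk_format') in bootable_formats

    # A QCOW2 image on an RBD-like backend gets copied, not cloned:
    print(should_clone({'disk_format': 'qcow2'}))  # False
    print(should_clone({'disk_format': 'raw'}))    # True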

Thanks,

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-02 Thread Vui Chiap Lam
Hi Daniel,

I too found the original bp a little hard to follow, so thanks for
writing up the wiki! I see that the wiki is now linked to the BP, 
which is great as well.

The ability to express CPU topology constraints for the guests
has real-world use, and several drivers, including VMware, can definitely 
benefit from it.

If I understand correctly, in addition to being an elaboration of the
BP text, the wiki also adds the following:

1. Instead of returning the best matching (num_sockets (S),
   cores_per_socket (C), threads_per_core (T)) tuple, all applicable
   (S,C,T) tuples are returned, sorted by S, then C, then T.
2. A mandatory topology can be provided in the topology computation.

I like 2. because there are multiple reasons why all of a hypervisor's
CPU resources cannot be allocated to a single virtual machine.
Given that the mandatory (I prefer maximal) topology is probably fixed
per hypervisor, I wonder whether this information should also be used at
scheduling time to eliminate incompatible hosts outright.

As for 1., because of the order of precedence of the fields in the
(S,C,T) tuple, I am not sure how the preferred_topology comes into
play. Is it meant to help favor alternative values of S?

Also it might be good to describe a case where returning a list of
(S,C,T) instead of the best match is necessary. It seems deciding what to
pick other than the first item in the list requires logic similar to
that used to arrive at the list in the first place.
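
As a concrete illustration of the enumeration being discussed (a sketch
only; it is not the proposed shared code, and the preferred/mandatory
handling is omitted), the following lists every (S,C,T) tuple whose
product matches the flavour's vCPU count under the max-sockets /
max-cores / max-threads constraints, sorted by S, then C, then T:

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Enumerate all (sockets, cores, threads) combinations whose
        # product equals the vCPU count and which respect the maximums.
        topologies = []
        for s in range(1, max_sockets + 1):
            for c in range(1, max_cores + 1):
                for t in range(1, max_threads + 1):
                    if s * c * t == vcpus:
                        topologies.append((s, c, t))
        return sorted(topologies)

    # Example: 8 vCPUs constrained to max 2 sockets, 8 cores, 2 threads.
    print(possible_topologies(8, 2, 8, 2))
    # [(1, 4, 2), (1, 8, 1), (2, 2, 2), (2, 4, 1)]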

Cheers,
Vui

- Original Message -
| From: "Daniel P. Berrange" 
| To: openstack-dev@lists.openstack.org
| Sent: Monday, December 2, 2013 7:43:58 AM
| Subject: Re: [openstack-dev] [Nova] Blueprint: standard specification of 
guest CPU topology
| 
| On Tue, Nov 19, 2013 at 12:15:51PM +, Daniel P. Berrange wrote:
| > For attention of maintainers of Nova virt drivers
| 
| Anyone from the Hyper-V or VMware drivers wish to comment on this
| proposal?
| 
| 
| > A while back there was a bug requesting the ability to set the CPU
| > topology (sockets/cores/threads) for guests explicitly
| > 
| >https://bugs.launchpad.net/nova/+bug/1199019
| > 
| > I countered that setting explicit topology doesn't play well with
| > booting images with a variety of flavours with differing vCPU counts.
| > 
| > This led to the following change which used an image property to
| > express maximum constraints on CPU topology (max-sockets/max-cores/
| > max-threads) which the libvirt driver will use to figure out the
| > actual topology (sockets/cores/threads)
| > 
| >   https://review.openstack.org/#/c/56510/
| > 
| > I believe this is a prime example of something we must co-ordinate
| > across virt drivers to maximise happiness of our users.
| > 
| > There's a blueprint but I find the description rather hard to
| > follow
| > 
| >   https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
| > 
| > So I've created a standalone wiki page which I hope describes the
| > idea more clearly
| > 
| >   https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology
| > 
| > Launchpad doesn't let me link the URL to the blueprint since I'm not
| > the blueprint creator :-(
| > 
| > Anyway this mail is to solicit input on the proposed standard way to
| > express this which is hypervisor portable, and the addition of some
| > shared code for doing the calculations which virt driver impls can
| > just call into rather than re-inventing it
| > 
| > I'm looking for buy-in to the idea from the maintainers of each
| > virt driver that this conceptual approach works for them, before
| > we go merging anything with the specific impl for libvirt.
| 
| 
| Daniel
| --
| |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
| |: http://libvirt.org  -o- http://virt-manager.org :|
| |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
| |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
| 
| ___
| OpenStack-dev mailing list
| OpenStack-dev@lists.openstack.org
| http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev] How to modify a bug across multiple repo?

2013-12-02 Thread wu jiang
Hi all,

Recently, I found a bug in the Cinder API layer, but the fix also involves
changes to CinderClient & Tempest.
So I'm confused about how to commit it. Can 'git --dependence' work across
different repos?

Any help would be much appreciated.

Regards,
wingwj
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread John Wood
Hello folks,

I've created a blueprint to move over to oslo.messaging per Jarret's comments 
below: https://blueprints.launchpad.net/barbican

I'd also note that, out of the box, Barbican does not use Celery, in order to 
simplify demo/standalone deployments; asynchronous calls simply invoke their 
worker tasks directly.

Thanks,
John

From: cloudk...@googlegroups.com [cloudk...@googlegroups.com] on behalf of 
Jarret Raim [jarret.r...@rackspace.com]
Sent: Monday, December 02, 2013 9:16 PM
To: Fox, Kevin M; OpenStack Development Mailing List (not for usage questions); 
Russell Bryant
Cc: openstack...@lists.openstack.org; cloudkeep@googlegroups.com; 
barbi...@lists.rackspace.com
Subject: RE: [openstack-dev] [openstack-tc] Incubation Request for Barbican

> I've been anxious to try out Barbican, but haven't had quite enough time
to
> try it yet. But finding out it won't work with Qpid makes it unworkable
for us
> at the moment. I think a large swath of the OpenStack community won't be
> able to use it in this form too.

As mentioned in the other thread, Barbican will be looking at oslo messaging
for Icehouse.

In the meantime, you can configure celery to use a pass-through (e.g. no
queue needed). This is the configuration we use for dev / testing. The
documentation for that can be found in the celery docs here:
http://docs.celeryproject.org/en/latest/userguide/


Jarret

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-02 Thread Adrian Otto
Sorry, I changed the link. We originally started with hyphenated noun-verbs but 
switched to the current proposal after receiving advice that it would be more 
compatible with the next version of the cliff-based CLI for OpenStack. If I 
remember correctly, this advice came from Doug Hellmann.

--
Adrian

PS: Sorry for top posting. This mail reader offers me no choice.


 Original message 
From: Russell Bryant
Date:12/02/2013 6:23 PM (GMT-08:00)
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] CLI minimal implementation

On 12/02/2013 07:03 PM, Roshan Agrawal wrote:
> I have created a child blueprint to define scope for the minimal 
> implementation of the CLI to consider for milestone 1.
> https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementation

This link doesn't work.  The right one is:

https://blueprints.launchpad.net/solum/+spec/solum-minimal-cli

> Spec for the minimal CLI @ 
> https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
> Etherpad for discussion notes: https://etherpad.openstack.org/p/MinimalCLI


--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim
> I've been anxious to try out Barbican, but haven't had quite enough time
to
> try it yet. But finding out it won't work with Qpid makes it unworkable
for us
> at the moment. I think a large swath of the OpenStack community won't be
> able to use it in this form too.

As mentioned in the other thread, Barbican will be looking at oslo messaging
for Icehouse.

In the meantime, you can configure celery to use a pass-through (e.g. no
queue needed). This is the configuration we use for dev / testing. The
documentation for that can be found in the celery docs here:
http://docs.celeryproject.org/en/latest/userguide/
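
(For reference, the "pass-through" configuration described above most likely
corresponds to Celery's eager mode; the snippet below is a sketch under that
assumption, not Barbican's actual dev configuration. With eager mode enabled,
task.delay() runs the task in-process instead of publishing it to a broker,
so no queue is needed.)

    from celery import Celery

    app = Celery('barbican-dev')
    # Run tasks synchronously in-process and surface task exceptions.
    app.conf.update(CELERY_ALWAYS_EAGER=True,
                    CELERY_EAGER_PROPAGATES_EXCEPTIONS=True)

    @app.task
    def process_order(order_id):
        return 'processed %s' % order_id

    # Executes inline with no broker; prints 'processed 42'.
    print(process_order.delay(42).get())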


Jarret


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler sub-group agenda 12/3

2013-12-02 Thread Dugger, Donald D
1)  memcached based scheduler

2)  Scheduler as a Service (recent email activity)

3)  Instance groups

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim
> There are two big parts to this, I think.  One is technical - a significant
> portion of OpenStack deployments will not work with this because Celery does
> not work with their deployed messaging architecture.
> See another reply in this thread for an example of someone who sees the
> inability to use Qpid as a roadblock.  This is solvable, but not quickly.
>
> The other is somewhat technical, but also a community issue.  Monty
> articulated this well in another reply.  Barbican has made a conflicting 
> library
> choice with what every other project using messaging is using.
> With the number of projects we have, it is in our best interest to strive 
> for
> consistency where we can.  Differences should not be arbitrary.  The
> differences should only be where an exception is well justified.  I don't 
> see
> that as being the case here.  Should everyone using oslo.messaging (or its
> predecessor rpc in oslo-incubator) be using Celery?  Maybe.  I don't know,
> but that's the question at hand.  Ideally this would have come up with a 
> more
> broad audience sooner.  If it did, I'm sorry I missed it.

I understand the concern here and I'm happy to have Barbican look at using 
oslo.messaging during the Icehouse cycle.

I am a bit surprised at the somewhat strong reactions to our choice. When we 
created Barbican, we looked at the messaging frameworks out there for use. At 
the time, oslo.messaging was not packaged, not documented, not tested, had no 
track record and an unknown level of community support.

Celery is a battle-tested library that is widely deployed with a good track 
record, strong community and decent documentation. We made our choice based on 
those factors, just as the same as we would for any library inclusion.

As celery has met our needs up to this point, we saw no reason to revisit the 
decision until now. In that time oslo.messaging  has moved to a separate repo. 
It still has little to no documentation, but the packaging and maintenance 
issues seem to be on the way to being sorted.

So in short, in celery we get a reliable library with good docs that is battle 
tested, but is limited to the transports supported by Kombu. Both celery and 
Kombu are extendable and have many backends including AMQP, Redis, Beanstalk, 
Amazon SQS, CouchDB, MongoDB, ZeroMQ, ZooKeeper, SoftLayer MQ and Pyro.

Oslo.messaging seems to have good support in OpenStack, but still lacks 
documentation and packaging (though some of that is being sorted out now). It 
offers support for qpid which celery seems to lack. It also offers a common 
place for message signing and some other nice to have features for OpenStack.

Based on the commonality in OpenStack (and the lack of anyone else using 
Celery), I think looking to move to oslo.messaging is a good goal. This will 
take some time, but I think doing it by Icehouse seems reasonable. I think 
that is what you and Monty are asking for?

I have added the task to our list on 
https://wiki.openstack.org/wiki/Barbican/Incubation.


Thanks again for all the eyeballs on our application.


Jarret



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-12-02 Thread Adam Young

On 11/29/2013 10:06 AM, Mark McLoughlin wrote:

Hey

Anyone got an update on this?

The keystone blueprint for KDS was marked approved on Tuesday:

   https://blueprints.launchpad.net/keystone/+spec/key-distribution-server

and a new keystone review was added on Sunday, but it must be a draft
since I can't access it:

https://review.openstack.org/58124

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



It was an approach to deploying KDS inside of Keystone, but as a separate 
server running on a different port.  Jamie Lennox was working on the same 
thing at the same time and has a more mature patch.  His review will be 
posted shortly, and mine can be abandoned.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 04:49 PM, Vishvananda Ishaya wrote:
> That is a good point. If the forklift is still talking to nova's
> db then it would be significantly less duplication and i could see
> doing it in the reverse order. The no-db-stuff should be done
> before trying to implement cinder support so we don't have the
> messiness of the scheduler talking to multiple db apis.

+1

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-02 Thread Russell Bryant
On 12/02/2013 07:03 PM, Roshan Agrawal wrote:
> I have created a child blueprint to define scope for the minimal 
> implementation of the CLI to consider for milestone 1.
> https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementation

This link doesn't work.  The right one is:

https://blueprints.launchpad.net/solum/+spec/solum-minimal-cli

> Spec for the minimal CLI @ 
> https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
> Etherpad for discussion notes: https://etherpad.openstack.org/p/MinimalCLI


-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-02 Thread Russell Bryant
On 12/02/2013 07:03 PM, Roshan Agrawal wrote:
> I have created a child blueprint to define scope for the minimal 
> implementation of the CLI to consider for milestone 1.
> https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementation
> 
> Spec for the minimal CLI @ 
> https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
> Etherpad for discussion notes: https://etherpad.openstack.org/p/MinimalCLI
> 
> Would look for feedback on the ML, etherpad and discuss more in the weekly 
> IRC meeting tomorrow.

What is this R1.N syntax?  How does it relate to development milestones?
 Does R1 mean a requirement for milestone-1?


For consistency, I would use commands like:

   solum app-create
   solum app-delete
   solum assembly-create
   solum assembly-delete

instead of adding a space in between:

   solum app create

to be more consistent with other clients, like:

   nova flavor-create
   nova flavor-delete
   glance image-create
   glance image-delete


I would make required arguments positional arguments.  So, instead of:

   solum app-create --plan=planname

do:

   solum app-create <planname>


Lastly, everywhere you have a name, I would use a UUID.  Names shouldn't
have to be globally unique (because of multi-tenancy).  UUIDs should
always work; you can support a name in the client code as a friendly
shortcut, but it should fail if a unique result cannot be resolved from
the name.
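
A minimal sketch of that name-or-UUID resolution (the list_resources
callable and field names here are hypothetical, not Solum's actual client
API):

    import uuid

    def find_resource(list_resources, name_or_id):
        # Accept a UUID directly; otherwise treat the value as a name and
        # require it to resolve to exactly one resource.
        try:
            uuid.UUID(name_or_id)
            return name_or_id
        except ValueError:
            pass
        matches = [r for r in list_resources() if r['name'] == name_or_id]
        if len(matches) == 1:
            return matches[0]['id']
        if not matches:
            raise SystemExit("No resource named %r" % name_or_id)
        raise SystemExit("Name %r is ambiguous; use the UUID" % name_or_id)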

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Russell Bryant
On 12/02/2013 06:46 PM, Dolph Mathews wrote:
> 
> 
> 
> On Mon, Dec 2, 2013 at 11:55 AM, Russell Bryant wrote:
> 
> On 12/02/2013 12:46 PM, Monty Taylor wrote:
> > On 12/02/2013 11:53 AM, Russell Bryant wrote:
> >>>  * Scope
> >>>  ** Project must have a clear and defined scope
> >>
> >> This is missing
> >>
> >>>  ** Project should not inadvertently duplicate functionality
> present in other
> >>> OpenStack projects. If they do, they should have a clear
> plan and timeframe
> >>> to prevent long-term scope duplication.
> >>>  ** Project should leverage existing functionality in other
> OpenStack projects
> >>> as much as possible
> >>
> >> I'm going to hold off on diving into this too far until the scope is
> >> clarified.
> >
> > I'm not.
> >
> > *snip*
> >
> 
> Ok, I can't help it now.
> 
> >>
> >> The list looks reasonable right now.  Barbican should put
> migrating to
> >> oslo.messaging on the Icehouse roadmap though.
> >
> > *snip*
> 
> Yeahhh ... I looked and even though rpc and notifier are imported, they
> do not appear to be used at all.
> 
> >>
> >>
> http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
> >>
> >> It looks like the only item here not in the global requirements is
> >> Celery, which is licensed under a 3-clause BSD license.
> >
> > I'd like to address the use of Celery.
> >
> > WTF
> >
> > Barbican has been around for 9 months, which means that it does not
> > predate the work that has become oslo.messaging. It doesn't even
> try. It
> > uses a completely different thing.
> >
> > The use of celery needs to be replaced with oslo. Full stop. I do not
> > believe it makes any sense to spend further time considering a project
> > that's divergent on such a core piece. Which is a shame - because I
> > think that Barbican is important and fills an important need and I
> want
> > it to be in. BUT - We don't get to end-run around OpenStack project
> > choices by making a new project on the side and then submitting it for
> > incubation. It's going to be a pile of suck to fix this I'm sure, and
> > I'm sure that it's going to delay getting actually important stuff
> done
> > - but we deal with too much crazy as it is to pull in a non-oslo
> > messaging and event substrata.
> > 
> 
> Yeah, I'm afraid I agree with Monty here.  I didn't really address this
> because I was trying to just do a first pass and not go too far into the
> tech bits.
> 
> I think such a big divergence is going to be a hard sell for a number of
> reasons.  It's a significant dependency that I don't think is justified.
>  Further, it won't work in all of the same environments that OpenStack
> works in today.  You can't use Celery with all of the same messaging
> transports as oslo.messaging (or the older rpc lib).  One example is
> Qpid.
> 
> 
> I feel like I'm trying to read past the rant :) so I'd like to stop and
> ask for clarification on the exact argument being made. Is the *only*
> reason that celery should not be used in openstack because it is
> incapable of being deployed against amqp 1.0 brokers (i.e. qpid)?
> 
> I'm really trying to understand whether the actual objection is over the
> use (or not) of oslo (which seems like something the TC should express an
> opinion on, if that's the case), or rather about limiting OpenStack's
> deployment options.

There are two big parts to this, I think.  One is technical - a
significant portion of OpenStack deployments will not work with this
because Celery does not work with their deployed messaging architecture.
See another reply in this thread for an example of someone who sees
the inability to use Qpid as a roadblock.  This is solvable, but not
quickly.

The other is somewhat technical, but also a community issue.  Monty
articulated this well in another reply.  Barbican has made a conflicting
library choice with what every other project using messaging is using.
With the number of projects we have, it is in our best interest to
strive for consistency where we can.  Differences should not be
arbitrary.  The differences should only be where an exception is well
justified.  I don't see that as being the case here.  Should everyone
using oslo.messaging (or its predecessor rpc in oslo-incubator) be using
Celery?  Maybe.  I don't know, but that's the question at hand.  Ideally
this would have come up with a more broad audience sooner.  If it did,
I'm sorry I missed it.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Fox, Kevin M
I've been anxious to try out Barbican, but haven't had quite enough time to try 
it yet. But finding out it won't work with Qpid makes it unworkable for us at 
the moment. I think a large swath of the OpenStack community won't be able to 
use it in this form too.

Thanks,
Kevin


From: Dolph Mathews [dolph.math...@gmail.com]
Sent: Monday, December 02, 2013 3:46 PM
To: Russell Bryant
Cc: openstack...@lists.openstack.org; OpenStack Development Mailing List (not 
for usage questions); cloudkeep@googlegroups.com; barbi...@lists.rackspace.com
Subject: Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

On Mon, Dec 2, 2013 at 11:55 AM, Russell Bryant <rbry...@redhat.com> wrote:
On 12/02/2013 12:46 PM, Monty Taylor wrote:
> On 12/02/2013 11:53 AM, Russell Bryant wrote:
>>>  * Scope
>>>  ** Project must have a clear and defined scope
>>
>> This is missing
>>
>>>  ** Project should not inadvertently duplicate functionality present in 
>>> other
>>> OpenStack projects. If they do, they should have a clear plan and 
>>> timeframe
>>> to prevent long-term scope duplication.
>>>  ** Project should leverage existing functionality in other OpenStack 
>>> projects
>>> as much as possible
>>
>> I'm going to hold off on diving into this too far until the scope is
>> clarified.
>
> I'm not.
>
> *snip*
>

Ok, I can't help it now.

>>
>> The list looks reasonable right now.  Barbican should put migrating to
>> oslo.messaging on the Icehouse roadmap though.
>
> *snip*

Yeahhh ... I looked and even though rpc and notifier are imported, they
do not appear to be used at all.

>>
>> http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
>>
>> It looks like the only item here not in the global requirements is
>> Celery, which is licensed under a 3-clause BSD license.
>
> I'd like to address the use of Celery.
>
> WTF
>
> Barbican has been around for 9 months, which means that it does not
> predate the work that has become oslo.messaging. It doesn't even try. It
> uses a completely different thing.
>
> The use of celery needs to be replaced with oslo. Full stop. I do not
> believe it makes any sense to spend further time considering a project
> that's divergent on such a core piece. Which is a shame - because I
> think that Barbican is important and fills an important need and I want
> it to be in. BUT - We don't get to end-run around OpenStack project
> choices by making a new project on the side and then submitting it for
> incubation. It's going to be a pile of suck to fix this I'm sure, and
> I'm sure that it's going to delay getting actually important stuff done
> - but we deal with too much crazy as it is to pull in a non-oslo
> messaging and event substrata.
>
Yeah, I'm afraid I agree with Monty here.  I didn't really address this
because I was trying to just do a first pass and not go too far into the
tech bits.

I think such a big divergence is going to be a hard sell for a number of
reasons.  It's a significant dependency that I don't think is justified.
 Further, it won't work in all of the same environments that OpenStack
works in today.  You can't use Celery with all of the same messaging
transports as oslo.messaging (or the older rpc lib).  One example is Qpid.

I feel like I'm trying to read past the rant :) so I'd like to stop and ask for 
clarification on the exact argument being made. Is the *only* reason that 
celery should not be used in openstack because it is incapable of being 
deployed against amqp 1.0 brokers (i.e. qpid)?

I'm really trying to understand whether the actual objection is over the use (or 
not) of oslo (which seems like something the TC should express an opinion on, 
if that's the case), or rather about limiting OpenStack's deployment options.


--
Russell Bryant

___
OpenStack-TC mailing list
openstack...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc

--

-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Package from Debian for Devstack?

2013-12-02 Thread Chuck Short
On Mon, Dec 2, 2013 at 6:21 PM, Sean Dague  wrote:

> On 12/02/2013 06:15 PM, Adam Young wrote:
> > In order to provide Devstack a better certificate management example, we
> > want to make devstack capable of calling Certmaster.
> >
> > This package is in Debian, but not in Ubuntu.
> >
> > For example
> > http://packages.debian.org/experimental/certmaster
> >
> > How does one go about installing a package like this for devstack? Do we
> > need to get it into the underlying distro, or is a cross distro install
> > acceptable?
> >
> > I've tested it by hand and it installs and works fine when
> > pre-installed.  It's just a distribution problem.
>
> That means it works, right now. But not being in the distro means there
> are no guarantees of it working in the future.
>
> Honestly, at this point I'd -1 an add like this. I think pulling a
> package out of experimental is not really inspiring confidence. I'd
> suggest looking for a package available in Ubuntu natively.
>
> -Sean


Hi,

So the way that package syncs work from Debian to Ubuntu is that a
package usually moves from Experimental -> Unstable -> Testing; Ubuntu
usually picks the package up from Unstable, and it gets placed in Universe
(usually). However, that doesn't mean that the package will hit the cloud
archive as well.

Regards
chuck
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Javascript testing framework

2013-12-02 Thread Maxime Vidori
Hi!

In order to improve the JavaScript quality of Horizon, we have to change the 
client-side testing framework. QUnit is a good tool for simple tests, but the 
integration of Angular needs some powerful features which are not present in 
QUnit. So I have made a little POC with the JavaScript testing library Jasmine, 
which is the one given as an example in the AngularJS documentation. I have 
also rewritten a QUnit test in Jasmine to show that the change is quite easy 
to make.

Feel free to comment on this mailing list about the pros and cons of this new 
tool, and to check out my code for review. I have also made a helper for quick 
development of Jasmine tests through Selenium.

To finish, I need your opinion on a new command-line target in run_tests.sh. I 
think we should create a run_tests.sh --runserver-test target which will allow 
developers to see all the JavaScript test pages. This target will let people 
avoid running the Selenium tests from the command line and instead view their 
tests in a comfortable HTML interface. It could be useful when developing 
tests; the target would only be used for development purposes.

Looking forward to your feedback!

Here is a link to the Jasmine POC: https://review.openstack.org/#/c/59580/

Maxime Vidori

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] list of negative tests that need to be separated from other tests.

2013-12-02 Thread Christopher Yeoh
On Tue, Dec 3, 2013 at 9:43 AM, Kenichi Oomichi
wrote:

>
> Hi Sean, David, Marc
>
> I have one question about negative tests.
> Now we are in moratorium on new negative tests in Tempest:
>
> http://lists.openstack.org/pipermail/openstack-dev/2013-November/018748.html
>
> Is it OK to consider this kind of patch(separating negative tests from
> positive test file, without any additional negative tests) as an exception?
>
>
I don't have a strong opinion on this, but I think it's ok given it will
make the eventual removal of hand-coded negative tests easier in the
future, even though it costs us a bit of churn now.

Chris


>
> Thanks
> Ken'ichi Ohmichi
>
> ---
>
> > -Original Message-
> > From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com]
> > Sent: Monday, December 02, 2013 8:33 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [tempest] list of negative tests that need
> to be separated from other tests.
> >
> > Thanks Ken'ichi. I added my name to a couple of them in that list.
> >
> > Adalberto Medeiros
> > Linux Technology Center
> > Openstack and Cloud Development
> > IBM Brazil
> > Email: adal...@linux.vnet.ibm.com
> >
> > On Mon 02 Dec 2013 07:36:38 AM BRST, Kenichi Oomichi wrote:
> > >
> > > Hi Adalberto,
> > >
> > >> -Original Message-
> > >> From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com]
> > >> Sent: Saturday, November 30, 2013 11:29 PM
> > >> To: OpenStack Development Mailing List
> > >> Subject: [openstack-dev] [tempest] list of negative tests that need
> to be separated from other tests.
> > >>
> > >> Hi!
> > >>
> > >> I understand that one action toward negative tests, even before
> > >> implementing the automatic schema generation, is to move them to their
> > >> own file (.py), thus separating them from the 'positive' tests. (See
> > >> patch https://review.openstack.org/#/c/56807/ as an example).
> > >>
> > >> In order to do so, I've got a list of testcases that still have both
> > >> negative and positive tests together, and listed them in the following
> > >> etherpad link:
> https://etherpad.openstack.org/p/bp_negative_tests_list
> > >>
> > >> The idea here is to have patches for each file until we get all the
> > >> negative tests in their own files. I also linked the etherpad to the
> > >> specific blueprint created by Marc for negative tests in icehouse
> > >> (https://blueprints.launchpad.net/tempest/+spec/negative-tests ).
> > >>
> > >> Please, send any comments and whether you think this is the right
> > >> approach to keep track on that task.
> > >
> > > We have already the same etherpad, and we are working on it.
> > > Please check the following:
> > > https://etherpad.openstack.org/p/TempestTestDevelopment
> > >
> > >
> > > Thanks
> > > Ken'ichi Ohmichi
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] CLI minimal implementation

2013-12-02 Thread Roshan Agrawal
I have created a child blueprint to define scope for the minimal implementation 
of the CLI to consider for milestone 1.
https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementation

Spec for the minimal CLI @ 
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
Etherpad for discussion notes: https://etherpad.openstack.org/p/MinimalCLI

I would look for feedback on the ML and etherpad, and we can discuss more in 
the weekly IRC meeting tomorrow.

Thanks & Regards,
Roshan Agrawal
Direct:    512.874.1278
Mobile:  512.354.5253
roshan.agra...@rackspace.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-02 Thread Nachi Ueno
Hi Vijay

I was thinking we should store this kind of information in Keystone.
However, I changed my mind after checking the Keystone API.
The Keystone API is very generic, so we can't provide application-specific
helper methods and validations there.

The form of a certificate is different between applications,
so my current idea is to have an independent certificate resource
for each application.

Best
Nachi





2013/12/2 Vijay Venkatachalam :
>
> LBaaS enthusiasts: Your vote on the revised model for SSL Termination?
>
> Here is a comparison between the original and revised model for SSL 
> Termination:
>
> ***
> Original Basic Model that was proposed in summit
> ***
> * Certificate parameters introduced as part of VIP resource.
> * This model is for basic config and there will be a model introduced in 
> future for detailed use case.
> * Each certificate is created for one and only one VIP.
> * Certificate params not stored in DB and sent directly to loadbalancer.
> * In case of failures, there is no way to restart the operation from details 
> stored in DB.
> ***
> Revised New Model
> ***
> * Certificate parameters will be part of an independent certificate resource. 
> A first-class citizen handled by LBaaS plugin.
> * It is a forward-looking model and aligns with AWS for uploading server 
> certificates.
> * A certificate can be reused in many VIPs.
> * Certificate params stored in DB.
> * In case of failures, parameters stored in DB will be used to restore the 
> system.
>
> A more detailed comparison can be viewed in the following link
>  
> https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8FqVeZISh07iGs/edit?usp=sharing
>
> Thanks,
> Vijay V.
>
>
>> -Original Message-
>> From: Vijay Venkatachalam
>> Sent: Friday, November 29, 2013 2:18 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as
>> first level citizen - SSL Termination
>>
>>
>> To summarize:
>> Certificate will be a first-level citizen which can be reused, and for
>> certificate management nothing sophisticated is required.
>>
>> Can you please Vote (+1, -1)?
>>
>> We can move on if there is consensus around this.
>>
>> > -Original Message-
>> > From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
>> > Sent: Wednesday, November 20, 2013 3:01 PM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
>> >
>> > Hi,
>> >
>> > On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
>> > > Hi,
>> > >
>> > >
>> > >
>> > > Evgeny has outlined the wiki for the proposed change at:
>> > > https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line
>> > > with what was discussed during the summit.
>> > >
>> > > The
>> > >
>> >
>> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2n
>> > YTvMkMJ_inbo/edit discuss in addition Certificate Chains.
>> > >
>> > >
>> > >
>> > > What would be the benefit of having a certificate that must be
>> > > connected to VIP vs. embedding it in the VIP?
>> >
>> > You could reuse the same certificate for multiple loadbalancer VIPs.
>> > This is a fairly common pattern - we have a dev wildcard cert that is
>> > self- signed, and is used for lots of VIPs.
>> >
>> > > When we get a system that can store certificates (ex: Barbican), we
>> > > will add support to it in the LBaaS model.
>> >
>> > It probably doesn't need anything that complicated, does it?
>> >
>> > Cheers,
>> > --
>> > Stephen Gran
>> > Senior Systems Integrator - The Guardian
>> >

Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Dolph Mathews
On Mon, Dec 2, 2013 at 11:55 AM, Russell Bryant  wrote:

> On 12/02/2013 12:46 PM, Monty Taylor wrote:
> > On 12/02/2013 11:53 AM, Russell Bryant wrote:
> >>>  * Scope
> >>>  ** Project must have a clear and defined scope
> >>
> >> This is missing
> >>
> >>>  ** Project should not inadvertently duplicate functionality present
> in other
> >>> OpenStack projects. If they do, they should have a clear plan and
> timeframe
> >>> to prevent long-term scope duplication.
> >>>  ** Project should leverage existing functionality in other OpenStack
> projects
> >>> as much as possible
> >>
> >> I'm going to hold off on diving into this too far until the scope is
> >> clarified.
> >
> > I'm not.
> >
> > *snip*
> >
>
> Ok, I can't help it now.
>
> >>
> >> The list looks reasonable right now.  Barbican should put migrating to
> >> oslo.messaging on the Icehouse roadmap though.
> >
> > *snip*
>
> Yeahhh ... I looked and even though rpc and notifier are imported, they
> do not appear to be used at all.
>
> >>
> >>
> http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
> >>
> >> It looks like the only item here not in the global requirements is
> >> Celery, which is licensed under a 3-clause BSD license.
> >
> > I'd like to address the use of Celery.
> >
> > WTF
> >
> > Barbican has been around for 9 months, which means that it does not
> > predate the work that has become oslo.messaging. It doesn't even try. It
> > uses a completely different thing.
> >
> > The use of celery needs to be replaced with oslo. Full stop. I do not
> > believe it makes any sense to spend further time considering a project
> > that's divergent on such a core piece. Which is a shame - because I
> > think that Barbican is important and fills an important need and I want
> > it to be in. BUT - We don't get to end-run around OpenStack project
> > choices by making a new project on the side and then submitting it for
> > incubation. It's going to be a pile of suck to fix this I'm sure, and
> > I'm sure that it's going to delay getting actually important stuff done
> > - but we deal with too much crazy as it is to pull in a non-oslo
> > messaging and event substrata.
> >
>
Yeah, I'm afraid I agree with Monty here.  I didn't really address this
> because I was trying to just do a first pass and not go too far into the
> tech bits.
>
> I think such a big divergence is going to be a hard sell for a number of
> reasons.  It's a significant dependency that I don't think is justified.
>  Further, it won't work in all of the same environments that OpenStack
> works in today.  You can't use Celery with all of the same messaging
> transports as oslo.messaging (or the older rpc lib).  One example is Qpid.


I feel like I'm trying to read past the rant :) so I'd like to stop and ask
for clarification on the exact argument being made. Is the *only* reason
that celery should not be used in openstack because it is incapable of
being deployed against amqp 1.0 brokers (i.e. qpid)?

I'm really trying to understand whether the actual objection is over the use
(or not) of oslo (which seems like something the TC should express an opinion
on, if that's the case), or rather about limiting OpenStack's deployment
options.


>
> --
> Russell Bryant
>
> ___
> OpenStack-TC mailing list
> openstack...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc
>

-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] list of negative tests that need to be separated from other tests.

2013-12-02 Thread Kenichi Oomichi

Hi Sean, David, Marc

I have one question about negative tests.
Now we are in a moratorium on new negative tests in Tempest:
http://lists.openstack.org/pipermail/openstack-dev/2013-November/018748.html

Is it OK to consider this kind of patch (separating negative tests from the
positive test file, without any additional negative tests) as an exception?


Thanks
Ken'ichi Ohmichi

---

> -Original Message-
> From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com]
> Sent: Monday, December 02, 2013 8:33 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tempest] list of negative tests that need to be 
> separated from other tests.
> 
> Thanks Ken'ichi. I added my name to a couple of them in that list.
> 
> Adalberto Medeiros
> Linux Technology Center
> Openstack and Cloud Development
> IBM Brazil
> Email: adal...@linux.vnet.ibm.com
> 
> On Mon 02 Dec 2013 07:36:38 AM BRST, Kenichi Oomichi wrote:
> >
> > Hi Adalberto,
> >
> >> -Original Message-
> >> From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com]
> >> Sent: Saturday, November 30, 2013 11:29 PM
> >> To: OpenStack Development Mailing List
> >> Subject: [openstack-dev] [tempest] list of negative tests that need to be 
> >> separated from other tests.
> >>
> >> Hi!
> >>
> >> I understand that one action toward negative tests, even before
> >> implementing the automatic schema generation, is to move them to their
> >> own file (.py), thus separating them from the 'positive' tests. (See
> >> patch https://review.openstack.org/#/c/56807/ as an example).
> >>
> >> In order to do so, I've got a list of testcases that still have both
> >> negative and positive tests together, and listed them in the following
> >> etherpad link: https://etherpad.openstack.org/p/bp_negative_tests_list
> >>
> >> The idea here is to have patches for each file until we get all the
> >> negative tests in their own files. I also linked the etherpad to the
> >> specific blueprint created by Marc for negative tests in icehouse
> >> (https://blueprints.launchpad.net/tempest/+spec/negative-tests ).
> >>
> >> Please, send any comments and whether you think this is the right
> >> approach to keep track on that task.
> >
> > We have already the same etherpad, and we are working on it.
> > Please check the following:
> > https://etherpad.openstack.org/p/TempestTestDevelopment
> >
> >
> > Thanks
> > Ken'ichi Ohmichi
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-02 Thread Tiwari, Arvind
Hi Adam and David,

Thank you so much for all the great comments; it seems we are making good progress.

I have replied to your comments and also added some of my own to support my proposal:

https://etherpad.openstack.org/p/service-scoped-role-definition

David, I like your suggestion for role-def scoping, which can fit in my Plan B, 
and I think Adam is cool with Plan B.

Please let me know if David's proposal for role-def scoping works for 
everybody.


Thanks,
Arvind

-Original Message-
From: Adam Young [mailto:ayo...@redhat.com] 
Sent: Wednesday, November 27, 2013 8:44 AM
To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
Subject: Re: [openstack-dev] [keystone] Service scoped role definition



On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
> Hi Adam,
>
> Based on our discussion over IRC, I have updated the below etherpad with 
> proposal for nested role definition

Updated.  I made my changes Green.  It isn't easy being green.

>
> https://etherpad.openstack.org/p/service-scoped-role-definition
>
> Please take a look @ "Proposal (Ayoung) - Nested role definitions", I am 
> sorry if I could not catch your idea.
>
> Feel free to update the etherpad.
>
> Regards,
> Arvind
>
>
> -Original Message-
> From: Tiwari, Arvind
> Sent: Tuesday, November 26, 2013 4:08 PM
> To: David Chadwick; OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [keystone] Service scoped role definition
>
> Hi David,
>
> Thanks for your time and valuable comments. I have replied to your comments 
> and tried to explain why I am advocating for this BP.
>
> Let me know your thoughts, please feel free to update below etherpad
> https://etherpad.openstack.org/p/service-scoped-role-definition
>
> Thanks again,
> Arvind
>
> -Original Message-
> From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
> Sent: Monday, November 25, 2013 12:12 PM
> To: Tiwari, Arvind; OpenStack Development Mailing List
> Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
> Subject: Re: [openstack-dev] [keystone] Service scoped role definition
>
> Hi Arvind
>
> I have just added some comments to your blueprint page
>
> regards
>
> David
>
>
> On 19/11/2013 00:01, Tiwari, Arvind wrote:
>> Hi,
>>
>>   
>>
>> Based on our discussion at the design summit, I have redone the service_id
>> binding with roles BP.
>> I have added a new BP (link below) along with a detailed use case to
>> support this BP.
>>
>> https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition
>>
>> Below etherpad link has some proposals for Role REST representation and
>> pros and cons analysis
>>
>>   
>>
>> https://etherpad.openstack.org/p/service-scoped-role-definition
>>
>>   
>>
>> Please take look and let me know your thoughts.
>>
>>   
>>
>> It would be awesome if we can discuss it in tomorrow's meeting.
>>
>>   
>>
>> Thanks,
>>
>> Arvind
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Package from Debian for Devstack?

2013-12-02 Thread Sean Dague
On 12/02/2013 06:15 PM, Adam Young wrote:
> In order to provide Devstack a better certificate management example, we
> want to make devstack capable of calling Certmaster.
> 
> This package is in Debian, but not in Ubuntu.
> 
> For example
> http://packages.debian.org/experimental/certmaster
> 
> How does one go about installing a package like this for devstack? Do we
> need to get it into the underlying distro, or is a cross distro install
> acceptable?
> 
> I've tested it by hand and it install and works fine when
> pre-installed.  Its just a distribution problem.

That means it works, right now. But not being in the distro means there
are no guarantees of it working in the future.

Honestly, at this point I'd -1 an add like this. I think pulling a
package out of experimental is not really inspiring confidence. I'd
suggest looking for a package available in Ubuntu natively.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Package from Debian for Devstack?

2013-12-02 Thread Adam Young
In order to provide Devstack a better certificate management example, we 
want to make devstack capable of calling Certmaster.


This package is in Debian, but not in Ubuntu.

For example
http://packages.debian.org/experimental/certmaster

How does one go about installing a package like this for devstack? Do we 
need to get it into the underlying distro, or is a cross distro install 
acceptable?


I've tested it by hand and it installs and works fine when 
pre-installed.  It's just a distribution problem.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Sylvain Bauza
Hi Jarret,

2013/12/2 Jarret Raim 

>
>
> >It's also pretty easy for a stackforge project to opt-in to the global
> >requirements sync job now too.
>
> Are there some docs on how to do this somewhere? I added a task for us to
> complete the work as part of the incubation request here:
> https://wiki.openstack.org/wiki/Barbican/Incubation
>
>
There is a Stackforge how-to on ci.openstack.org that you can read [1].
Alternatively, you can also find our review [2] where we (Climate) sync'd
our requirements.txt with Oslo.

Hope it can help,
-Sylvain

[1] : http://ci.openstack.org/stackforge.html
[2] : https://review.openstack.org/#/c/56578/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystoneclient] Last released keystoneclient does not work on python33

2013-12-02 Thread Georgy Okrokvertskhov
Hi,

I have failing tests in gate-solum-python33 because keystoneclient fails to
import xmlrpclib.
The exact error is:
"File
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py",
line 42, in <module>
2013-11-28 18:27:12.655 | import xmlrpclib
2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib'
"

Is there any plan to release a new version of keystoneclient with the fix
for that issue? As I see it is fixed in master.

If there is no new release for keystoneclient can you recommend any
workaround for this issue?
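
(For context: the failure is just the Python 3 rename of xmlrpclib to
xmlrpc.client, which is what the fix in master addresses. The sketch below
shows the general compatibility pattern only, not necessarily the exact
change that was merged.)

    try:
        import xmlrpclib                   # Python 2
    except ImportError:
        import xmlrpc.client as xmlrpclib  # Python 3

    print(xmlrpclib.MAXINT)  # works on both interpreters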

Thanks
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Monty Taylor


On 12/02/2013 05:09 PM, Jarret Raim wrote:
> 
  * Process
  ** Project must be hosted under stackforge (and therefore use git as
 its VCS)
>>>
>>> I see that barbican is now on stackforge,  but python-barbicanclient is
>>> still on github.  Is that being moved soon?
>>>
  ** Project must obey OpenStack coordinated project interface (such as
 tox,
 pbr, global-requirements...)
>>>
>>> Uses tox, but not pbr or global requirements
>>
>> It's also pretty easy for a stackforge project to opt-in to the global
>> requirements sync job now too.
> 
> Are there some docs on how to do this somewhere? I added a task for us to
> complete the work as part of the incubation request here:
> https://wiki.openstack.org/wiki/Barbican/Incubation

I'm not sure there are docs per-se - you can essentially just add your
project name to projects.txt in openstack/requirements. Beware - really
soon this will tie you to the openstack pypi mirror (like, possibly
today) so if you have non-aligned requirements (see below) it may block
you until you either align or get requirements changed.

  ** Project should use oslo libraries or oslo-incubator where
 appropriate
>>>
>>> The list looks reasonable right now.  Barbican should put migrating to
>>> oslo.messaging on the Icehouse roadmap though.
>>
>> *snip*
>>
>>>
>>>
>>> http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
>>>
>>> It looks like the only item here not in the global requirements is
>>> Celery, which is licensed under a 3-clause BSD license.
>>
>> I'd like to address the use of Celery.
>>
>> WTF
>>
>> Barbican has been around for 9 months, which means that it does not
>> predate the work that has become oslo.messaging. It doesn't even try. It
>> uses a completely different thing.
>>
>> The use of celery needs to be replaced with oslo. Full stop. I do not
>> believe it makes any sense to spend further time considering a project
>> that's divergent on such a core piece. Which is a shame - because I
>> think that Barbican is important and fills an important need and I want
>> it to be in. BUT - We don't get to end-run around OpenStack project
>> choices by making a new project on the side and then submitting it for
>> incubation. It's going to be a pile of suck to fix this I'm sure, and
>> I'm sure that it's going to delay getting actually important stuff done
>> - but we deal with too much crazy as it is to pull in a non-oslo
>> messaging and event substrata.
> 
> 
> Is the challenge here that celery has some weird license requirements? Or
> that it is a new library?

The second thing - that it's a new library being used to address functionality
that has been in OpenStack since the beginning.

> When we started the Barbican project in February of this year,
> oslo.messaging did not exist. If I remember correctly, at the time we were
> doing architecture set up, the messaging piece was not available as a
> standalone library, was not available on PyPi and had no documentation.

That is true. However, the equivalent oslo-incubator pieces were there.
You can also get the latest pre-release library here:

-f
http://tarballs.openstack.org/oslo.messaging/oslo.messaging-1.2.0a11.tar.gz#egg=oslo.messaging-1.2.0a11
oslo.messaging>=1.2.0a11

As soon as we get the wheel publication pre-release code working, those
will go to pypi.

> It looks like the project was moved to its own repo in April. However, I
> can't seem to find the docs anywhere? The only thing I see is a design doc
> here [1]. Are there plans for it to be packaged and put into Pypi?
> 
> We are probably overdue to look at oslo.messaging again, but I don't think
> it should be a blocker for our incubation. I'm happy to take a look to see
> what we can do during the Icehouse release cycle. Would that be
> sufficient? 

I think it would go a long way if you had a roadmap for how you're going
to move to oslo.messaging from celery. The worrying part for me, to be
honest, is that you didn't sync up with the state of the art in that
subject across openstack but instead went to a library that clearly
no-one else was using. A couple of conversations with markmc or
dhellman, or anyone from oslo, or anyone from any of the projects that
are currently also using Rabbit (or zeromq or qpid) for the reasons
you're using it would have gotten you aligned pretty quickly with how
everyone else is doing this ... OR, an argument could have been made as
to why all of that was a terrible idea and that the whole project should
move to celery.
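
To give a concrete sense of what the target looks like, a minimal
oslo.messaging worker is roughly the sketch below - the topic, server and
method names are invented for illustration (they are not Barbican's actual
ones):

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='barbican.workers', server='worker-1')

    # --- worker process ---
    class OrderEndpoint(object):
        # Hypothetical endpoint; the method name is illustrative only.
        def process_order(self, ctxt, order_id):
            return 'processed %s' % order_id

    server = messaging.get_rpc_server(transport, target, [OrderEndpoint()],
                                      executor='blocking')
    server.start()   # with the blocking executor this thread services requests
    server.wait()

    # --- caller side (separate process), fire-and-forget like a celery task ---
    client = messaging.RPCClient(transport,
                                 messaging.Target(topic='barbican.workers'))
    client.cast({}, 'process_order', order_id='1234')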

It's not that oslo-incubator was unknown - you're using other things.

When we look at incubating things, part of it is about the tech, but
part of it is the question about whether or not the project is going to
add to the whole's ability to be greater than the sum of its parts, or
if we're going to inherit another group of folks who will be balkanized
in the corner somewhere.

I don't want to sound punitive, that's not my intent - and I want you
guys to succeed and to be

Re: [openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-12-02 Thread Itsuro ODA
Hi Eugene, Iwamoto

> You are correct. Provider attribute will remain in the pool due to API
> compatibility reasons.
I agree with you.

I just wanted to confirm whether pools in a loadbalancer can have
different providers or not. (I think it should be the same.)

Thanks
Itsuro Oda

On Mon, 2 Dec 2013 12:48:30 +0400
Eugene Nikanorov  wrote:

> Hi Iwamoto,
> 
> You are correct. Provider attribute will remain in the pool due to API
> compatibility reasons.
> 
> Thanks,
> Eugene.
> 
> 
> On Mon, Dec 2, 2013 at 9:35 AM, IWAMOTO Toshihiro 
> wrote:
> 
> > At Fri, 29 Nov 2013 07:25:54 +0900,
> > Itsuro ODA wrote:
> > >
> > > Hi Eugene,
> > >
> > > Thank you for the response.
> > >
> > > I have a comment.
> > > I think the 'provider' attribute should be added to the loadbalancer resource
> > > and used rather than the pool's 'provider', since I think using multiple
> > > drivers within a loadbalancer does not make sense.
> >
> > There can be a 'provider' attribute in a loadbalancer resource, but,
> > to maintain API compatibility, the 'provider' attribute in pools should
> > remain the same.
> > Is there any other attribute planned for the loadbalancer resource?
> >
> > > What do you think ?
> > >
> > > I'm looking forward to your code up !
> > >
> > > Thanks.
> > > Itsuro Oda
> > >
> > > On Thu, 28 Nov 2013 16:58:40 +0400
> > > Eugene Nikanorov  wrote:
> > >
> > > > Hi Itsuro,
> > > >
> > > > I've updated the wiki with some examples of cli workflow that
> > illustrate
> > > > proposed API.
> > > > Please see the updated page:
> > > >
> > https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance#API_change
> > > >
> > > > Thanks,
> > > > Eugene.
> >
> > --
> > IWAMOTO Toshihiro
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-02 Thread Joe Gordon
On Mon, Dec 2, 2013 at 1:55 PM, Maru Newby  wrote:

>
> On Dec 2, 2013, at 10:19 PM, Joe Gordon  wrote:
>
> >
> > On Dec 2, 2013 3:39 AM, "Maru Newby"  wrote:
> > >
> > >
> > > On Dec 2, 2013, at 2:07 AM, Anita Kuno  wrote:
> > >
> > > > Great initiative putting this plan together, Maru. Thanks for doing
> > > > this. Thanks for volunteering to help, Salvatore (I'm thinking of
> asking
> > > > for you to be cloned - once that becomes available.) if you add your
> > > > patch urls (as you create them) to the blueprint Maru started [0]
> that
> > > > would help to track the work.
> > > >
> > > > Armando, thanks for doing this work as well. Could you add the urls
> of
> > > > the patches you reference to the exceptional-conditions blueprint?
> > > >
> > > > For icehouse-1 to be a realistic goal for this assessment and
> clean-up,
> > > > patches for this would need to be up by Tuesday Dec. 3 at the latest
> > > > (does 13:00 UTC sound like a reasonable target?) so that they can
> make
> > > > it through review and check testing, gate testing and merging prior
> to
> > > > the Thursday Dec. 5 deadline for icehouse-1. I would really like to
> see
> > > > this, I just want the timeline to be conscious.
> > >
> > > My mistake, getting this done by Tuesday does not seem realistic.
>  icehouse-2, then.
> > >
> >
> > With icehouse-2 being the nova-network feature freeze reevaluation point
> (possibly lifting it) I think gating on new stacktraces by icehouse-2 is
> too late.  Even a huge whitelist of errors is better than letting new
> errors in.
>
> No question that it needs to happen asap.  If we're talking about
> milestones, though, and icehouse-1 patches need to be in by Tuesday, I
> don't think icehouse-1 is realistic.  It will have to be early in
> icehouse-2.
>
>
Yup, thanks for the clarification.
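
To be clear about what I mean by a whitelist: mechanically it's nothing more
than something like the sketch below, run over the service logs at the end of
a job (purely illustrative - not the actual Tempest/devstack-gate code, and
the whitelist entries are made up):

    import re

    # Hypothetical whitelist entries; real ones would come from reviewing the
    # known-benign ERROR messages already present in gate logs.
    WHITELIST = [
        re.compile(r'ERROR .* AMQP server .* unreachable.* Trying again'),
        re.compile(r'ERROR .* Timed out waiting for RPC response'),
    ]

    def new_errors(log_lines):
        """Return ERROR lines that no whitelist entry explains away."""
        errors = [line for line in log_lines if ' ERROR ' in line]
        return [line for line in errors
                if not any(rx.search(line) for rx in WHITELIST)]

    # A gate job would fail whenever new_errors(...) comes back non-empty.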


>
> m.
>
> > >
> > > m.
> > >
> > > >
> > > > I would like to say talk to me tomorrow in -neutron to ensure you are
> > > > getting the support you need to achieve this but I will be flying
> (wifi
> > > > uncertain). I do hope that some additional individuals come forward
> to
> > > > help with this.
> > > >
> > > > Thanks Maru, Salvatore and Armando,
> > > > Anita.
> > > >
> > > > [0]
> > > >
> https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
> > > >
> > > > On 11/30/2013 08:24 PM, Maru Newby wrote:
> > > >>
> > > >> On Nov 28, 2013, at 1:08 AM, Salvatore Orlando 
> wrote:
> > > >>
> > > >>> Thanks Maru,
> > > >>>
> > > >>> This is something my team had on the backlog for a while.
> > > >>> I will push some patches to contribute towards this effort in the
> next few days.
> > > >>>
> > > >>> Let me know if you're already thinking of targeting the completion
> of this job for a specific deadline.
> > > >>
> > > >> I'm thinking this could be a task for those not involved in fixing
> race conditions, and be done in parallel.  I guess that would be for
> icehouse-1 then?  My hope would be that the early signs of race conditions
> would then be caught earlier.
> > > >>
> > > >>
> > > >> m.
> > > >>
> > > >>>
> > > >>> Salvatore
> > > >>>
> > > >>>
> > > >>> On 27 November 2013 17:50, Maru Newby  wrote:
> > > >>> Just a heads up, the console output for neutron gate jobs is about
> to get a lot noisier.  Any log output that contains 'ERROR' is going to be
> dumped into the console output so that we can identify and eliminate
> unnecessary error logging.  Once we've cleaned things up, the presence of
> unexpected (non-whitelisted) error output can be used to fail jobs, as per
> the following Tempest blueprint:
> > > >>>
> > > >>>
> https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
> > > >>>
> > > >>> I've filed a related Neutron blueprint for eliminating the
> unnecessary error logging:
> > > >>>
> > > >>>
> https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
> > > >>>
> > > >>> I'm looking for volunteers to help with this effort, please reply
> in this thread if you're willing to assist.
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>>
> > > >>> Maru
> > > >>> ___
> > > >>> OpenStack-dev mailing list
> > > >>> OpenStack-dev@lists.openstack.org
> > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >>>
> > > >>> ___
> > > >>> OpenStack-dev mailing list
> > > >>> OpenStack-dev@lists.openstack.org
> > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >>
> > > >>
> > > >> ___
> > > >> OpenStack-dev mailing list
> > > >> OpenStack-dev@lists.openstack.org
> > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >>
> > > >
> > > >
> > > > ___
> > > > OpenStack-dev mailing list
> > > > OpenStack-dev@lists.openstack.org
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/opens

Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim

>>> Uses tox, but not pbr or global requirements
>> 
>> I added a 'Tasks' section for stuff we need to do from this review and
>> I've added these tasks. [4]
>> 
>> [4] https://wiki.openstack.org/wiki/Barbican/Incubation
>
>Awesome. Also, if you don't mind - we should add "testr" to that list.
>We're looking at some things in infra that will need the subunit
>processing.

Added that to our list.



Jarret

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Monty Taylor


On 12/02/2013 04:12 PM, Jarret Raim wrote:
>>
>> The TC is currently working on formalizing requirements for new programs
>> and projects [3].  I figured I would give them a try against this
>> application.
>>
>> First, I'm assuming that the application is for a new program that
>> contains the new project.  The application doesn't make that bit clear,
>> though.
> 
> In looking through the documentation for incubating [1], there doesn't
> seem to be any mention of also having to be associated with a program. Is
> it a requirement that all projects belong to a program at this point? If
> so, I guess we would be asking for a new program as I think that
> encryption and key management is a separate concern from the rest of the
> programs listed here [2].
> 
> [1] https://wiki.openstack.org/wiki/Governance/NewProjects
> [2] https://wiki.openstack.org/wiki/Programs
> 
> 
>>> Teams in OpenStack can be created as-needed and grow organically. As
>>> the team
>>> work matures, some technical efforts will be recognized as essential to
>>> the
>>> completion of the OpenStack project mission. By becoming an official
>>> Program,
>>> they place themselves under the authority of the OpenStack Technical
>>> Committee. In return, their contributors get to vote in the Technical
>>> Committee election, and they get some space and time to discuss future
>>> development at our Design Summits. When considering new programs, the
>>> TC will
>>> look into a number of criteria, including (but not limited to):
>>
>>> * Scope
>>> ** Team must have a specific scope, separated from others teams scope
>>
>> I would like to see a statement of scope for Barbican on the
>> application.  It should specifically cover how the scope differs from
>> other programs, in particular the Identity program.
> 
> Happy to add this, I'll put it in the wiki today.
> 
>>> ** Team must have a mission statement
>>
>> This is missing.
> 
> We do have a mission statement on the Barbican/Incubation page here:
> https://wiki.openstack.org/wiki/Barbican/Incubation
> 
> 
>>> ** Team should have a lead, elected by the team contributors
>>
>> Was the PTL elected?  I can't seem to find record of that.  If not, I
>> would like to see an election held for the PTL.
> 
> We're happy to do an election. Is this something we can do as part of the
> next election cycle? Or something that needs to be done out of band?
> 
> 
>>
>>> ** Team should have a clear way to grant ATC (voting) status to its
>>>significant contributors
>>
>> Related to the above
> 
> I thought that the process of becoming an ATC was pretty well set [3]. Is
> there something specific that Barbican would have to do that is different from
> the ATC rules in the Tech Committee documentation?
> 
> [3] 
> https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee
> 
> 
>>
>>> * Deliverables
>>> ** Team should have a number of clear deliverables
>>
>> barbican and python-barbicanclient, I presume.  It would be nice to have
>> this clearly defined on the application.
> 
> I will add a deliverables section, but you are correct.
> 
> 
>> Now, for the project specific requirements:
>>
>>>  Projects wishing to be included in the integrated release of OpenStack
>>> must
>>>  first apply for incubation status. During their incubation period,
>>> they will
>>>  be able to access new resources and tap into other OpenStack programs
>>> (in
>>>  particular the Documentation, QA, Infrastructure and Release
>>> management teams)
>>>  to learn about the OpenStack processes and get assistance in their
>>> integration
>>>  efforts.
>>>  
>>>  The TC will evaluate the project scope and its complementarity with
>>> existing
>>>  integrated projects and other official programs, look into the project
>>>  technical choices, and check a number of requirements, including (but
>>> not
>>>  limited to):
>>>  
>>>  * Scope
>>>  ** Project must have a clear and defined scope
>>
>> This is missing
> 
> As mentioned above, I'll add this to the wiki today.
> 
>>
>>>  ** Project should not inadvertently duplicate functionality present in
>>> other
>>> OpenStack projects. If they do, they should have a clear plan and
>>> timeframe
>>> to prevent long-term scope duplication.
>>>  ** Project should leverage existing functionality in other OpenStack
>>> projects
>>> as much as possible
>>
>> I'm going to hold off on diving into this too far until the scope is
>> clarified.
>>
>>>  * Maturity
>>>  ** Project should have a diverse and active team of contributors
>>
>> Using a mailmap file [4]:
>>
>> $ git shortlog -s -e | sort -n -r
>>   172John Wood 
>>   150jfwood 
>>65Douglas Mendizabal 
>>39Jarret Raim 
>>17Malini K. Bhandaru 
>>10Paul Kehrer 
>>10Jenkins 
>> 8jqxin2006 
>> 7Arash Ghoreyshi 
>> 5Chad Lung 
>> 3Dolph Mathews 
>> 2John Vrbanac 
>> 1Steven Gonzales 
>>

Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim

>>>  * Process
>>>  ** Project must be hosted under stackforge (and therefore use git as
>>>its VCS)
>> 
>> I see that barbican is now on stackforge,  but python-barbicanclient is
>> still on github.  Is that being moved soon?
>> 
>>>  ** Project must obey OpenStack coordinated project interface (such as
>>>tox,
>>> pbr, global-requirements...)
>> 
>> Uses tox, but not pbr or global requirements
>
>It's also pretty easy for a stackforge project to opt-in to the global
>requirements sync job now too.

Are there some docs on how to do this somewhere? I added a task for us to
complete the work as part of the incubation request here:
https://wiki.openstack.org/wiki/Barbican/Incubation


>>>  ** Project should use oslo libraries or oslo-incubator where
>>>appropriate
>> 
>> The list looks reasonable right now.  Barbican should put migrating to
>> oslo.messaging on the Icehouse roadmap though.
>
>*snip*
>
>> 
>> 
>>http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
>> 
>> It looks like the only item here not in the global requirements is
>> Celery, which is licensed under a 3-clause BSD license.
>
>I'd like to address the use of Celery.
>
>WTF
>
>Barbican has been around for 9 months, which means that it does not
>predate the work that has become oslo.messaging. It doesn't even try. It
>uses a completely different thing.
>
>The use of celery needs to be replaced with oslo. Full stop. I do not
>believe it makes any sense to spend further time considering a project
>that's divergent on such a core piece. Which is a shame - because I
>think that Barbican is important and fills an important need and I want
>it to be in. BUT - We don't get to end-run around OpenStack project
>choices by making a new project on the side and then submitting it for
>incubation. It's going to be a pile of suck to fix this I'm sure, and
>I'm sure that it's going to delay getting actually important stuff done
>- but we deal with too much crazy as it is to pull in a non-oslo
>messaging and event substrata.


Is the challenge here that celery has some weird license requirements? Or
that it is a new library?

When we started the Barbican project in February of this year,
oslo.messaging did not exist. If I remember correctly, at the time we were
doing architecture set up, the messaging piece was not available as a
standalone library, was not available on PyPi and had no documentation.

It looks like the project was moved to its own repo in April. However, I
can't seem to find the docs anywhere? The only thing I see is a design doc
here [1]. Are there plans for it to be packaged and put into Pypi?

We are probably overdue to look at oslo.messaging again, but I don't think
it should be a blocker for our incubation. I'm happy to take a look to see
what we can do during the Icehouse release cycle. Would that be
sufficient? 


[1] https://wiki.openstack.org/wiki/Oslo/Messaging




Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Alessandro Pilotti


On 02/dic/2013, at 23:47, "Vishvananda Ishaya" <vishvana...@gmail.com> wrote:


On Dec 2, 2013, at 11:40 AM, Alessandro Pilotti <apilo...@cloudbasesolutions.com> wrote:


On 02 Dec 2013, at 20:54, Vishvananda Ishaya <vishvana...@gmail.com> wrote:

Very cool stuff!

Seeing your special glance properties for iso and floppy connections made me 
think
of something. They seem useful, but it would be nice if they were done in a way that 
would
work in any hypervisor.

I "think" we have sufficient detail in block_device_mapping to do essentially 
the
same thing, and it would be awesome to verify and add some niceties to the nova
cli, something like:


Thanks Vish!

I was also thinking about bringing this to any hypervisor.

About the block storage option, we would have an issue with Hyper-V. We need 
local (or SMB accessible) ISOs and floppy images to be assigned to the 
instances.
IMO this isn’t a bad thing: ISOs would be potentially shared among instances in 
read only mode and it’s easy to pull them from Glance during live migration.
Floppy images on the other hand are insignificant quantity. :-)

I'm not sure exactly what the problem is here. Pulling them down from glance 
before boot seems like the way that other hypervisors would implement it as 
well. The block device mapping code was extended in havana so it doesn't just 
support volume/iscsi connections. You can specify where the item should be 
attached in addition to the source of the device (glance, cinder, etc.).

I have to say that I still used it for iSCSI volumes only. If Glance is 
supported, not only all my objections below are irrelevant, but it's also a 
great solution! Adding it right away!




nova boot --flavor 1 --iso CentOS-64-ks --floppy Kickstart (defaults to blank 
image)


This would be great! For the reasons above, I’d go anyway with some simple 
extensions to pass in the ISO and floppy refs in the instance data instead of 
block device mappings.

I think my above comment handles this. Block device mapping was designed to 
support floppies and ISOs as well


There’s also one additional scenario that would greatly benefit from those 
options: our Windows Heat templates (think about SQL Server, Exchange, etc) 
need to access the product media for installation and, due to
license constraints, the tenant needs to provide the media; we cannot simply 
download them. So far we solved it by attaching a volume containing the install 
media, but it’s of course a very unnatural process for the user.

Couldn't this also be handled by above i.e. upload the install media to glance 
as an iso instead of a volume?

Vish


Alessandro


Clearly that requires a few things:

1) vix block device mapping support
2) cli ux improvements
3) testing!

Vish

On Dec 1, 2013, at 2:10 PM, Alessandro Pilotti <apilo...@cloudbasesolutions.com> wrote:

Hi all,

At Cloudbase we are heavily using VMware Workstation and Fusion for 
development, demos and PoCs, so we thought: why not replace our automation 
scripts with a fully functional Nova driver and use OpenStack APIs and Heat for 
the automation? :-)

Here’s the repo for this Nova driver project: 
https://github.com/cloudbase/nova-vix-driver/

The driver is already working well and supports all the basic features you’d 
expect from a Nova driver, including a VNC console accessible via Horizon. 
Please refer to the project README for additional details.
The usage of CoW images (linked clones) makes deploying images particularly 
fast, which is a good thing when you develop or run demos. Heat or Puppet, 
Chef, etc make the whole process particularly sweet of course.


The main idea was to create something to be used in place of solutions like 
Vagrant, with a few specific requirements:

1) Full support for nested virtualization (VMX and EPT).

For the time being the VMware products are the only ones supporting Hyper-V and 
KVM as guests, so this became a mandatory path, at least until EPT support will 
be fully functional in KVM.
This rules out Vagrant as an option. Their VMware support is not free and 
beside that they don’t support nested virtualization (yet, AFAIK).

Other workstation virtualization options, including VirtualBox and Hyper-V are 
currently ruled out due to the lack of support for this feature as well.
Beside that Hyper-V and VMware Workstation VMs can work side by side on Windows 
8.1, all you need is to fire up two nova-compute instances.

2) Work on Windows, Linux and OS X workstations

Here’s a snapshot of Nova compute  running on OS X and showing Novnc connected 
to a Fusion VM console:

https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png

3) Use OpenStack APIs

We wanted to have a single common framework for automation and bring OpenStack 
on the workstations.
Beside that, dogfooding is a good thing. :-)

4) Offer a free alternative for community contributions

VMware Player is fair enough, even with the “non commercial use” limits, etc.

Communication

Re: [openstack-dev] [Murano] [Neutron] [Fuel] Implementing Elastic Applications

2013-12-02 Thread David Easter
Support for deploying the Neutron LBaaS is on the roadmap for the Fuel
project, yes - but most likely not before Icehouse at current velocity.

- David J. Easter
  Product Line Manager,  Mirantis

-- Forwarded message --
From: Serg Melikyan 
Date: Wed, Nov 27, 2013 at 6:52 PM
Subject: Re: [openstack-dev] [Murano] [Neutron] [Fuel] Implementing Elastic
Applications
To: "OpenStack Development Mailing List (not for usage questions)"
, fuel-...@lists.launchpad.net


I have added the Neutron and Fuel teams to this e-mail thread. Guys, what are your
thoughts on the subject?

We see three possible ways to implement Elastic Applications in Murano:
using Heat & Neutron LBaaS, Heat & AWS::ElasticLoadBalancing::LoadBalancer
resource and own solution using HAProxy directly (see more details in the
mail-thread).

Previously we were using Heat and the AWS::ElasticLoadBalancing::LoadBalancer
resource, but this approach has certain limitations.

Does the Fuel team have plans to implement support for Neutron LBaaS any time
soon? 

Guys from Heat suggest Neutron LBaaS as the best long-term solution. Neutron
team - what are your thoughts?


On Fri, Nov 15, 2013 at 6:53 PM, Thomas Hervé  wrote:
> On Fri, Nov 15, 2013 at 12:56 PM, Serg Melikyan 
> wrote:
>> > Murano has several applications which support scaling via load-balancing,
>> > these applications (Internet Information Services Web Farm, ASP.NET
>> 
>> > Application Web Farm) currently are based on Heat, particularly on resource
>> > called AWS::ElasticLoadBalancing::LoadBalancer, that currently does not
>> > support specification of any network related parameters.
>> >
>> > Inability to specify network related params leads to incorrect behavior
>> > during deployment in tenants with advanced Quantum deployment
>> configuration,
>> > like Per-tenant Routers with Private Networks and this makes deployment of
>> > our * Web Farm applications fail.
>> >
>> > We need to resolve issues with our * Web Farm, and make these applications
>> to
>> > be the reference implementation for elastic applications in Murano.
>> >
>> > This issue may be resolved in three ways: via extending configuration
>> > capabilities of AWS::ElasticLoadBalancing::LoadBalancer, using another
>> > implementation of load balancing in Heat - OS::Neutron::LoadBalancer or via
>> > implementing own load balancing application (that going to balance other
>> > apllications), for example based on HAProxy (as all previous ones).
>> >
>> > Please, respond with your thoughts on the question: "Which implementation
>> we
>> > should use to resolve issue with our Web Farm applications and why?". Below
>> > you can find more details about each of the options.
>> >
>> > AWS::ElasticLoadBalancing::LoadBalancer
>> >
>> > AWS::ElasticLoadBalancing::LoadBalancer is Amazon Cloud Formation
>> compatible
>> > resource that implements load balancer via hard-coded nested stack that
>> > deploys and configures HAProxy. This resource requires specific image with
>> > CFN Tools and specific name F17-x86_64-cfntools available in Glance. It's
>> > look like we miss implementation of only one property in this resource -
>> > Subnets.
>> >
>> > OS::Neutron::LoadBalancer
>> >
>> > OS::Neutron::LoadBalancer is another Heat resource that implements load
>> > balancer. This resource is based on Load Balancer as a Service feature in
>> > Neutron. OS::Neutron::LoadBalancer is much more configurable and
>> > sophisticated, but the underlying implementation makes usage of this resource
>> > quite complex.
>> > LBaaS is a set of services installed and configured as a part of Neutron.
>> > Fuel does not support LBaaS; Devstack has support for LBaaS, but LBaaS not
>> > installed by default with Neutron.
>> >
>> > Own, Based on HAProxy
>> >
>> > We may implement load-balancer as a regular application in Murano using
>> > HAProxy. This service may look like our Active Directory application with
>> > almost same user-expirience. User may create load-balancer inside of the
>> > environment and join any web-application (with any number of instances)
>> > directly to load-balancer.
>> > Load-balancer may be also implemented on Conductor workflows level, this
>> > implementation strategy not going to change user-experience (in fact we
>> > changing only underlying implementation details for our * Web Farm
>> > applications, without introducing new ones).
> 
> Hi,
> 
> I would strongly encourage using OS::Neutron::LoadBalancer. The AWS
> resource is supposed to mirror Amazon capabilities, so any extension,
> while not impossible, is frowned upon. On the other hand the Neutron
> load balancer can be extended to your need, and being able to use an
> API gives you much more flexibility. It is also in active development and
> will get more interesting features in the future.
> 
> If you're having concerns about deploying Neutron LBaaS, you should
> bring it up with the team, and I'm sure they can improve the
> situation. My limited experience with it in devst

Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-02 Thread Maru Newby

On Dec 2, 2013, at 10:19 PM, Joe Gordon  wrote:

> 
> On Dec 2, 2013 3:39 AM, "Maru Newby"  wrote:
> >
> >
> > On Dec 2, 2013, at 2:07 AM, Anita Kuno  wrote:
> >
> > > Great initiative putting this plan together, Maru. Thanks for doing
> > > this. Thanks for volunteering to help, Salvatore (I'm thinking of asking
> > > for you to be cloned - once that becomes available.) if you add your
> > > patch urls (as you create them) to the blueprint Maru started [0] that
> > > would help to track the work.
> > >
> > > Armando, thanks for doing this work as well. Could you add the urls of
> > > the patches you reference to the exceptional-conditions blueprint?
> > >
> > > For icehouse-1 to be a realistic goal for this assessment and clean-up,
> > > patches for this would need to be up by Tuesday Dec. 3 at the latest
> > > (does 13:00 UTC sound like a reasonable target?) so that they can make
> > > it through review and check testing, gate testing and merging prior to
> > > the Thursday Dec. 5 deadline for icehouse-1. I would really like to see
> > > this, I just want the timeline to be conscious.
> >
> > My mistake, getting this done by Tuesday does not seem realistic.  
> > icehouse-2, then.
> >
> 
> With icehouse-2 being the nova-network feature freeze reevaluation point 
> (possibly lifting it) I think gating on new stacktraces by icehouse-2 is too 
> late.  Even a huge whitelist of errors is better than letting new errors in. 

No question that it needs to happen asap.  If we're talking about milestones, 
though, and icehouse-1 patches need to be in by Tuesday, I don't think 
icehouse-1 is realistic.  It will have to be early in icehouse-2.


m.

> >
> > m.
> >
> > >
> > > I would like to say talk to me tomorrow in -neutron to ensure you are
> > > getting the support you need to achieve this but I will be flying (wifi
> > > uncertain). I do hope that some additional individuals come forward to
> > > help with this.
> > >
> > > Thanks Maru, Salvatore and Armando,
> > > Anita.
> > >
> > > [0]
> > > https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
> > >
> > > On 11/30/2013 08:24 PM, Maru Newby wrote:
> > >>
> > >> On Nov 28, 2013, at 1:08 AM, Salvatore Orlando  
> > >> wrote:
> > >>
> > >>> Thanks Maru,
> > >>>
> > >>> This is something my team had on the backlog for a while.
> > >>> I will push some patches to contribute towards this effort in the next 
> > >>> few days.
> > >>>
> > >>> Let me know if you're already thinking of targeting the completion of 
> > >>> this job for a specific deadline.
> > >>
> > >> I'm thinking this could be a task for those not involved in fixing race 
> > >> conditions, and be done in parallel.  I guess that would be for 
> > >> icehouse-1 then?  My hope would be that the early signs of race 
> > >> conditions would then be caught earlier.
> > >>
> > >>
> > >> m.
> > >>
> > >>>
> > >>> Salvatore
> > >>>
> > >>>
> > >>> On 27 November 2013 17:50, Maru Newby  wrote:
> > >>> Just a heads up, the console output for neutron gate jobs is about to 
> > >>> get a lot noisier.  Any log output that contains 'ERROR' is going to be 
> > >>> dumped into the console output so that we can identify and eliminate 
> > >>> unnecessary error logging.  Once we've cleaned things up, the presence 
> > >>> of unexpected (non-whitelisted) error output can be used to fail jobs, 
> > >>> as per the following Tempest blueprint:
> > >>>
> > >>> https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
> > >>>
> > >>> I've filed a related Neutron blueprint for eliminating the unnecessary 
> > >>> error logging:
> > >>>
> > >>> https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
> > >>>
> > >>> I'm looking for volunteers to help with this effort, please reply in 
> > >>> this thread if you're willing to assist.
> > >>>
> > >>> Thanks,
> > >>>
> > >>>
> > >>> Maru
> > >>> ___
> > >>> OpenStack-dev mailing list
> > >>> OpenStack-dev@lists.openstack.org
> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>>
> > >>> ___
> > >>> OpenStack-dev mailing list
> > >>> OpenStack-dev@lists.openstack.org
> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>
> > >>
> > >> ___
> > >> OpenStack-dev mailing list
> > >> OpenStack-dev@lists.openstack.org
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Vishvananda Ishaya

On Dec 2, 2013, at 12:38 PM, Russell Bryant  wrote:

> On 12/02/2013 03:31 PM, Vishvananda Ishaya wrote:
>> 
>> On Dec 2, 2013, at 9:12 AM, Russell Bryant 
>> wrote:
>> 
>>> On 12/02/2013 10:59 AM, Gary Kotton wrote:
 I think that this is certainly different. It is something that
 we we want and need a user facing API. Examples: - aggregates -
 per host scheduling - instance groups
 
 Etc.
 
 That is just taking the nova options into account and not the
 other modules. How doul one configure that we would like to
 have storage proximity for a VM? This is where things start to
 get very interesting and enable the cross service scheduling
 (which is the goal of this no?).
>>> 
>>> An explicit part of this plan is that all of the things you're
>>> talking about are *not* in scope until the forklift is complete
>>> and the new thing is a functional replacement for the existing
>>> nova-scheduler.  We want to get the project established and going
>>> so that it is a place where this work can take place.  We do
>>> *not* want to slow down the work of getting the project
>>> established by making these things a prerequisite.
>> 
>> 
>> I'm all for the forklift approach since I don't think we will make
>> any progress if we stall going back into REST api design.
>> 
>> I'm going to reopen a can of worms, though. I think the most
>> difficult part of the forklift will be moving stuff out of the
>> existing databases into a new database. We had to deal with this in
>> cinder and having a db export and import strategy is annoying to
>> say the least. Managing the db-related code was the majority of the
>> work during the cinder split.
>> 
>> I think this forklift will be way easier if we merge the
>> no-db-scheduler[1] patches first before separating the scheduler
>> out into its own project:
>> 
>> https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
>> 
>> I think the effort to get this finished is smaller than the effort
>> to write db migrations and syncing scripts for the new project.
> 
> Agreed that this should make it easier.
> 
> My thought was that the split out scheduler could just import nova's
> db API and use it against nova's database directly until this work
> gets done.  If the forklift goes that route instead of any sort of
> work to migrate data from one db to another, they could really happen
> in any order, I think.


That is a good point. If the forklift is still talking to nova's db
then it would be significantly less duplication and I could see doing
it in the reverse order. The no-db-stuff should be done before trying
to implement cinder support so we don't have the messiness of the
scheduler talking to multiple db apis.

Vish

> 
> -- 
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Vishvananda Ishaya

On Dec 2, 2013, at 11:40 AM, Alessandro Pilotti 
 wrote:

> 
> On 02 Dec 2013, at 20:54 , Vishvananda Ishaya  wrote:
> 
>> Very cool stuff!
>> 
>> Seeing your special glance properties for iso and floppy connections made me 
>> think
>> of something. They seem useful, but it would be nice if they were done in a way 
>> that would
>> work in any hypervisor.
>> 
>> I "think" we have sufficient detail in block_device_mapping to do 
>> essentially the
>> same thing, and it would be awesome to verify and add some niceties to the 
>> nova
>> cli, something like:
>> 
> 
> Thanks Vish!
> 
> I was also thinking about bringing this to any hypervisor.
> 
> About the block storage option, we would have an issue with Hyper-V. We need 
> local (or SMB accessible) ISOs and floppy images to be assigned to the 
> instances.
> IMO this isn’t a bad thing: ISOs would be potentially shared among instances 
> in read only mode and it’s easy to pull them from Glance during live 
> migration. 
> Floppy images on the other hand are insignificant quantity. :-)

I'm not sure exactly what the problem is here. Pulling them down from glance 
before boot seems like the way that other hypervisors would implement it as 
well. The block device mapping code was extended in havana so it doesn't just 
support volume/iscsi connections. You can specify where the item should be 
attached in addition to the source of the device (glance, cinder, etc.).
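
For what it's worth, the equivalent request through python-novaclient should
look roughly like the sketch below - the block-device-mapping-v2 field names
are the ones I remember, and all credentials and IDs are placeholders, so
treat it as an assumption to verify rather than a recipe:

    from novaclient.v1_1 import client

    # Placeholder credentials and IDs, for illustration only.
    nova = client.Client('demo', 'secret', 'demo', 'http://keystone:5000/v2.0')
    ISO_IMAGE_ID = 'ISO-IMAGE-UUID'    # ISO previously uploaded to Glance
    BASE_IMAGE_ID = 'ROOT-IMAGE-UUID'  # normal root-disk image

    bdm = [{
        'source_type': 'image',         # take the ISO from Glance
        'destination_type': 'local',    # materialize it on the hypervisor
        'device_type': 'cdrom',
        'uuid': ISO_IMAGE_ID,
        'boot_index': 1,
        'delete_on_termination': True,
    }]

    nova.servers.create(name='demo-vm', image=BASE_IMAGE_ID, flavor='1',
                        block_device_mapping_v2=bdm)
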

> 
> 
>> nova boot --flavor 1 --iso CentOS-64-ks --floppy Kickstart (defaults to 
>> blank image)
>> 
> 
> This would be great! For the reasons above, I’d go anyway with some simple 
> extensions to pass in the ISO and floppy refs in the instance data instead of 
> block device mappings.

I think my above comment handles this. Block device mapping was designed to 
support floppies and ISOs as well

> 
> There’s also one additional scenario that would greatly benefit from those 
> options: our Windows Heat templates (think about SQL Server, Exchange, etc) 
> need to access the product media for installation and, due to 
> license constraints, the tenant needs to provide the media; we cannot simply 
> download them. So far we solved it by attaching a volume containing the 
> install media, but it’s of course a very unnatural process for the user.

Couldn't this also be handled by above i.e. upload the install media to glance 
as an iso instead of a volume?

Vish

> 
> Alessandro
> 
> 
>> Clearly that requires a few things:
>> 
>> 1) vix block device mapping support
>> 2) cli ux improvements
>> 3) testing!
>> 
>> Vish
>> 
>> On Dec 1, 2013, at 2:10 PM, Alessandro Pilotti 
>>  wrote:
>> 
>>> Hi all,
>>> 
>>> At Cloudbase we are heavily using VMware Workstation and Fusion for 
>>> development, demos and PoCs, so we thought: why not replace our 
>>> automation scripts with a fully functional Nova driver and use OpenStack 
>>> APIs and Heat for the automation? :-)
>>> 
>>> Here’s the repo for this Nova driver project: 
>>> https://github.com/cloudbase/nova-vix-driver/
>>> 
>>> The driver is already working well and supports all the basic features 
>>> you’d expect from a Nova driver, including a VNC console accessible via 
>>> Horizon. Please refer to the project README for additional details.
>>> The usage of CoW images (linked clones) makes deploying images particularly 
>>> fast, which is a good thing when you develop or run demos. Heat or Puppet, 
>>> Chef, etc make the whole process particularly sweet of course. 
>>> 
>>> 
>>> The main idea was to create something to be used in place of solutions like 
>>> Vagrant, with a few specific requirements:
>>> 
>>> 1) Full support for nested virtualization (VMX and EPT).
>>> 
>>> For the time being the VMware products are the only ones supporting Hyper-V 
>>> and KVM as guests, so this became a mandatory path, at least until EPT 
>>> support will be fully functional in KVM.
>>> This rules out Vagrant as an option. Their VMware support is not free and 
>>> beside that they don’t support nested virtualization (yet, AFAIK). 
>>> 
>>> Other workstation virtualization options, including VirtualBox and Hyper-V 
>>> are currently ruled out due to the lack of support for this feature as well.
>>> Beside that Hyper-V and VMware Workstation VMs can work side by side on 
>>> Windows 8.1, all you need is to fire up two nova-compute instances.
>>> 
>>> 2) Work on Windows, Linux and OS X workstations
>>> 
>>> Here’s a snapshot of Nova compute  running on OS X and showing Novnc 
>>> connected to a Fusion VM console:
>>> 
>>> https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png
>>> 
>>> 3) Use OpenStack APIs
>>> 
>>> We wanted to have a single common framework for automation and bring 
>>> OpenStack on the workstations. 
>>> Beside that, dogfooding is a good thing. :-) 
>>> 
>>> 4) Offer a free alternative for community contributions
>>>   
>>> VMware Player is fair enough, even with the “non commercial use” limits, 
>>> etc.
>>> 
>>> Communication with VMware compon

Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim
>
>The TC is currently working on formalizing requirements for new programs
>and projects [3].  I figured I would give them a try against this
>application.
>
>First, I'm assuming that the application is for a new program that
>contains the new project.  The application doesn't make that bit clear,
>though.

In looking through the documentation for incubating [1], there doesn't
seem to be any mention of also having to be associated with a program. Is
it a requirement that all projects belong to a program at this point? If
so, I guess we would be asking for a new program as I think that
encryption and key management is a separate concern from the rest of the
programs listed here [2].

[1] https://wiki.openstack.org/wiki/Governance/NewProjects
[2] https://wiki.openstack.org/wiki/Programs


>> Teams in OpenStack can be created as-needed and grow organically. As
>>the team
>> work matures, some technical efforts will be recognized as essential to
>>the
>> completion of the OpenStack project mission. By becoming an official
>>Program,
>> they place themselves under the authority of the OpenStack Technical
>> Committee. In return, their contributors get to vote in the Technical
>> Committee election, and they get some space and time to discuss future
>> development at our Design Summits. When considering new programs, the
>>TC will
>> look into a number of criteria, including (but not limited to):
>
>> * Scope
>> ** Team must have a specific scope, separated from others teams scope
>
>I would like to see a statement of scope for Barbican on the
>application.  It should specifically cover how the scope differs from
>other programs, in particular the Identity program.

Happy to add this, I'll put it in the wiki today.

>> ** Team must have a mission statement
>
>This is missing.

We do have a mission statement on the Barbican/Incubation page here:
https://wiki.openstack.org/wiki/Barbican/Incubation


>> ** Team should have a lead, elected by the team contributors
>
>Was the PTL elected?  I can't seem to find record of that.  If not, I
>would like to see an election held for the PTL.

We're happy to do an election. Is this something we can do as part of the
next election cycle? Or something that needs to be done out of band?


>
>> ** Team should have a clear way to grant ATC (voting) status to its
>>significant contributors
>
>Related to the above

I thought that the process of becoming an ATC was pretty well set [3]. Is
there something specific that Barbican would have to do that is different from
the ATC rules in the Tech Committee documentation?

[3] 
https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee


>
>> * Deliverables
>> ** Team should have a number of clear deliverables
>
>barbican and python-barbicanclient, I presume.  It would be nice to have
>this clearly defined on the application.

I will add a deliverables section, but you are correct.


>Now, for the project specific requirements:
>
>>  Projects wishing to be included in the integrated release of OpenStack
>>must
>>  first apply for incubation status. During their incubation period,
>>they will
>>  be able to access new resources and tap into other OpenStack programs
>>(in
>>  particular the Documentation, QA, Infrastructure and Release
>>management teams)
>>  to learn about the OpenStack processes and get assistance in their
>>integration
>>  efforts.
>>  
>>  The TC will evaluate the project scope and its complementarity with
>>existing
>>  integrated projects and other official programs, look into the project
>>  technical choices, and check a number of requirements, including (but
>>not
>>  limited to):
>>  
>>  * Scope
>>  ** Project must have a clear and defined scope
>
>This is missing

As mentioned above, I'll add this to the wiki today.

>
>>  ** Project should not inadvertently duplicate functionality present in
>>other
>> OpenStack projects. If they do, they should have a clear plan and
>>timeframe
>> to prevent long-term scope duplication.
>>  ** Project should leverage existing functionality in other OpenStack
>>projects
>> as much as possible
>
>I'm going to hold off on diving into this too far until the scope is
>clarified.
>
>>  * Maturity
>>  ** Project should have a diverse and active team of contributors
>
>Using a mailmap file [4]:
>
>$ git shortlog -s -e | sort -n -r
>   172 John Wood 
>   150 jfwood 
>65 Douglas Mendizabal 
>39 Jarret Raim 
>17 Malini K. Bhandaru 
>10 Paul Kehrer 
>10 Jenkins 
> 8 jqxin2006 
> 7 Arash Ghoreyshi 
> 5 Chad Lung 
> 3 Dolph Mathews 
> 2 John Vrbanac 
> 1 Steven Gonzales 
> 1 Russell Bryant 
> 1 Bryan D. Payne 
>
>It appears to be an effort done by a group, and not an individual.  Most
>commits by far are from Rackspace, but there is at least one non-trivial
>contributor (Malini) from another company (Intel), so I think this is OK.

There has been some interest from some quarters (RedHat, HP and others) in
addition

Re: [openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Kyle Mestery (kmestery)
Yes, this is all great Salvatore and Armando! Thank you for all of this work
and the explanation behind it all.

Kyle

On Dec 2, 2013, at 2:24 PM, Eugene Nikanorov  wrote:

> Salvatore and Armando, thanks for your great work and detailed explanation!
> 
> Eugene.
> 
> 
> On Mon, Dec 2, 2013 at 11:48 PM, Joe Gordon  wrote:
> 
> On Dec 2, 2013 9:04 PM, "Salvatore Orlando"  wrote:
> >
> > Hi,
> >
> > As you might have noticed, there has been some progress on parallel tests 
> > for neutron.
> > In a nutshell:
> > * Armando fixed the issue with IP address exhaustion on the public network 
> > [1]
> > * Salvatore has now a patch which has a 50% success rate (the last failures 
> > are because of me playing with it) [2]
> > * Salvatore is looking at putting back on track full isolation [3]
> > * All the bugs affecting parallel tests can be queried here [10]
> > * This blueprint tracks progress made towards enabling parallel testing [11]
> >
> > -
> > The long story is as follows:
> > Parallel testing basically is not working because parallelism means higher 
> > contention for public IP addresses. This was made worse by the fact that 
> > some tests created a router with a gateway set but never deleted it. As a 
> > result, there were even less addresses in the public range.
> > [1] was already merged and with [4] we shall make the public network for 
> > neutron a /24 (the full tempest suite is still showing a lot of IP 
> > exhaustion errors).
> >
> > However, this was just one part of the issue. The biggest part actually 
> > lied with the OVS agent and its interactions with the ML2 plugin. A few 
> > patches ([5], [6], [7]) were already pushed to reduce the number of 
> > notifications sent from the plugin to the agent. However, the agent is 
> > organised in a way such that a notification is immediately acted upon thus 
> > preempting the main agent loop, which is the one responsible for wiring 
> > ports into networks. Considering the high level of notifications currently 
> > sent from the server, this becomes particularly wasteful if one consider 
> > that security membership updates for ports trigger global 
> > iptables-save/restore commands which are often executed in rapid 
> > succession, thus resulting in long delays for wiring VIFs to the 
> > appropriate network.
> > With the patch [2] we are refactoring the agent to make it more efficient. 
> > This is not production code, but once we'll get close to 100% pass for 
> > parallel testing this patch will be split in several patches, properly 
> > structured, and hopefully easy to review.
> > It is worth noting there is still work to do: in some cases the loop still 
> > takes too long, and ovs commands have been observed taking even 10 
> > seconds to complete. To this aim, it is worth considering use of async 
> > processes introduced in [8] as well as leveraging ovsdb monitoring [9] for 
> > limiting queries to ovs database.
> > We're still unable to explain some failures where the network appears to be 
> > correctly wired (floating IP, router port, dhcp port, and VIF port), but 
> > the SSH connection fails. We're hoping to reproduce this failure pattern 
> > locally.
> >
> > Finally, the tempest patch for full tempest isolation should be made usable 
> > soon. Having another experimental job for it is something worth considering 
> > as for some reason it is not always easy reproducing the same failure modes 
> > exhibited on the gate.
> >
> > Regards,
> > Salvatore
> >
> 
> Awesome work, thanks for the update.
> 
> 
> > [1] https://review.openstack.org/#/c/58054/
> > [2] https://review.openstack.org/#/c/57420/
> > [3] https://review.openstack.org/#/c/53459/
> > [4] https://review.openstack.org/#/c/58284/
> > [5] https://review.openstack.org/#/c/58860/
> > [6] https://review.openstack.org/#/c/58597/
> > [7] https://review.openstack.org/#/c/58415/
> > [8] https://review.openstack.org/#/c/45676/
> > [9] https://bugs.launchpad.net/neutron/+bug/1177973
> > [10] 
> > https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel&field.tags_combinator=ANY
> > [11] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Chris Friesen

On 12/02/2013 02:31 PM, Vishvananda Ishaya wrote:


I'm going to reopen a can of worms, though. I think the most difficult part of
the forklift will be moving stuff out of the existing databases into
a new database.


Do we really need to move it to a new database for the forklift?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][TripleO] Nested resources

2013-12-02 Thread Fox, Kevin M
Hi all,

I just want to run a crazy idea up the flag pole. TripleO has the concept of an 
under and over cloud. In starting to experiment with Docker, I see a pattern 
start to emerge.

 * As a User, I may want to allocate a BareMetal node so that it is entirely 
mine. I may want to run multiple VM's on it to reduce my own cost. Now I have 
to manage the BareMetal nodes myself or nest OpenStack into them.
 * As a User, I may want to allocate a VM. I then want to run multiple Docker 
containers on it to use it more efficiently. Now I have to manage the VM's 
myself or nest OpenStack into them.
 * As a User, I may want to allocate a BareMetal node so that it is entirely 
mine. I then want to run multiple Docker containers on it to use it more 
efficiently. Now I have to manage the BareMetal nodes myself or nest OpenStack 
into them.

I think this can then be generalized to:
As a User, I would like to ask for resources of one type (One AZ?), and be able 
to delegate resources back to Nova so that I can use Nova to subdivide and give 
me access to my resources as a different type. (As a different AZ?)

I think this could potentially cover some of the TripleO stuff without needing 
an over/under cloud. For that use case, all the BareMetal nodes could be added 
to Nova as such, allocated by the "services" tenant as running a nested VM 
image type resource stack, and then made available to all tenants. Sys admins 
then could dynamically shift resources from VM providing nodes to BareMetal 
Nodes and back as needed.

This allows a user to allocate some raw resources as a group, then schedule 
higher level services to run only in that group, all with the existing api.

Just how crazy an idea is this?

Thanks,
Kevin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-02 Thread Joshua Harlow
Thanks for writing this up, looking forward to seeing this happen so that
oslo.messaging can be used outside of the core openstack projects (and be
used in libraries that do not want to force an oslo.cfg model onto users of
said libraries).
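
To make the pain point concrete, here is roughly what transport setup looks
like today - a sketch only, not a claim about what the blueprint's final API
will be:

    from oslo.config import cfg
    from oslo import messaging

    # Today: transport configuration is read from the process-wide cfg.CONF,
    # which is awkward for a library that doesn't want to own global config.
    transport = messaging.get_transport(cfg.CONF)

    # get_transport already accepts an explicit URL, which hints at the
    # direction the decoupling could take (a conf object still has to be
    # passed today):
    transport = messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')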

Any idea of a timeline as to when this would be reflected in
https://github.com/openstack/oslo.messaging/ (even rough idea is fine).

-Josh

On 12/2/13 7:45 AM, "Julien Danjou"  wrote:

>On Mon, Nov 18 2013, Julien Danjou wrote:
>
>>   https://blueprints.launchpad.net/oslo/+spec/messaging-decouple-cfg
>
>So I've gone through the code and started to write a plan on how I'd do
>things:
>
>  https://wiki.openstack.org/wiki/Oslo/blueprints/messaging-decouple-cfg
>
>I don't think I missed too much, though I didn't run into all tiny
>details.
>
>Please feel free to tell me if I miss anything obvious, otherwise I'll
>try to start submitting patches, one at a time, to get this into shape
>step by step.
>
>-- 
>Julien Danjou
>// Free Software hacker / independent consultant
>// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 03:31 PM, Vishvananda Ishaya wrote:
> 
> On Dec 2, 2013, at 9:12 AM, Russell Bryant 
> wrote:
> 
>> On 12/02/2013 10:59 AM, Gary Kotton wrote:
>>> I think that this is certainly different. It is something that
>>> we we want and need a user facing API. Examples: - aggregates -
>>> per host scheduling - instance groups
>>> 
>>> Etc.
>>> 
>>> That is just taking the nova options into account and not the
>>> other modules. How doul one configure that we would like to
>>> have storage proximity for a VM? This is where things start to
>>> get very interesting and enable the cross service scheduling
>>> (which is the goal of this no?).
>> 
>> An explicit part of this plan is that all of the things you're
>> talking about are *not* in scope until the forklift is complete
>> and the new thing is a functional replacement for the existing
>> nova-scheduler.  We want to get the project established and going
>> so that it is a place where this work can take place.  We do
>> *not* want to slow down the work of getting the project
>> established by making these things a prerequisite.
> 
> 
> I'm all for the forklift approach since I don't think we will make
> any progress if we stall going back into REST api design.
> 
> I'm going to reopen a can of worms, though. I think the most
> difficult part of the forklift will be moving stuff out of the
> existing databases into a new database. We had to deal with this in
> cinder and having a db export and import strategy is annoying to
> say the least. Managing the db-related code was the majority of the
> work during the cinder split.
> 
> I think this forklift will be way easier if we merge the
> no-db-scheduler[1] patches first before separating the scheduler
> out into its own project:
> 
> https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
> 
> I think the effort to get this finished is smaller than the effort
> to write db migrations and syncing scripts for the new project.

Agreed that this should make it easier.

My thought was that the split out scheduler could just import nova's
db API and use it against nova's database directly until this work
gets done.  If the forklift goes that route instead of any sort of
work to migrate data from one db to another, they could really happen
in any order, I think.
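
For illustration, a minimal sketch of what that interim approach could look
like from the new project's side (purely illustrative, and assuming nova's db
API keeps calls such as service_get_all_by_topic()):

# Reuse nova's existing data access layer directly until the
# no-db-scheduler work removes the need for it.
from nova import db

def hosts_up(context, topic):
    services = db.service_get_all_by_topic(context, topic)
    return [s['host'] for s in services if not s['disabled']]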

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Vishvananda Ishaya

On Dec 2, 2013, at 9:12 AM, Russell Bryant  wrote:

> On 12/02/2013 10:59 AM, Gary Kotton wrote:
>> I think that this is certainly different. It is something that we we want
>> and need a user facing API.
>> Examples:
>> - aggregates
>> - per host scheduling
>> - instance groups
>> 
>> Etc.
>> 
>> That is just taking the nova options into account and not the other
>> modules. How would one configure that we would like to have storage
>> proximity for a VM? This is where things start to get very interesting and
>> enable the cross service scheduling (which is the goal of this no?).
> 
> An explicit part of this plan is that all of the things you're talking
> about are *not* in scope until the forklift is complete and the new
> thing is a functional replacement for the existing nova-scheduler.  We
> want to get the project established and going so that it is a place
> where this work can take place.  We do *not* want to slow down the work
> of getting the project established by making these things a prerequisite.


I'm all for the forklift approach since I don't think we will make any progress
if we stall going back into REST api design.

I'm going to reopen a can of worms, though. I think the most difficult part of
the forklift will be moving stuff out of the existing databases into
a new database. We had to deal with this in cinder and having a db export
and import strategy is annoying to say the least. Managing the db-related
code was the majority of the work during the cinder split.

I think this forklift will be way easier if we merge the no-db-scheduler[1]
patches first before separating the scheduler out into its own project:

https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

I think the effort to get this finished is smaller than the effort to write
db migrations and syncing scripts for the new project.

Vish


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Eugene Nikanorov
Salvatore and Armando, thanks for your great work and detailed explanation!

Eugene.


On Mon, Dec 2, 2013 at 11:48 PM, Joe Gordon  wrote:

>
> On Dec 2, 2013 9:04 PM, "Salvatore Orlando"  wrote:
> >
> > Hi,
> >
> > As you might have noticed, there has been some progress on parallel
> tests for neutron.
> > In a nutshell:
> > * Armando fixed the issue with IP address exhaustion on the public
> network [1]
> > * Salvatore has now a patch which has a 50% success rate (the last
> failures are because of me playing with it) [2]
> > * Salvatore is looking at putting back on track full isolation [3]
> > * All the bugs affecting parallel tests can be queried here [10]
> > * This blueprint tracks progress made towards enabling parallel testing
> [11]
> >
> > -
> > The long story is as follows:
> > Parallel testing basically is not working because parallelism means
> higher contention for public IP addresses. This was made worse by the fact
> that some tests created a router with a gateway set but never deleted it.
> As a result, there were even fewer addresses in the public range.
> > [1] was already merged and with [4] we shall make the public network for
> neutron a /24 (the full tempest suite is still showing a lot of IP
> exhaustion errors).
> >
> > However, this was just one part of the issue. The biggest part actually
> lay with the OVS agent and its interactions with the ML2 plugin. A few
> patches ([5], [6], [7]) were already pushed to reduce the number of
> notifications sent from the plugin to the agent. However, the agent is
> organised in a way such that a notification is immediately acted upon thus
> preempting the main agent loop, which is the one responsible for wiring
> ports into networks. Considering the high level of notifications currently
> sent from the server, this becomes particularly wasteful if one considers
> that security membership updates for ports trigger global
> iptables-save/restore commands which are often executed in rapid
> succession, thus resulting in long delays for wiring VIFs to the
> appropriate network.
> > With the patch [2] we are refactoring the agent to make it more
> efficient. This is not production code, but once we'll get close to 100%
> pass for parallel testing this patch will be split in several patches,
> properly structured, and hopefully easy to review.
> > It is worth noting there is still work to do: in some cases the loop
> still takes too long, and ovs commands have been observed taking as long as 10
> seconds to complete. To this aim, it is worth considering use of async
> processes introduced in [8] as well as leveraging ovsdb monitoring [9] for
> limiting queries to ovs database.
> > We're still unable to explain some failures where the network appears to
> be correctly wired (floating IP, router port, dhcp port, and VIF port), but
> the SSH connection fails. We're hoping to reproduce this failure pattern
> locally.
> >
> > Finally, the tempest patch for full tempest isolation should be made
> usable soon. Having another experimental job for it is something worth
> considering as for some reason it is not always easy reproducing the same
> failure modes exhibited on the gate.
> >
> > Regards,
> > Salvatore
> >
>
> Awesome work, thanks for the update.
>
> > [1] https://review.openstack.org/#/c/58054/
> > [2] https://review.openstack.org/#/c/57420/
> > [3] https://review.openstack.org/#/c/53459/
> > [4] https://review.openstack.org/#/c/58284/
> > [5] https://review.openstack.org/#/c/58860/
> > [6] https://review.openstack.org/#/c/58597/
> > [7] https://review.openstack.org/#/c/58415/
> > [8] https://review.openstack.org/#/c/45676/
> > [9] https://bugs.launchpad.net/neutron/+bug/1177973
> > [10]
> https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel&field.tags_combinator=ANY
> > [11]
> https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with rabbitmq on HA controller failure...anyone seen this?

2013-12-02 Thread Ravi Chunduru
We do had the same problem in our deployment.  Here is the brief
description of what we saw and how we fixed it.
http://l4tol7.blogspot.com/2013/12/openstack-rabbitmq-issues.html


On Mon, Dec 2, 2013 at 10:37 AM, Vishvananda Ishaya
wrote:

>
> On Nov 29, 2013, at 9:24 PM, Chris Friesen 
> wrote:
>
> > On 11/29/2013 06:37 PM, David Koo wrote:
> >> On Nov 29, 02:22:17 PM (Friday), Chris Friesen wrote:
> >>> We're currently running Grizzly (going to Havana soon) and we're
> >>> running into an issue where if the active controller is ungracefully
> >>> killed then nova-compute on the compute node doesn't properly
> >>> connect to the new rabbitmq server on the newly-active controller
> >>> node.
> >
> >>> Interestingly, killing and restarting nova-compute on the compute
> >>> node seems to work, which implies that the retry code is doing
> >>> something less effective than the initial startup.
> >>>
> >>> Has anyone doing HA controller setups run into something similar?
> >
> > As a followup, it looks like if I wait for 9 minutes or so I see a
> message in the compute logs:
> >
> > 2013-11-30 00:02:14.756 1246 ERROR nova.openstack.common.rpc.common [-]
> Failed to consume message from queue: Socket closed
> >
> > It then reconnects to the AMQP server and everything is fine after that.
>  However, any instances that I tried to boot during those 9 minutes stay
> stuck in the "BUILD" status.
> >
> >
> >>
> >> So the rabbitmq server and the controller are on the same node?
> >
> > Yes, they are.
> >
> > > My
> >> guess is that it's related to this bug 856764 (RabbitMQ connections
> >> lack heartbeat or TCP keepalives). The gist of it is that since there
> >> are no heartbeats between the MQ and nova-compute, if the MQ goes down
> >> ungracefully then nova-compute has no way of knowing. If the MQ goes
> >> down gracefully then the MQ clients are notified and so the problem
> >> doesn't arise.
> >
> > Sounds about right.
> >
> >> We got bitten by the same bug a while ago when our controller node
> >> got hard reset without any warning!. It came down to this bug (which,
> >> unfortunately, doesn't have a fix yet). We worked around this bug by
> >> implementing our own crude fix - we wrote a simple app to periodically
> >> check if the MQ was alive (write a short message into the MQ, then
> >> read it out again). When this fails n-times in a row we restart
> >> nova-compute. Very ugly, but it worked!
> >
> > Sounds reasonable.
> >
> > I did notice a kombu heartbeat change that was submitted and then backed
> out again because it was buggy. I guess we're still waiting on the real fix?
>
> Hi Chris,
>
> This general problem comes up a lot, and one fix is to use keepalives.
> Note that more is needed if you are using multi-master rabbitmq, but for
> failover I have had great success with the following (also posted to the
> bug):
>
> When a connection to a socket is cut off completely, the receiving side
> doesn't know that the connection has dropped, so you can end up with a
> half-open connection. The general solution for this in linux is to turn on
> TCP_KEEPALIVES. Kombu will enable keepalives if the version number is high
> enough (>1.0 iirc), but rabbit needs to be specially configured to send
> keepalives on the connections that it creates.
>
> So solving the HA issue generally involves a rabbit config with a section
> like the following:
>
> [
>  {rabbit, [{tcp_listen_options, [binary,
> {packet, raw},
> {reuseaddr, true},
> {backlog, 128},
> {nodelay, true},
> {exit_on_close, false},
> {keepalive, true}]}
>   ]}
> ].
>
> Then you should also shorten the keepalive sysctl settings or it will
> still take ~2 hrs to terminate the connections:
>
> echo "5" > /proc/sys/net/ipv4/tcp_keepalive_time
> echo "5" > /proc/sys/net/ipv4/tcp_keepalive_probes
> echo "1" > /proc/sys/net/ipv4/tcp_keepalive_intvl
>
> Obviously this should be done in a sysctl config file instead of at the
> command line. Note that if you only want to shorten the rabbit keepalives
> but keep everything else as a default, you can use an LD_PRELOAD library to
> do so. For example you could use:
>
> https://github.com/meebey/force_bind/blob/master/README
>
> Vish
>
> >
> > Chris
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Joe Gordon
On Dec 2, 2013 9:04 PM, "Salvatore Orlando"  wrote:
>
> Hi,
>
> As you might have noticed, there has been some progress on parallel tests
for neutron.
> In a nutshell:
> * Armando fixed the issue with IP address exhaustion on the public
network [1]
> * Salvatore has now a patch which has a 50% success rate (the last
failures are because of me playing with it) [2]
> * Salvatore is looking at putting back on track full isolation [3]
> * All the bugs affecting parallel tests can be queried here [10]
> * This blueprint tracks progress made towards enabling parallel testing
[11]
>
> -
> The long story is as follows:
> Parallel testing basically is not working because parallelism means
higher contention for public IP addresses. This was made worse by the fact
that some tests created a router with a gateway set but never deleted it.
As a result, there were even fewer addresses in the public range.
> [1] was already merged and with [4] we shall make the public network for
neutron a /24 (the full tempest suite is still showing a lot of IP
exhaustion errors).
>
> However, this was just one part of the issue. The biggest part actually
lay with the OVS agent and its interactions with the ML2 plugin. A few
patches ([5], [6], [7]) were already pushed to reduce the number of
notifications sent from the plugin to the agent. However, the agent is
organised in a way such that a notification is immediately acted upon thus
preempting the main agent loop, which is the one responsible for wiring
ports into networks. Considering the high level of notifications currently
sent from the server, this becomes particularly wasteful if one considers
that security membership updates for ports trigger global
iptables-save/restore commands which are often executed in rapid
succession, thus resulting in long delays for wiring VIFs to the
appropriate network.
> With the patch [2] we are refactoring the agent to make it more
efficient. This is not production code, but once we'll get close to 100%
pass for parallel testing this patch will be split in several patches,
properly structured, and hopefully easy to review.
> It is worth noting there is still work to do: in some cases the loop
still takes too long, and ovs commands have been observed taking as long as 10
seconds to complete. To this aim, it is worth considering use of async
processes introduced in [8] as well as leveraging ovsdb monitoring [9] for
limiting queries to ovs database.
> We're still unable to explain some failures where the network appears to
be correctly wired (floating IP, router port, dhcp port, and VIF port), but
the SSH connection fails. We're hoping to reproduce this failure pattern
locally.
>
> Finally, the tempest patch for full tempest isolation should be made
usable soon. Having another experimental job for it is something worth
considering as for some reason it is not always easy reproducing the same
failure modes exhibited on the gate.
>
> Regards,
> Salvatore
>

Awesome work, thanks for the update.

> [1] https://review.openstack.org/#/c/58054/
> [2] https://review.openstack.org/#/c/57420/
> [3] https://review.openstack.org/#/c/53459/
> [4] https://review.openstack.org/#/c/58284/
> [5] https://review.openstack.org/#/c/58860/
> [6] https://review.openstack.org/#/c/58597/
> [7] https://review.openstack.org/#/c/58415/
> [8] https://review.openstack.org/#/c/45676/
> [9] https://bugs.launchpad.net/neutron/+bug/1177973
> [10]
https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel&field.tags_combinator=ANY
> [11]
https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Alessandro Pilotti

On 02 Dec 2013, at 20:54 , Vishvananda Ishaya 
mailto:vishvana...@gmail.com>> wrote:

Very cool stuff!

Seeing your special glance properties for iso and floppy connections made me 
think
of something. They seem fine, but it would be nice if they were done in a way that
would
work in any hypervisor.

I "think" we have sufficient detail in block_device_mapping to do essentially 
the
same thing, and it would be awesome to verify and add some niceties to the nova
cli, something like:


Thanks Vish!

I was also thinking about bringing this to any hypervisor.

About the block storage option, we would have an issue with Hyper-V. We need 
local (or SMB accessible) ISOs and floppy images to be assigned to the 
instances.
IMO this isn’t a bad thing: ISOs would be potentially shared among instances in 
read only mode and it’s easy to pull them from Glance during live migration.
Floppy images, on the other hand, are an insignificant quantity. :-)


nova boot --flavor 1 --iso CentOS-64-ks --floppy Kickstart (defaults to blank 
image)


This would be great! For the reasons above, I'd still go with some simple
extensions to pass in the ISO and floppy refs in the instance data instead of 
block device mappings.

There’s also one additional scenario that would greatly benefit from those 
options: our Windows Heat templates (think about SQL Server, Exchange, etc) 
need to access the product media for installation and due to
license constraints the tenant needs to provide the media; we cannot simply
download them. So far we solved it by attaching a volume containing the install 
media, but it’s of course a very unnatural process for the user.

Alessandro


Clearly that requires a few things:

1) vix block device mapping support
2) cli ux improvements
3) testing!

Vish

On Dec 1, 2013, at 2:10 PM, Alessandro Pilotti 
mailto:apilo...@cloudbasesolutions.com>> wrote:

Hi all,

At Cloudbase we are heavily using VMware Workstation and Fusion for 
development, demos and PoCs, so we thought: why not replace our automation
scripts with a fully functional Nova driver and use OpenStack APIs and Heat for 
the automation? :-)

Here’s the repo for this Nova driver project: 
https://github.com/cloudbase/nova-vix-driver/

The driver is already working well and supports all the basic features you’d 
expect from a Nova driver, including a VNC console accessible via Horizon. 
Please refer to the project README for additional details.
The usage of CoW images (linked clones) makes deploying images particularly 
fast, which is a good thing when you develop or run demos. Heat or Puppet, 
Chef, etc make the whole process particularly sweet of course.


The main idea was to create something to be used in place of solutions like 
Vagrant, with a few specific requirements:

1) Full support for nested virtualization (VMX and EPT).

For the time being the VMware products are the only ones supporting Hyper-V and 
KVM as guests, so this became a mandatory path, at least until EPT support will 
be fully functional in KVM.
This rules out Vagrant as an option. Their VMware support is not free and 
besides that they don't support nested virtualization (yet, AFAIK).

Other workstation virtualization options, including VirtualBox and Hyper-V are 
currently ruled out due to the lack of support for this feature as well.
Besides that, Hyper-V and VMware Workstation VMs can work side by side on Windows
8.1, all you need is to fire up two nova-compute instances.

2) Work on Windows, Linux and OS X workstations

Here’s a snapshot of Nova compute  running on OS X and showing Novnc connected 
to a Fusion VM console:

https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png

3) Use OpenStack APIs

We wanted to have a single common framework for automation and bring OpenStack 
on the workstations.
Besides that, dogfooding is a good thing. :-)

4) Offer a free alternative for community contributions

VMware Player is fair enough, even with the “non commercial use” limits, etc.

Communication with VMware components is based on the freely available Vix SDK 
libs, using ctypes to call the C APIs. The project provides a library to easily 
interact with the VMs, in case it should be needed, e.g.:

from vix import vixutils
with vixutils.VixConnection() as conn:
    with conn.open_vm(vmx_path) as vm:
        vm.power_on()

We thought about using libvirt, since it has support for those APIs as well, but
it was way easier to create a lightweight driver from scratch using the Vix 
APIs directly.

TODOs:

1) A minimal Neutron agent for attaching networks (now all networks are 
attached to the NAT interface).
2) Resize disks on boot based on the flavor size
3) Volume attach / detach (we can just reuse the Hyper-V code for the Windows 
case)
4) Same host resize

Live migration does not make particular sense in this context, so the
implementation is not planned.

Note: we still have to commit the unit tests. We’ll clean them during next week 
and push them.


As usual, any idea, suggestions and especially contributions are highly welcome!

Re: [openstack-dev] Fw: [Neutron][IPv6] Meeting logs from the first IRC meeting

2013-12-02 Thread Collins, Sean (Contractor)
Thank you - that'll work great for the short term until Nachi's patch
lands.

We still need a +2 from another Nova dev for a patch that disables
hairpinning when Neutron is being used [1].

The patch to allow ICMPv6 into instances for SLAAC
just landed today[2]. So we're making progress.

[1] https://review.openstack.org/#/c/56381/
[2] https://review.openstack.org/#/c/53028/

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Cloning vs copying images

2013-12-02 Thread Dmitry Borodaenko
Hi OpenStack, particularly Cinder backend developers,

Please consider the following two competing fixes for the same problem:

https://review.openstack.org/#/c/58870/
https://review.openstack.org/#/c/58893/

The problem being fixed is that some backends, specifically Ceph RBD,
can only boot from volumes created from images in a certain format, in
RBD's case, RAW. When an image in a different format gets cloned into
a volume, it cannot be booted from. The obvious solution is to refuse the
clone operation and copy/convert the image instead.

And now the principal question: is it safe to assume that this
restriction applies to all backends? Should the fix enforce copy of
non-RAW images for all backends? Or should the decision whether to
clone or copy the image be made in each backend?

The first fix puts this logic into the RBD backend, and makes changes
necessary for all other backends to have enough information to make a
similar decision if necessary. The problem with this approach is that
it's relatively intrusive, because driver clone_image() method
signature has to be changed.
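
For illustration, the per-backend decision in the first fix boils down to
something like this (a rough sketch with simplified names and return values,
not the actual patch; it assumes the image metadata is now passed down to
clone_image() as described above):

def clone_image(self, volume, image_location, image_meta):
    # RBD can only boot volumes cloned from RAW images, so decline to
    # clone anything else and let the generic copy/convert path run.
    if image_meta.get('disk_format') != 'raw':
        return None, False
    return self._clone_from_raw_image(volume, image_location), True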

The second fix has significantly fewer code changes, but it does
prevent cloning non-RAW images for all backends. I am not sure if this
is a real problem or not.

Can anyone point at a backend that can boot from a volume cloned from
a non-RAW image? I can think of one candidate: GPFS is a file-based
backend, and GPFS has a file clone operation. Is the GPFS backend able
to boot from, say, a QCOW2 volume?

Thanks,

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Alessandro Pilotti

On 02 Dec 2013, at 04:52 , Kyle Mestery (kmestery)  wrote:

> On Dec 1, 2013, at 4:10 PM, Alessandro Pilotti 
>  wrote:
>> 
>> Hi all,
>> 
>> At Cloudbase we are heavily using VMware Workstation and Fusion for 
>> development, demos and PoCs, so we thought: why not replace our automation
>> scripts with a fully functional Nova driver and use OpenStack APIs and Heat 
>> for the automation? :-)
>> 
>> Here’s the repo for this Nova driver project: 
>> https://github.com/cloudbase/nova-vix-driver/
>> 
>> The driver is already working well and supports all the basic features you’d 
>> expect from a Nova driver, including a VNC console accessible via Horizon. 
>> Please refer to the project README for additional details.
>> The usage of CoW images (linked clones) makes deploying images particularly 
>> fast, which is a good thing when you develop or run demos. Heat or Puppet, 
>> Chef, etc make the whole process particularly sweet of course. 
>> 
>> 
>> The main idea was to create something to be used in place of solutions like 
>> Vagrant, with a few specific requirements:
>> 
>> 1) Full support for nested virtualization (VMX and EPT).
>> 
>> For the time being the VMware products are the only ones supporting Hyper-V 
>> and KVM as guests, so this became a mandatory path, at least until EPT 
>> support will be fully functional in KVM.
>> This rules out Vagrant as an option. Their VMware support is not free and 
>> besides that they don't support nested virtualization (yet, AFAIK).
>> 
>> Other workstation virtualization options, including VirtualBox and Hyper-V 
>> are currently ruled out due to the lack of support for this feature as well.
>> Besides that, Hyper-V and VMware Workstation VMs can work side by side on
>> Windows 8.1, all you need is to fire up two nova-compute instances.
>> 
>> 2) Work on Windows, Linux and OS X workstations
>> 
>> Here’s a snapshot of Nova compute  running on OS X and showing Novnc 
>> connected to a Fusion VM console:
>> 
>> https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png
>> 
>> 3) Use OpenStack APIs
>> 
>> We wanted to have a single common framework for automation and bring 
>> OpenStack on the workstations. 
>> Besides that, dogfooding is a good thing. :-)
>> 
>> 4) Offer a free alternative for community contributions
>> 
>> VMware Player is fair enough, even with the “non commercial use” limits, etc.
>> 
>> Communication with VMware components is based on the freely available Vix 
>> SDK libs, using ctypes to call the C APIs. The project provides a library to 
>> easily interact with the VMs, in case it should be needed, e.g.:
>> 
>> from vix import vixutils
>> with vixutils.VixConnection() as conn:
>>     with conn.open_vm(vmx_path) as vm:
>>         vm.power_on()
>> 
>> We thought about using libvirt, since it has support for those APIs as well,
>> but it was way easier to create a lightweight driver from scratch using the 
>> Vix APIs directly.
>> 
>> TODOs:
>> 
>> 1) A minimal Neutron agent for attaching networks (now all networks are 
>> attached to the NAT interface).
>> 2) Resize disks on boot based on the flavor size
>> 3) Volume attach / detach (we can just reuse the Hyper-V code for the 
>> Windows case)
>> 4) Same host resize
>> 
>> Live migration does not make particular sense in this context, so the
>> implementation is not planned. 
>> 
>> Note: we still have to commit the unit tests. We’ll clean them during next 
>> week and push them.
>> 
>> 
>> As usual, any idea, suggestions and especially contributions are highly 
>> welcome!
>> 
>> We’ll follow up with a blog post with some additional news related to this 
>> project quite soon. 
>> 
>> 
> This is very cool Alessandro, thanks for sharing! Any plans to try and get 
> this
> nova driver upstreamed?

Thanks Kyle!

My personal opinion is that drivers should stay outside of Nova in a separate 
project. That said, this driver is way easier to maintain than the Hyper-V one
for example, so I wouldn’t have objections if this is what the community 
prefers.
On the other hand, this driver will hardly have a CI, as it's not a requirement
for the expected usage and this wouldn’t fit with the current (correct IMO) 
decision that only drivers with a CI gate will stay in Nova starting with 
Icehouse.
That said, I of course wouldn't have anything against somebody (VMware?)
volunteering for the CI. ;-)

IMO, as also Chmouel suggested in this thread, a Stackforge project could be a 
good start. This should make it easier for people to contribute and we’ll have 
a couple of release cycles to decide what to do with it.

Alessandro


> Thanks,
> Kyle
> 
>> Thanks,
>> 
>> Alessandro
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [TripleO] Summit session wrapup

2013-12-02 Thread Jordan OMara

On 01/12/13 00:27 -0500, Tzu-Mainn Chen wrote:
I think we may all be approaching the planning of this project in the wrong way, because of confusions such as: 


Well, I think there is one small misunderstanding. I've never said that the
manual way should be the primary workflow for us. I agree that we should lean
toward as much automation and smartness as possible. But at the same time, I
am adding that we need a manual fallback for the user to change that smart
decision.



The primary way would be to let TripleO decide where the stuff goes. I think we
agree here.
That's a pretty fundamental requirement that both sides seem to agree upon - but that agreement got lost in the discussions of what feature should come in which release, etc. That seems backwards to me. 

I think it's far more important that we list out requirements and create a design document that people agree upon first. Otherwise, we run the risk of focusing on feature X for release 1 without ensuring that our architecture supports feature Y for release 2. 

To make this example more specific: it seems clear that everyone agrees that the current Tuskar design (where nodes must be assigned to racks, which are then used as the primary means of manipulation) is not quite correct. Instead, we'd like to introduce a philosophy where we assume that users don't want to deal with homogeneous nodes individually, instead letting TripleO make decisions for them. 



I agree; getting buy-in on a design document up front is going to
save us future anguish

Regarding this - I think we may want to clarify what the purpose of our releases are at the moment. Personally, I don't think our current planning is about several individual product releases that we expect to be production-ready and usable by the world; I think it's about milestone releases which build towards a more complete product. 

From that perspective, if I were a prospective user, I would be less concerned with each release containing exactly what I need. Instead, what I would want most out of the project is: 

a) frequent stable releases (so I can be comfortable with the pace of development and the quality of code) 
b) design documentation and wireframes (so I can be comfortable that the architecture will support features I need) 
c) a roadmap (so I have an idea when my requirements will be met) 



+1
--
Jordan O'Mara 
Red Hat Engineering, Raleigh 


pgp7rupTEuBS0.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-02 Thread Kurt Griffiths
Folks,

Want to surface events to end users?

Following up on some conversations we had at the summit, I’d like to get folks 
together on IRC tomorrow to crystalize the design for a notifications project 
under the Marconi program. The project’s goal is to create a service for 
surfacing events to end users (where a user can be a cloud app developer, or a 
customer using one of those apps). For example, a developer may want to be 
notified when one of their servers is low on disk space. Alternatively, a user 
of MyHipsterApp may want to get a text when one of their friends invites them 
to listen to That Band You’ve Never Heard Of.

Interested? Please join me and other members of the Marconi team tomorrow, Dec. 
3rd, for a brainstorming session in #openstack-marconi at 1500 
UTC. 
Your contributions are crucial to making this project awesome.

I’ve seeded an etherpad for the discussion:

https://etherpad.openstack.org/p/marconi-notifications-brainstorm

@kgriffs




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] [Security]

2013-12-02 Thread Paul Montgomery
Yep, certainly interested in a broader OpenStack view of security; sign me
up. :)


On 11/27/13 9:06 PM, "Adrian Otto"  wrote:

>
>On Nov 27, 2013, at 11:39 AM, Nathan Kinder 
> wrote:
>
>> On 11/27/2013 08:58 AM, Paul Montgomery wrote:
>>> I created some relatively high level security best practices that I
>>> thought would apply to Solum.  I don't think it is ever too early to
>>>get
>>> mindshare around security so that developers keep that in mind
>>>throughout
>>> the project.  When a design decision point could easily go two ways,
>>> perhaps these guidelines can sway direction towards a more secure path.
>>> 
>>> This is a living document, please contribute and let's discuss topics.
>>> I've worn a security hat in various jobs so I'm always interested. :)
>>> Also, I realize that many of these features may not directly be
>>> encapsulated by Solum but rather components such as KeyStone or
>>>Horizon.
>>> 
>>> https://wiki.openstack.org/wiki/Solum/Security
>> 
>> This is a great start.
>> 
>> I think we really need to work towards a set of overarching security
>> guidelines and best practices that can be applied to all of the
>> projects.  I know that each project may have unique security needs, but
>> it would be really great to have a central set of agreed upon
>> cross-project guidelines that a developer can follow.
>> 
>> This is a goal that we have in the OpenStack Security Group.  I am happy
>> to work on coordinating this.  For defining these guidelines, I think a
>> "working group" approach might be best, where we have an interested
>> representative from each project be involved.  Does this approach make
>> sense to others?
>
>Nathan, that sounds great. I'd like Paul Montgomery to be involved for
>Solum, and possibly others from Solum if we have volunteers with this
>skill set to join him.
>
>> 
>> Thanks,
>> -NGK
>> 
>>> 
>>> I would like to build on this list and create blueprints or tasks
>>>based on
>>> topics that the community agrees upon.  We will also need to start
>>> thinking about timing of these features.
>>> 
>>> Is there an OpenStack standard for code comments that highlight
>>>potential
>>> security issues to investigate at a later point?  If not, what would
>>>the
>>> community think of making a standard for Solum?  I would like to
>>>identify
>>> these areas early while the developer is still engaged/thinking about
>>>the
>>> code.  It is always harder to go back later and find everything in my
>>> experience.  Perhaps something like:
>>> 
>>> # (SECURITY) This exception may contain database field data which could
>>> expose passwords to end users unless filtered.
>>> 
>>> Or
>>> 
>>> # (SECURITY) The admin password is read in plain text from a
>>>configuration
>>> file.  We should fix this later.
>>> 
>>> Regards,
>>> Paulmo
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-02 Thread Eric Windisch
I'd like to move this conversation along. It seems to have both
stalled and digressed.

> Just like with compute drivers, the raised bar applies to existing
> drivers, not just new ones.  We just gave a significant amount of
> lead-time to reach it.

I agree with this. The bar needs to be raised. The Tempest tests we
have should be passing. If they can't pass, they shouldn't be skipped,
the underlying support in Nova should be fixed. Is anyone arguing
against this?

The GCE code will need Tempest tests too (thankfully, the proposed
patches include Tempest tests). While it might be an even greater
uphill battle for GCE, versus EC2, to gain community momentum, it
cannot ever gain that momentum without an opportunity to do so. I
agree that Stackforge might normally be a reasonable path here, but I
agree with Mark's reservations around tracking the internal Nova APIs.

> I'm actually quite optimistic about the future of EC2 in Nova.  There is
> certainly interest.  I've followed up with Rohit who led the session at
> the design summit and we should see a sub-team ramping up soon.  The
> things we talked about the sub-team focusing on are in-line with moving

It sounds like the current model and process, while not perfect, isn't
too dysfunctional. Attempting to move the EC2 or GCE code into a
Stackforge repository might kill them before they can reach that bar
you're looking to set.

What more is needed from the blueprint or the patch authors to proceed?

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Salvatore Orlando
Hi,

As you might have noticed, there has been some progress on parallel tests
for neutron.
In a nutshell:
* Armando fixed the issue with IP address exhaustion on the public network
[1]
* Salvatore has now a patch which has a 50% success rate (the last failures
are because of me playing with it) [2]
* Salvatore is looking at putting back on track full isolation [3]
* All the bugs affecting parallel tests can be queried here [10]
* This blueprint tracks progress made towards enabling parallel testing [11]

-
The long story is as follows:
Parallel testing basically is not working because parallelism means higher
contention for public IP addresses. This was made worse by the fact that
some tests created a router with a gateway set but never deleted it. As a
result, there were even fewer addresses in the public range.
[1] was already merged and with [4] we shall make the public network for
neutron a /24 (the full tempest suite is still showing a lot of IP
exhaustion errors).

However, this was just one part of the issue. The biggest part actually
lay with the OVS agent and its interactions with the ML2 plugin. A few
patches ([5], [6], [7]) were already pushed to reduce the number of
notifications sent from the plugin to the agent. However, the agent is
organised in a way such that a notification is immediately acted upon thus
preempting the main agent loop, which is the one responsible for wiring
ports into networks. Considering the high level of notifications currently
sent from the server, this becomes particularly wasteful if one considers
that security membership updates for ports trigger global
iptables-save/restore commands which are often executed in rapid
succession, thus resulting in long delays for wiring VIFs to the
appropriate network.
With the patch [2] we are refactoring the agent to make it more efficient.
This is not production code, but once we'll get close to 100% pass for
parallel testing this patch will be split in several patches, properly
structured, and hopefully easy to review.
It is worth noting there is still work to do: in some cases the loop still
takes too long, and ovs commands have been observed taking as long as 10
seconds to complete. To this aim, it is worth considering use of async
processes introduced in [8] as well as leveraging ovsdb monitoring [9] for
limiting queries to ovs database.
We're still unable to explain some failures where the network appears to be
correctly wired (floating IP, router port, dhcp port, and VIF port), but
the SSH connection fails. We're hoping to reproduce this failure pattern
locally.
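
For the curious, the idea behind [2] is essentially a deferral/batching
pattern, roughly like the sketch below (names are made up, this is not the
actual patch): notifications only record what changed, and the main loop
applies the accumulated work once per iteration, so a burst of security group
updates costs a single iptables-save/restore cycle.

import threading

class DeferredFirewallRefresh(object):
    """Record work from RPC notifications; apply it once per agent loop."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending_devices = set()

    # Called from the notification handlers: just remember what changed.
    def mark(self, device_ids):
        with self._lock:
            self._pending_devices.update(device_ids)

    # Called once per iteration of the main agent loop, using the firewall
    # driver's refresh_firewall() to rewire everything in one go.
    def apply(self, firewall):
        with self._lock:
            devices, self._pending_devices = self._pending_devices, set()
        if devices:
            firewall.refresh_firewall(devices)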

Finally, the tempest patch for full tempest isolation should be made usable
soon. Having another experimental job for it is something worth considering
as for some reason it is not always easy reproducing the same failure modes
exhibited on the gate.

Regards,
Salvatore

[1] https://review.openstack.org/#/c/58054/
[2] https://review.openstack.org/#/c/57420/
[3] https://review.openstack.org/#/c/53459/
[4] https://review.openstack.org/#/c/58284/
[5] https://review.openstack.org/#/c/58860/
[6] https://review.openstack.org/#/c/58597/
[7] https://review.openstack.org/#/c/58415/
[8] https://review.openstack.org/#/c/45676/
[9] https://bugs.launchpad.net/neutron/+bug/1177973
[10]
https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel&field.tags_combinator=ANY
[11] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-02 Thread David Kranz

On 12/02/2013 10:24 AM, Julien Danjou wrote:

On Fri, Nov 29 2013, David Kranz wrote:


In preparing to fail builds with log errors I have been trying to make
things easier for projects by maintaining a whitelist. But these bugs in
ceilometer are coming in so fast that I can't keep up. So I am  just putting
".*" in the white list for any cases I find before gate failing is turned
on, hopefully early this week.

Following the chat on IRC and the bug reports, it seems this might come
from the tempest tests that are under review, as currently I don't
think Ceilometer generates any error as it's not tested.

So I'm not sure we want to whitelist anything?
So I tested this with https://review.openstack.org/#/c/59443/. There are 
flaky log errors coming from ceilometer. You
can see that the build at 12:27 passed, but the last build failed twice, 
each with a different set of errors. So the whitelist needs to remain 
and the ceilometer team should remove each entry when it is believed to 
be unnecessary.


The tricky part is going to be for us to fix Ceilometer on one side and
re-run Tempest reviews on the other side once a potential fix is merged.
This is another use case for the promised 
dependent-patch-between-projects thing.


 -David





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Vishvananda Ishaya
Very cool stuff!

Seeing your special glance properties for iso and floppy connections made me 
think
of something. They seem fine, but it would be nice if they were done in a way that
would
work in any hypervisor.

I "think" we have sufficient detail in block_device_mapping to do essentially 
the
same thing, and it would be awesome to verify and add some niceties to the nova
cli, something like:

nova boot --flavor 1 --iso CentOS-64-ks --floppy Kickstart (defaults to blank 
image)
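
Roughly, the --iso flag could just translate into one more
block_device_mapping_v2 entry, something along these lines (illustrative only,
the exact field values are guesses):

# Hypothetical mapping the CLI could build for "--iso CentOS-64-ks".
iso_bdm = {
    'source_type': 'image',
    'destination_type': 'local',
    'device_type': 'cdrom',
    'boot_index': 1,
    'uuid': '<glance image id of CentOS-64-ks>',
    'delete_on_termination': True,
}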

Clearly that requires a few things:

1) vix block device mapping support
2) cli ux improvements
3) testing!

Vish

On Dec 1, 2013, at 2:10 PM, Alessandro Pilotti 
 wrote:

> Hi all,
> 
> At Cloudbase we are heavily using VMware Workstation and Fusion for 
> development, demos and PoCs, so we thought: why not replace our automation
> scripts with a fully functional Nova driver and use OpenStack APIs and Heat 
> for the automation? :-)
> 
> Here’s the repo for this Nova driver project: 
> https://github.com/cloudbase/nova-vix-driver/
> 
> The driver is already working well and supports all the basic features you’d 
> expect from a Nova driver, including a VNC console accessible via Horizon. 
> Please refer to the project README for additional details.
> The usage of CoW images (linked clones) makes deploying images particularly 
> fast, which is a good thing when you develop or run demos. Heat or Puppet, 
> Chef, etc make the whole process particularly sweet of course. 
> 
> 
> The main idea was to create something to be used in place of solutions like 
> Vagrant, with a few specific requirements:
> 
> 1) Full support for nested virtualization (VMX and EPT).
> 
> For the time being the VMware products are the only ones supporting Hyper-V 
> and KVM as guests, so this became a mandatory path, at least until EPT 
> support will be fully functional in KVM.
> This rules out Vagrant as an option. Their VMware support is not free and 
> besides that they don't support nested virtualization (yet, AFAIK).
> 
> Other workstation virtualization options, including VirtualBox and Hyper-V 
> are currently ruled out due to the lack of support for this feature as well.
> Besides that, Hyper-V and VMware Workstation VMs can work side by side on
> Windows 8.1, all you need is to fire up two nova-compute instances.
> 
> 2) Work on Windows, Linux and OS X workstations
> 
> Here’s a snapshot of Nova compute  running on OS X and showing Novnc 
> connected to a Fusion VM console:
> 
> https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png
> 
> 3) Use OpenStack APIs
> 
> We wanted to have a single common framework for automation and bring 
> OpenStack on the workstations. 
> Besides that, dogfooding is a good thing. :-)
> 
> 4) Offer a free alternative for community contributions
>   
> VMware Player is fair enough, even with the “non commercial use” limits, etc.
> 
> Communication with VMware components is based on the freely available Vix SDK 
> libs, using ctypes to call the C APIs. The project provides a library to 
> easily interact with the VMs, in case it should be needed, e.g.:
> 
> from vix import vixutils
> with vixutils.VixConnection() as conn:
>     with conn.open_vm(vmx_path) as vm:
>         vm.power_on()
> 
> We thought about using libvirt, since it has support for those APIs as well,
> but it was way easier to create a lightweight driver from scratch using the 
> Vix APIs directly.
> 
> TODOs:
> 
> 1) A minimal Neutron agent for attaching networks (now all networks are 
> attached to the NAT interface).
> 2) Resize disks on boot based on the flavor size
> 3) Volume attach / detach (we can just reuse the Hyper-V code for the Windows 
> case)
> 4) Same host resize
> 
> Live migration does not make particular sense in this context, so the
> implementation is not planned. 
> 
> Note: we still have to commit the unit tests. We’ll clean them during next 
> week and push them.
> 
> 
> As usual, any idea, suggestions and especially contributions are highly 
> welcome!
> 
> We’ll follow up with a blog post with some additional news related to this 
> project quite soon. 
> 
> 
> Thanks,
> 
> Alessandro
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] why are we backporting low priority v3 api fixes to v2?

2013-12-02 Thread Matt Riedemann



On Monday, December 02, 2013 8:38:54 AM, Jonathan Proulx wrote:

On Mon, Dec 2, 2013 at 9:27 AM, Joe Gordon  wrote:





I don't think we should be blocking them per-se as long as they fit the API
change guidelines https://wiki.openstack.org/wiki/APIChangeGuidelines.


Agreed, possibly not what one would assign developers to do but as an
open project if it is important enough to someone that they've already
done the work why not accept the change?

-Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I +2'ed the patch that I originally referenced, so I'm not blocking 
anything.  I think the point I'm trying to make is I would hope we 
don't end up getting into a (backwards) dual maintenance type of 
situation where every low-priority fix that goes into the v3 API makes 
someone think that it needs to be backported to the v2 API.  I'm 
looking at it sort of like backporting patches to stable/havana - sure 
Havana could use some bug fixes that are made in Icehouse, but are they 
all really *needed*?  If there are people willing to do the work and 
people willing to review it, sure, go ahead I guess.


Anyway, I was just seeing a trend last week and maybe I was in a 
post-drunk-on-turkey stupor and felt I needed to bring it up.  I don't 
mean for this to be a big deal and I think Chris Yeoh answered it best 
so I'm OK with leaving it at that.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with rabbitmq on HA controller failure...anyone seen this?

2013-12-02 Thread Vishvananda Ishaya

On Nov 29, 2013, at 9:24 PM, Chris Friesen  wrote:

> On 11/29/2013 06:37 PM, David Koo wrote:
>> On Nov 29, 02:22:17 PM (Friday), Chris Friesen wrote:
>>> We're currently running Grizzly (going to Havana soon) and we're
>>> running into an issue where if the active controller is ungracefully
>>> killed then nova-compute on the compute node doesn't properly
>>> connect to the new rabbitmq server on the newly-active controller
>>> node.
> 
>>> Interestingly, killing and restarting nova-compute on the compute
>>> node seems to work, which implies that the retry code is doing
>>> something less effective than the initial startup.
>>> 
>>> Has anyone doing HA controller setups run into something similar?
> 
> As a followup, it looks like if I wait for 9 minutes or so I see a message in 
> the compute logs:
> 
> 2013-11-30 00:02:14.756 1246 ERROR nova.openstack.common.rpc.common [-] 
> Failed to consume message from queue: Socket closed
> 
> It then reconnects to the AMQP server and everything is fine after that.  
> However, any instances that I tried to boot during those 9 minutes stay stuck 
> in the "BUILD" status.
> 
> 
>> 
>> So the rabbitmq server and the controller are on the same node?
> 
> Yes, they are.
> 
> > My
>> guess is that it's related to this bug 856764 (RabbitMQ connections
>> lack heartbeat or TCP keepalives). The gist of it is that since there
>> are no heartbeats between the MQ and nova-compute, if the MQ goes down
>> ungracefully then nova-compute has no way of knowing. If the MQ goes
>> down gracefully then the MQ clients are notified and so the problem
>> doesn't arise.
> 
> Sounds about right.
> 
>> We got bitten by the same bug a while ago when our controller node
>> got hard reset without any warning!. It came down to this bug (which,
>> unfortunately, doesn't have a fix yet). We worked around this bug by
>> implementing our own crude fix - we wrote a simple app to periodically
>> check if the MQ was alive (write a short message into the MQ, then
>> read it out again). When this fails n-times in a row we restart
>> nova-compute. Very ugly, but it worked!
> 
> Sounds reasonable.
> 
> I did notice a kombu heartbeat change that was submitted and then backed out 
> again because it was buggy. I guess we're still waiting on the real fix?

Hi Chris,

This general problem comes up a lot, and one fix is to use keepalives. Note 
that more is needed if you are using multi-master rabbitmq, but for failover I 
have had great success with the following (also posted to the bug):

When a connection to a socket is cut off completely, the receiving side doesn't 
know that the connection has dropped, so you can end up with a half-open 
connection. The general solution for this in linux is to turn on 
TCP_KEEPALIVES. Kombu will enable keepalives if the version number is high 
enough (>1.0 iirc), but rabbit needs to be specially configured to send 
keepalives on the connections that it creates.

So solving the HA issue generally involves a rabbit config with a section like 
the following:

[
 {rabbit, [{tcp_listen_options, [binary,
{packet, raw},
{reuseaddr, true},
{backlog, 128},
{nodelay, true},
{exit_on_close, false},
{keepalive, true}]}
  ]}
].

Then you should also shorten the keepalive sysctl settings or it will still 
take ~2 hrs to terminate the connections:

echo "5" > /proc/sys/net/ipv4/tcp_keepalive_time
echo "5" > /proc/sys/net/ipv4/tcp_keepalive_probes
echo "1" > /proc/sys/net/ipv4/tcp_keepalive_intvl

Obviously this should be done in a sysctl config file instead of at the command 
line. Note that if you only want to shorten the rabbit keepalives but keep 
everything else as a default, you can use an LD_PRELOAD library to do so. For 
example you could use:

https://github.com/meebey/force_bind/blob/master/README
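
For reference, the per-socket equivalent of the keepalive sysctls above is
roughly the following (Linux-only socket options; this is what a client
library ends up setting when it enables keepalives itself instead of relying
on the system-wide defaults):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Linux-specific knobs mirroring tcp_keepalive_time/probes/intvl above.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 5)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 1)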

Vish

> 
> Chris
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Working group on language packs

2013-12-02 Thread Clayton Coleman


- Original Message -
> Clayton, good detail documentation of the design approach @
> https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/BuildingSourceIntoDeploymentArtifacts
> 
> A few questions -
> 1. On the objective of supporting both heroku build packs and openshift
> cartridges: how compatible are these two with each other? i.e. is there
> enough common denominator between the two that Solum can be compatible with
> both?

A buildpack is:

1) an ubuntu base image with a set of preinstalled packages
2) a set of buildpack files downloaded into a specific location on disk
3) a copy of the application git repository downloaded into a specific location 
on disk
4) a script that is invoked to transform the buildpack files and git repository 
into a set of binaries (the deployment unit) in an output directory

Since the Solum proposal is a transformation of a base (#1) via a script (which 
can call #4) with injected app data (#2 and #3), we should have all the 
ingredients.
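
To make that concrete, the transformation step for a buildpack boils down to
roughly the following (a sketch of the classic buildpack contract, not Solum
code; the paths are assumptions):

import subprocess

def run_buildpack(buildpack_dir, app_dir, cache_dir):
    # bin/detect answers whether this buildpack applies to the app repo.
    subprocess.check_call([buildpack_dir + '/bin/detect', app_dir])
    # bin/compile turns the app source into runnable binaries in place,
    # using cache_dir to speed up subsequent builds.
    subprocess.check_call([buildpack_dir + '/bin/compile', app_dir, cache_dir])
    # bin/release emits YAML describing default process types, etc.
    return subprocess.check_output([buildpack_dir + '/bin/release', app_dir])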

An openshift cartridge is:

1) a rhel base image with a set of preinstalled packages
   a) some additional tools and environment to execute cartridges
2) a metadata file that describes how the cartridge can be used (what the ports 
it exposes are, how it can be used)
3) a set of cartridge files that include binaries and control scripts
   a) a bin/install script that does setup of the binaries on disk, and the 
basic preparation
   b) a bin/control script that manages running processes and to do builds

An OpenShift cartridge would be a base image (#1) with a script that invokes 
the correct steps in the cartridge (#3) to create, build, and deploy the 
contents.

> 
> 2. What is the intended user persona for the "image author"? Is it the
> service operator, the developer end user, or could be both?

Yes, and there is a 3rd persona which could be a vendor packaging up their 
software in distributable format for consumption by multiple service operators. 
 Will add a section to the proposal.

These were discussed in the API meeting as well 
(http://irclogs.solum.io/2013/solum.2013-12-02-17.05.html), the additional info 
will be added to the proposal.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Russell Bryant
On 12/02/2013 12:46 PM, Monty Taylor wrote:
> On 12/02/2013 11:53 AM, Russell Bryant wrote:
>>>  * Scope
>>>  ** Project must have a clear and defined scope
>>
>> This is missing
>>
>>>  ** Project should not inadvertently duplicate functionality present in 
>>> other
>>> OpenStack projects. If they do, they should have a clear plan and 
>>> timeframe
>>> to prevent long-term scope duplication.
>>>  ** Project should leverage existing functionality in other OpenStack 
>>> projects
>>> as much as possible
>>
>> I'm going to hold off on diving into this too far until the scope is
>> clarified.
> 
> I'm not.
> 
> *snip*
> 

Ok, I can't help it now.

>>
>> The list looks reasonable right now.  Barbican should put migrating to
>> oslo.messaging on the Icehouse roadmap though.
> 
> *snip*

Yeahhh ... I looked and even though rpc and notifier are imported, they
do not appear to be used at all.

>>
>> http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
>>
>> It looks like the only item here not in the global requirements is
>> Celery, which is licensed under a 3-clause BSD license.
> 
> I'd like to address the use of Celery.
> 
> WTF
> 
> Barbican has been around for 9 months, which means that it does not
> predate the work that has become oslo.messaging. It doesn't even try. It
> uses a completely different thing.
> 
> The use of celery needs to be replaced with oslo. Full stop. I do not
> believe it makes any sense to spend further time considering a project
> that's divergent on such a core piece. Which is a shame - because I
> think that Barbican is important and fills an important need and I want
> it to be in. BUT - We don't get to end-run around OpenStack project
> choices by making a new project on the side and then submitting it for
> incubation. It's going to be a pile of suck to fix this I'm sure, and
> I'm sure that it's going to delay getting actually important stuff done
> - but we deal with too much crazy as it is to pull in a non-oslo
> messaging and event substrata.
> 

Yeah, I'm afraid I agree with Monty here.  I didn't really address this
because I was trying to just do a first pass and not go too far into the
tech bits.

I think such a big divergence is going to be a hard sell for a number of
reasons.  It's a significant dependency that I don't think is justified.
 Further, it won't work in all of the same environments that OpenStack
works in today.  You can't use Celery with all of the same messaging
transports as oslo.messaging (or the older rpc lib).  One example is Qpid.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Monty Taylor


On 12/02/2013 11:53 AM, Russell Bryant wrote:
> On 12/02/2013 10:19 AM, Jarret Raim wrote:
>> All,
>>
>> Barbican is the OpenStack key management service and we’d like to
>> request incubation for the Icehouse release. A Rackspace sponsored team
>> has been working for about 9 months now, including following the Havana
>> release cycle for our first release.
>>
>> Our incubation request is here:
>> https://wiki.openstack.org/wiki/Barbican
>>
>> Our documentation is mostly hosted at GitHub for the moment, though we
>> are in the process of converting much of it to DocBook.
>> https://github.com/cloudkeep/barbican
>> https://github.com/cloudkeep/barbican/wiki
>>
>>
>> The Barbican team will be on IRC today at #openstack-barbican and you
>> can contact us using the barbi...@lists.rackspace.com mailing list if
>> desired.
> 
> The TC is currently working on formalizing requirements for new programs
> and projects [3].  I figured I would give them a try against this
> application.
> 
> First, I'm assuming that the application is for a new program that
> contains the new project.  The application doesn't make that bit clear,
> though.
> 
>> Teams in OpenStack can be created as-needed and grow organically. As the team
>> work matures, some technical efforts will be recognized as essential to the
>> completion of the OpenStack project mission. By becoming an official Program,
>> they place themselves under the authority of the OpenStack Technical
>> Committee. In return, their contributors get to vote in the Technical
>> Committee election, and they get some space and time to discuss future
>> development at our Design Summits. When considering new programs, the TC will
>> look into a number of criteria, including (but not limited to):
> 
>> * Scope
>> ** Team must have a specific scope, separated from others teams scope
> 
> I would like to see a statement of scope for Barbican on the
> application.  It should specifically cover how the scope differs from
> other programs, in particular the Identity program.
> 
>> ** Team must have a mission statement
> 
> This is missing.
>   
>> * Maturity
>> ** Team must already exist, have regular meetings and produce some work
> 
> This seems covered.
> 
>> ** Team should have a lead, elected by the team contributors
> 
> Was the PTL elected?  I can't seem to find record of that.  If not, I
> would like to see an election held for the PTL.
> 
>> ** Team should have a clear way to grant ATC (voting) status to its
>>significant contributors
> 
> Related to the above
> 
>> * Deliverables
>> ** Team should have a number of clear deliverables
> 
> barbican and python-barbicanclient, I presume.  It would be nice to have
> this clearly defined on the application.
> 
> 
> Now, for the project specific requirements:
> 
>>  Projects wishing to be included in the integrated release of OpenStack must
>>  first apply for incubation status. During their incubation period, they will
>>  be able to access new resources and tap into other OpenStack programs (in
>>  particular the Documentation, QA, Infrastructure and Release management 
>> teams)
>>  to learn about the OpenStack processes and get assistance in their 
>> integration
>>  efforts.
>>  
>>  The TC will evaluate the project scope and its complementarity with existing
>>  integrated projects and other official programs, look into the project
>>  technical choices, and check a number of requirements, including (but not
>>  limited to):
>>  
>>  * Scope
>>  ** Project must have a clear and defined scope
> 
> This is missing
> 
>>  ** Project should not inadvertently duplicate functionality present in other
>> OpenStack projects. If they do, they should have a clear plan and 
>> timeframe
>> to prevent long-term scope duplication.
>>  ** Project should leverage existing functionality in other OpenStack 
>> projects
>> as much as possible
> 
> I'm going to hold off on diving into this too far until the scope is
> clarified.

I'm not.

*snip*

>>  
>>  * Process
>>  ** Project must be hosted under stackforge (and therefore use git as its 
>> VCS)
> 
> I see that barbican is now on stackforge,  but python-barbicanclient is
> still on github.  Is that being moved soon?
> 
>>  ** Project must obey OpenStack coordinated project interface (such as tox,
>> pbr, global-requirements...)
> 
> Uses tox, but not pbr or global requirements

It's also pretty easy for a stackforge project to opt-in to the global
requirements sync job now too.

>>  ** Project should use oslo libraries or oslo-incubator where appropriate
> 
> The list looks reasonable right now.  Barbican should put migrating to
> oslo.messaging on the Icehouse roadmap though.

*snip*

> 
> http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
> 
> It looks like the only item here not in the global requirements is
> Celery, which is licensed under a 3-clause BSD license.

I'd like to address the use of Celery.

WTF

Barbican has been around for 9 months, which means that it does not
predate the work that has become oslo.messaging.

Re: [openstack-dev] [Hacking] License headers in empty files

2013-12-02 Thread Julien Danjou
On Thu, Nov 28 2013, Julien Danjou wrote:

> On Thu, Nov 28 2013, Sean Dague wrote:
>
>> I'm totally in favor of going further and saying "empty files shouldn't
>> have license headers, because their content of emptiness isn't
>> copyrightable" [1]. That's just not how it's written today.
>
> I went ahead and sent a first patch:
>
>   https://review.openstack.org/#/c/59090/
>
> Help appreciated. :)

The patch is ready for review, but it is also a bit stricter, as it completely
disallows files with _only_ comments in them.

This is something that sounds like a good idea, but Joe wanted to bring
this to the mailing list for attention first, in case there would be a
problem.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday December 3rd at 19:00 UTC

2013-12-02 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday December 3rd, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Sylvain Bauza

Le 02/12/2013 18:12, Russell Bryant a écrit :

On 12/02/2013 10:59 AM, Gary Kotton wrote:

I think that this is certainly different. It is something for which we want
and need a user-facing API.
Examples:
  - aggregates
  - per host scheduling
  - instance groups

Etc.

That is just taking the nova options into account and not the other
modules. How would one configure that we would like to have storage
proximity for a VM? This is where things start to get very interesting and
enable cross-service scheduling (which is the goal of this, no?).

An explicit part of this plan is that all of the things you're talking
about are *not* in scope until the forklift is complete and the new
thing is a functional replacement for the existing nova-scheduler.  We
want to get the project established and going so that it is a place
where this work can take place.  We do *not* want to slow down the work
of getting the project established by making these things a prerequisite.

While I understand we need to secure the forklift first, I also think it
would be a good thing to start defining what scheduling-as-a-service should
look like in the next steps, even if they are not planned yet.


There is already an RPC interface defining what the scheduler *is*; my
concern is just to make sure this API (RPC or REST, either way) isn't tied so
tightly to Nova instances that delivering it later for Cinder volumes or Nova
hosts becomes hard.
In other words, having RPC methods generic enough to say "add this
entity to this group" or "elect entities from this group" would be fine.


-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] [Security]

2013-12-02 Thread Dolph Mathews
On Wed, Nov 27, 2013 at 10:58 AM, Paul Montgomery <
paul.montgom...@rackspace.com> wrote:

> I created some relatively high level security best practices that I
> thought would apply to Solum.  I don't think it is ever too early to get
> mindshare around security so that developers keep that in mind throughout
> the project.  When a design decision point could easily go two ways,
> perhaps these guidelines can sway direction towards a more secure path.
>
> This is a living document, please contribute and let's discuss topics.
> I've worn a security hat in various jobs so I'm always interested. :)
> Also, I realize that many of these features may not directly be
> encapsulated by Solum but rather components such as KeyStone or Horizon.
>
> https://wiki.openstack.org/wiki/Solum/Security
>
> I would like to build on this list and create blueprints or tasks based on
> topics that the community agrees upon.  We will also need to start
> thinking about timing of these features.
>
> Is there an OpenStack standard for code comments that highlight potential
> security issues to investigate at a later point?  If not, what would the
> community think of making a standard for Solum?  I would like to identify
> these areas early while the developer is still engaged/thinking about the
> code.  It is always harder to go back later and find everything in my
> experience.  Perhaps something like:
>
> # (SECURITY) This exception may contain database field data which could
> expose passwords to end users unless filtered.
>
> Or
>
> # (SECURITY) The admin password is read in plain text from a configuration
> file.  We should fix this later.
>

For known weaknesses such as this one, I'd suggest a FIXME with a bug
number referencing a public security bug. The bug can be filed ahead of the
patchset merging and can link back to the review proposing the FIXME.
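
Something along these lines, for example (the author tag and bug number below
are placeholders only):

  # FIXME(someone): The admin password is read in plain text from the
  # configuration file. Tracked as public security bug #1234567; see the
  # review that proposed this FIXME for context.

That keeps the marker grep-able and ties it to a tracked item rather than
leaving a free-floating comment.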


>
> Regards,
> Paulmo
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:59 AM, Gary Kotton wrote:
> I think that this is certainly different. It is something for which we want
> and need a user-facing API.
> Examples:
>  - aggregates
>  - per host scheduling
>  - instance groups
> 
> Etc.
> 
> That is just taking the nova options into account and not the other
> modules. How would one configure that we would like to have storage
> proximity for a VM? This is where things start to get very interesting and
> enable cross-service scheduling (which is the goal of this, no?).

An explicit part of this plan is that all of the things you're talking
about are *not* in scope until the forklift is complete and the new
thing is a functional replacement for the existing nova-scheduler.  We
want to get the project established and going so that it is a place
where this work can take place.  We do *not* want to slow down the work
of getting the project established by making these things a prerequisite.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Compute Node Interfaces configuration in Nova Configuration file

2013-12-02 Thread Balaji P
Hi,

Can anyone share the current deployment practice for compute node interface
configuration in the Nova configuration file, where hundreds of compute nodes
[with two or more physical interfaces] in a DC network have to be configured
with IP addresses for the management and data networks?

Is there any other tool used for this purpose, or does the configuration have
to be done manually?

Regards,
Balaji.P


From: Balaji Patnala [mailto:patnala...@gmail.com]
Sent: Tuesday, November 19, 2013 4:21 PM
To: OpenStack Development Mailing List
Cc: Addepalli Srini-B22160; B Veera-B37207; P Balaji-B37839; Lingala Srikanth 
Kumar-B37208; Somanchi Trinath-B39208; Mannidi Purandhar Sairam-B39209
Subject: [openstack-dev] [nova] Static IPAddress configuration in nova.conf file

Hi,
Nova-compute on a compute node sends a fanout_cast to the scheduler on the
controller node once every 60 seconds. The nova.conf file on a compute node has
to be configured with the management network IP address, and there is no
provision to configure the data network IP address in the configuration file.
If the IP address of the management or data network interface changes, we have
to update the compute node's configuration file manually.
We would like to create a BP to address this static configuration of IP
addresses for the management and data network interfaces of a compute node by
providing the interface names in the nova.conf file instead, so that any change
in the IP addresses of these interfaces is picked up dynamically, reflected in
the fanout_cast message to the controller, and used to update the DB.
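To illustrate the idea (the option names below are placeholders only, not a
concrete proposal):

  # nova.conf today: a static address that goes stale when the IP changes
  my_ip = 192.0.2.10

  # proposed: name the interfaces instead and let nova-compute resolve the
  # current addresses at runtime (hypothetical option names)
  management_interface = eth0
  data_interface = eth1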
We understand that current deployments use scripts to handle this static IP
address configuration in the nova.conf file.
Any comments/suggestions will be useful.
Regards,
Balaji.P


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 11:41 AM, Thierry Carrez wrote:
> I don't really care that much about deprecation in that case, but I care
> about which release the new project is made part of. Would you make it
> part of the Icehouse common release ? That means fast-tracking through
> incubation *and* integration in less than one cycle... I'm not sure we
> want that.
> 
> I agree it's the same code (at least at the beginning), but the idea
> behind forcing all projects to undergo a full cycle before being made
> part of the release is not really about code stability, it's about
> integration with the other projects and all the various programs. We
> want them to go through a whole cycle to avoid putting unnecessary
> stress on packagers, QA, docs, infrastructure and release management.
> 
> So while I agree that we could play tricks around deprecation, I'm not
> sure we should go from forklifted to part of the common release in less
> than 3 months.
> 
> I'm not sure it would buy us anything, either. Having the scheduler
> usable by the end of the Icehouse cycle and integrated in the J cycle
> lets you have one release where both options are available, remove it
> first thing in J and then anyone running J (be it tracking trunk or
> using the final release) is using the external scheduler. That makes
> more sense to me and technically, you still have the option to use it
> with Icehouse.
> 

Not having to maintain code in 2 places is what it buys us.  However,
this particular point is a bit moot until we actually had it done and
working.  Perhaps we should just revisit the deprecation plan once we
actually have the thing split out and running.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][database] Update compute_nodes table

2013-12-02 Thread Russell Bryant
On 12/02/2013 11:47 AM, Abbass MAROUNI wrote:
> Hello,
> 
> I'm looking for a way to add a new attribute to the compute nodes by
> adding a column to the compute_nodes table in the nova database, in order to
> track a metric on the compute nodes and use it later in nova-scheduler.
> 
> I checked sqlalchemy/migrate_repo/versions and thought about adding
> my own upgrade and then syncing using nova-manage db sync.
> 
> My question is:
> What is the process of upgrading a table in the database? Do I have to
> modify or add a new variable in some class in order to associate the
> newly added column with a variable that I can use?

Don't add this.  :-)

There is work in progress to just have a column with a json blob in it
for additional metadata like this.

https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://wiki.openstack.org/wiki/ExtensibleResourceTracking

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 12/02/2013

2013-12-02 Thread Renat Akhmerov
Hi,

Thanks for joining our community meeting today at #openstack-meeting!

Here’re the links to the meeting minutes and the full log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-02-16.00.html
Logs: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-02-16.00.log.html

Feel free to join us next time!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Russell Bryant
On 12/02/2013 11:53 AM, Russell Bryant wrote:
>>  ** Project must have no library dependencies which effectively restrict how
>> the project may be distributed [1]
> 
> http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
> 
> It looks like the only item here not in the global requirements is
> Celery, which is licensed under a 3-clause BSD license.
> 
> https://github.com/celery/celery/blob/master/LICENSE
> 
> A notable point is this clause:
> 
>   * Redistributions in binary form must reproduce the above copyright
> notice, this list of conditions and the following disclaimer in the
> documentation and/or other materials provided with the distribution.
> 
> I'm not sure if we have other dependencies using this license already.
> It's also not clear how to interpret this when Python is always
> distributed as source.  We can take this up on the legal-discuss mailing
> list.

If you have comments on this point, please jump over to the
legal-discuss list and respond to this thread:

http://lists.openstack.org/pipermail/legal-discuss/2013-December/000106.html

We can post the outcome back to the -dev list once that thread reaches a
conclusion.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:19 AM, Jarret Raim wrote:
> All,
> 
> Barbican is the OpenStack key management service and we’d like to
> request incubation for the Icehouse release. A Rackspace sponsored team
> has been working for about 9 months now, including following the Havana
> release cycle for our first release.
> 
> Our incubation request is here:
> https://wiki.openstack.org/wiki/Barbican
> 
> Our documentation is mostly hosted at GitHub for the moment, though we
> are in the process of converting much of it to DocBook.
> https://github.com/cloudkeep/barbican
> https://github.com/cloudkeep/barbican/wiki
> 
> 
> The Barbican team will be on IRC today at #openstack-barbican and you
> can contact us using the barbi...@lists.rackspace.com mailing list if
> desired.

The TC is currently working on formalizing requirements for new programs
and projects [3].  I figured I would give them a try against this
application.

First, I'm assuming that the application is for a new program that
contains the new project.  The application doesn't make that bit clear,
though.

> Teams in OpenStack can be created as-needed and grow organically. As the team
> work matures, some technical efforts will be recognized as essential to the
> completion of the OpenStack project mission. By becoming an official Program,
> they place themselves under the authority of the OpenStack Technical
> Committee. In return, their contributors get to vote in the Technical
> Committee election, and they get some space and time to discuss future
> development at our Design Summits. When considering new programs, the TC will
> look into a number of criteria, including (but not limited to):

> * Scope
> ** Team must have a specific scope, separated from others teams scope

I would like to see a statement of scope for Barbican on the
application.  It should specifically cover how the scope differs from
other programs, in particular the Identity program.

> ** Team must have a mission statement

This is missing.

> * Maturity
> ** Team must already exist, have regular meetings and produce some work

This seems covered.

> ** Team should have a lead, elected by the team contributors

Was the PTL elected?  I can't seem to find record of that.  If not, I
would like to see an election held for the PTL.

> ** Team should have a clear way to grant ATC (voting) status to its
>significant contributors

Related to the above

> * Deliverables
> ** Team should have a number of clear deliverables

barbican and python-barbicanclient, I presume.  It would be nice to have
this clearly defined on the application.


Now, for the project specific requirements:

>  Projects wishing to be included in the integrated release of OpenStack must
>  first apply for incubation status. During their incubation period, they will
>  be able to access new resources and tap into other OpenStack programs (in
>  particular the Documentation, QA, Infrastructure and Release management 
> teams)
>  to learn about the OpenStack processes and get assistance in their 
> integration
>  efforts.
>  
>  The TC will evaluate the project scope and its complementarity with existing
>  integrated projects and other official programs, look into the project
>  technical choices, and check a number of requirements, including (but not
>  limited to):
>  
>  * Scope
>  ** Project must have a clear and defined scope

This is missing

>  ** Project should not inadvertently duplicate functionality present in other
> OpenStack projects. If they do, they should have a clear plan and 
> timeframe
> to prevent long-term scope duplication.
>  ** Project should leverage existing functionality in other OpenStack projects
> as much as possible

I'm going to hold off on diving into this too far until the scope is
clarified.

>  * Maturity
>  ** Project should have a diverse and active team of contributors

Using a mailmap file [4]:

$ git shortlog -s -e | sort -n -r
   172  John Wood
   150  jfwood
    65  Douglas Mendizabal
    39  Jarret Raim
    17  Malini K. Bhandaru
    10  Paul Kehrer
    10  Jenkins
     8  jqxin2006
     7  Arash Ghoreyshi
     5  Chad Lung
     3  Dolph Mathews
     2  John Vrbanac
     1  Steven Gonzales
     1  Russell Bryant
     1  Bryan D. Payne

It appears to be an effort done by a group, and not an individual.  Most
commits by far are from Rackspace, but there is at least one non-trivial
contributor (Malini) from another company (Intel), so I think this is OK.

>  ** Project should not have a major architectural rewrite planned. API should
> be reasonably stable.

Thoughts from the Barbican team on this?

>  
>  * Process
>  ** Project must be hosted under stackforge (and therefore use git as its VCS)

I see that barbican is now on stackforge,  but python-barbicanclient is
still on github.  Is that being moved soon?

>  ** Project must obey OpenStack coordinated project interface (such as tox,
> pbr, global-requirements...)

Uses tox, but not pbr or global requirements

Re: [openstack-dev] [Solum] Working group on language packs

2013-12-02 Thread Roshan Agrawal
Clayton, good detailed documentation of the design approach @
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/BuildingSourceIntoDeploymentArtifacts

A few questions -
1. On the objective of supporting both Heroku buildpacks and OpenShift
cartridges: how compatible are these two with each other? I.e., is there enough
of a common denominator between the two that Solum can be compatible with both?

2. What is the intended user persona for the "image author"? Is it the service
operator, the developer end user, or could it be both?


> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: Friday, November 22, 2013 5:09 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Solum] Working group on language packs
> 
> The breakout meeting schedule poll is published here (dates will be
> published here once they are selected):
> https://wiki.openstack.org/wiki/Solum/BreakoutMeetings#Language_Pack_
> Working_Group_Series
> 
> If you would like to join the working group, please:
> 
> 1) Subscribe to the lang-pack blueprint:
> https://blueprints.launchpad.net/solum/+spec/lang-pack
> 2) Participate in the doodle poll currently published on the wiki page
> referenced above to help select the best time for the meetings.
> 
> Thanks,
> 
> Adrian
> 
> 
> For those interested in joining this working group, On Nov 22, 2013, at 8:34
> AM, Clayton Coleman  wrote:
> 
> > I have updated the language pack (name subject to change) blueprint with
> the outcomes from the face2face meetings, and drafted a specification that
> captures the discussion so far.  The spec is centered around the core idea of
> transitioning base images into deployable images (that can be stored in Nova
> and sent to Glance).  These are *DRAFT* and are intended for public debate.
> >
> > https://blueprints.launchpad.net/solum/+spec/lang-pack
> > https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/BuildingSource
> > IntoDeploymentArtifacts
> >
> > Please take this opportunity to review these documents and offer criticism
> and critique via the ML - I will schedule a follow up deep dive for those who
> expressed interest in participation [1] after US Thanksgiving.
> >
> > [1] https://etherpad.openstack.org/p/SolumWorkshopTrack1Notes
> >
> >
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Thierry Carrez
Monty Taylor wrote:
> On 12/02/2013 09:13 AM, Russell Bryant wrote:
>> On 11/29/2013 10:01 AM, Thierry Carrez wrote:
>>> Robert Collins wrote:
 https://etherpad.openstack.org/p/icehouse-external-scheduler
>>>
>>> Just looked into it with release management / TC hat on and I have a
>>> (possibly minor) concern on the deprecation path/timing.
>>>
>>> Assuming everything goes well, the separate scheduler will be
>>> fast-tracked through incubation in I, graduate at the end of the I cycle
>>> to be made a fully-integrated project in the J release.
>>>
>>> Your deprecation path description mentions that the internal scheduler
>>> will be deprecated in I, although there is no "released" (or
>>> security-supported) alternative to switch to at that point. It's not
>>> until the J release that such an alternative will be made available.
>>>
>>> So IMHO for the release/security-oriented users, the switch point is
>>> when they start upgrading to J, and not the final step of their upgrade
>>> to I (as suggested by the "deploy the external scheduler and switch over
>>> before you consider your migration to I complete" wording in the
>>> Etherpad). As the first step towards *switching to J* you would install
>>> the new scheduler before upgrading Nova itself. That works whether
>>> you're a CD user (and start deploying pre-J stuff just after the I
>>> release), or a release user (and wait until J final release to switch to
>>> it).
>>>
>>> Maybe we are talking about the same thing (the migration to the separate
>>> scheduler must happen after the I release and, at the latest, when you
>>> switch to the J release) -- but I wanted to make sure we were on the
>>> same page.
>>
>> Sounds good to me.
>>
>>> I also assume that all the other "scheduler-consuming" projects would
>>> develop the capability to talk to the external scheduler during the J
>>> cycle, so that their own schedulers would be deprecated in J release and
>>> removed at the start of H. That would be, to me, the condition to
>>> considering the external scheduler as "integrated" with (even if not
>>> mandatory for) the rest of the common release components.
>>>
>>> Does that work for you ?
>>
>> I would change "all the other" to "at least one other" here.  I think
>> once we prove that a second project can be integrated into it, the
>> project is ready to be integrated.  Adding support for even more
>> projects is something that will continue to happen over a longer period
>> of time, I suspect, especially since new projects are coming in every cycle.
> 
> Just because I'd like to argue - if what we do here is an actual
> forklift, do we really need a cycle of deprecation?
> 
> The reason I ask is that this is, on first stab, not intended to be a
> service that has user-facing API differences. It's a reorganization of
> code from one repo into a different one. It's very strongly designed to
> not be different. It's not even adding a new service like conductor was
> - it's simply moving the repo where the existing service is held.
> 
> Why would we need/want to deprecate? I say that if we get the code
> ectomied and working before nova feature freeze, that we elevate the new
> nova repo and delete the code from nova. Process for process sake here
> I'm not sure gets us anywhere.

I don't really care that much about deprecation in that case, but I care
about which release the new project is made part of. Would you make it
part of the Icehouse common release ? That means fast-tracking through
incubation *and* integration in less than one cycle... I'm not sure we
want that.

I agree it's the same code (at least at the beginning), but the idea
behind forcing all projects to undergo a full cycle before being made
part of the release is not really about code stability, it's about
integration with the other projects and all the various programs. We
want them to go through a whole cycle to avoid putting unnecessary
stress on packagers, QA, docs, infrastructure and release management.

So while I agree that we could play tricks around deprecation, I'm not
sure we should go from forklifted to part of the common release in less
than 3 months.

I'm not sure it would buy us anything, either. Having the scheduler
usable by the end of the Icehouse cycle and integrated in the J cycle
lets you have one release where both options are available, remove it
first thing in J and then anyone running J (be it tracking trunk or
using the final release) is using the external scheduler. That makes
more sense to me and technically, you still have the option to use it
with Icehouse.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][database] Update compute_nodes table

2013-12-02 Thread Abbass MAROUNI
Hello,

I'm looking for way to a add new attribute to the compute nodes by  adding
a column to the compute_nodes table in nova database in order to track a
metric on the compute nodes and use later it in nova-scheduler.

I checked the  sqlalchemy/migrate_repo/versions and thought about adding my
own upgrade then sync using nova-manage db sync.

My question is :
What is the process of upgrading a table in the database ? Do I have to
modify or add a new variable in some class in order to associate the newly
added column with a variable that I can use ?

Best regards,
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-12-02 Thread Matt Wagner
On Sun Dec  1 00:27:30 2013, Tzu-Mainn Chen wrote:

> I think it's far more important that we list out requirements and
> create a design document that people agree upon first.  Otherwise, we
> run the risk of focusing on feature X for release 1 without ensuring
> that our architecture supports feature Y for release 2.

+1 to this.

I think that lifeless'
https://etherpad.openstack.org/p/tripleo-feature-map pad might be a good
way to get moving in that direction.


> The point of disagreement here - which actually seems quite minor to
> me - is how far we want to go in defining heterogeneity.  Are existing
> node attributes such as cpu and memory enough?  Or do we need to go
> further?  To take examples from this thread, some additional
> possibilities include: rack, network connectivity, etc.  Presumably,
> such attributes will be user defined and managed within TripleO itself.

I took the point of disagreement to be more about the allowance of manual
control: should a user be able to override the list of what gets
provisioned, and where?

And I don't think you always want heterogeneity. For example, if we
treat 'rack' as one of those attributes, a system administrator might
specifically want things to NOT share a rack, e.g. for redundancy.

That said, I suspect that many of us (myself included) have never
designed a data center, so I worry that some of our examples might be a
bit contrived. Not necessarily just for this conversation, but I think
it'd be handy to have real-world stories here. I'm sure no two are
identical, but it'd help make sure we're focused on real-world scenarios.


>
> If that understanding is correct, it seems to me that the requirements
> are broadly in agreement, and that "TripleO defined node attributes"
> is a feature that can easily be slotted into this sort of
> architecture.  Whether it needs to come first. . . should be a
> different discussion (my gut feel is that it shouldn't come first, as
> it depends on everything else working, but maybe I'm wrong).

So to me, that question -- what should come first? -- is exactly what
started this discussion. It didn't start out as a question of whether we
should allow users to override the schedule, but as a question of where
we should start building. Should we start off just letting Nova
scheduler do all the hard work for us and let overrides maybe come in
later? Or should we start off requiring that everything is manual and
later transition to using Nova? (I don't have a strong opinion either
way, but I hope we land one way or the other soon.)

-- 
Matt Wagner
Software Engineer, Red Hat



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] L7 rules design

2013-12-02 Thread Avishay Balderman
1)  Done

2)  Done

3)  Attaching a pool to a vip is one particular use case. The 'action' can also
be 'reject the traffic' or something else. So the 'Action' may tell us that we
need to attach Vip X to Pool Y.

4)  Not sure .. It is an open discussion for now.

5)  See #4

   Yes - CRUD operations should be supported as well for the policies and rules

Thanks

Avishay

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Friday, November 22, 2013 5:24 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] L7 rules design

Hi Avishay, lbaas folks,

I've reviewed the wiki and have some questions/suggestions:

1) Looks like L7Policy is lacking a 'name' attribute in its description. However,
I see little benefit in having a name for this object.
2) lbaas-related neutron client commands start with lb-, please fix this.
3) How does L7Policy specify the relation between vip and pool?
4) How will the default pool be associated with the vip? Will it be an l7 rule of
a special kind?
It is not quite clear what associating a vip with a pool via a policy means if
each rule in the policy contains a 'SelectedPool' attribute.
5) What is 'action'? What other actions besides SELECT_POOL can there be?

In fact my suggestion will be slightly different:
- Instead of having L7Policy, which I believe serves only for rule grouping, we
introduce VipPoolAssociation as follows:

VipPoolAssociation
 id,
 vip_id,
 pool_id,
 default - boolean flag for default pool for the vip

L7Rule will then have a vippoolassociation_id (it's better to shorten the attr
name), just like it has policy_id in your proposal. L7Rule doesn't need a
SelectedPool attribute since, once it is attached to a VipPoolAssociation, the
rule points to the pool via the pool_id of that association.
In other words, VipPoolAssociation is almost the same as L7Policy but named
closer to its primary goal of associating vips and pools.

I would also suggest adding add/delete-rule operations for the association.

What do you think?

Thanks,
Eugene.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Joe Gordon
On Mon, Dec 2, 2013 at 7:26 AM, Doug Hellmann
wrote:

>
>
>
> On Mon, Dec 2, 2013 at 10:05 AM, Joe Gordon  wrote:
>
>>
>>
>>
>> On Mon, Dec 2, 2013 at 6:37 AM, Doug Hellmann <
>> doug.hellm...@dreamhost.com> wrote:
>>
>>>
>>>
>>>
>>> On Mon, Dec 2, 2013 at 9:22 AM, Joe Gordon wrote:
>>>



 On Mon, Dec 2, 2013 at 6:06 AM, Russell Bryant wrote:

> On 12/02/2013 08:53 AM, Doug Hellmann wrote:
> >
> >
> >
> > On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
> > mailto:doug.hellm...@dreamhost.com>>
> wrote:
> >
> >
> >
> >
> > On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant <
> rbry...@redhat.com
> > > wrote:
> >
> > On 11/29/2013 01:39 PM, Doug Hellmann wrote:
> > > We have a review up (
> https://review.openstack.org/#/c/58297/)
> > to add
> > > some features to the notification system in the oslo
> > incubator. THe
> > > notification system is being moved into oslo.messaging,
> and so
> > we have
> > > the question of whether to accept the patch to the
> incubated
> > version,
> > > move it to oslo.messaging, or carry it in both.
> > >
> > > As I say in the review, from a practical standpoint I
> think we
> > can't
> > > really support continued development in both places. Given
> the
> > number of
> > > times the topic of "just make everything a library" has
> come
> > up, I would
> > > prefer that we focus our energy on completing the
> transition
> > for a given
> > > module or library once it the process starts. We also need
> to
> > avoid
> > > feature drift, and provide a clear incentive for projects
> to
> > update to
> > > the new library.
> > >
> > > Based on that, I would like to say that we do not add new
> > features to
> > > incubated code after it starts moving into a library, and
> only
> > provide
> > > "stable-like" bug fix support until integrated projects are
> > moved over
> > > to the graduated library (although even that is up for
> > discussion).
> > > After all integrated projects that use the code are using
> the
> > library
> > > instead of the incubator, we can delete the module(s) from
> the
> > incubator.
> > >
> > > Before we make this policy official, I want to solicit
> > feedback from the
> > > rest of the community and the Oslo core team.
> >
> > +1 in general.
> >
> > You may want to make "after it starts moving into a library"
> more
> > specific, though.
> >
> >
> > I think my word choice is probably what threw Sandy off, too.
> >
> > How about "after it has been moved into a library with at least a
> > release candidate published"?
>
> Sure, that's better.  That gives a specific bit of criteria for when
> the
> switch is flipped.
>
> >
> >
> >  One approach could be to reflect this status in the
> > MAINTAINERS file.  Right now there is a status field for each
> > module in
> > the incubator:
> >
> >
> >  S: Status, one of the following:
> >   Maintained:  Has an active maintainer
> >   Orphan:  No current maintainer, feel free to step
> up!
> >   Obsolete:Replaced by newer code, or a dead end, or
> > out-dated
> >
> > It seems that the types of code we're talking about should
> just be
> > marked as Obsolete.  Obsolete code should only get
> stable-like
> > bug fixes.
> >
> > That would mean marking 'rpc' and 'notifier' as Obsolete
> (currently
> > listed as Maintained).  I think that is accurate, though.
> >
> >
> > Good point.
>
> So, to clarify, possible flows would be:
>
> 1) An API moving to a library as-is, like rootwrap
>
>Status: Maintained
>-> Status: Graduating (short term)
>-> Code removed from oslo-incubator once library is released
>
> 2) An API being replaced with a better one, like rpc being replaced by
> oslo.messaging
>
>Status: Maintained
>-> Status: Obsolete (once an RC of a replacement lib has been
> released)
>-> Code removed from oslo-incubator once all integrated projects
> have
> been migrated off of the obsolete code
>
>
> Does that match your view?
>
>>>

Re: [openstack-dev] [Swift] Increase Swift ring partition power

2013-12-02 Thread Gregory Holt
On Dec 2, 2013, at 9:48 AM, Christian Schwede  
wrote:

> That sounds great! Is someone already working on this (I know about the 
> ongoing DiskFile refactoring) or even a blueprint available?

There is https://blueprints.launchpad.net/swift/+spec/ring-doubling though I'm 
uncertain how up to date it is. Everybody works on everything ;) but Peter 
Portante has been the point on the DiskFile refactoring and I have been for the 
SSync part. David Hadas will probably kick back in once we (Peter and I) get a 
bit further down the line on our parts.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 10:26 AM, Doug Hellmann
wrote:

>
>
>
> On Mon, Dec 2, 2013 at 10:05 AM, Joe Gordon  wrote:
>
>>
>>
>>
>> On Mon, Dec 2, 2013 at 6:37 AM, Doug Hellmann <
>> doug.hellm...@dreamhost.com> wrote:
>>
>>>
>>>
>>>
>>> On Mon, Dec 2, 2013 at 9:22 AM, Joe Gordon wrote:
>>>



 On Mon, Dec 2, 2013 at 6:06 AM, Russell Bryant wrote:

> On 12/02/2013 08:53 AM, Doug Hellmann wrote:
> >
> >
> >
> > On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
> > mailto:doug.hellm...@dreamhost.com>>
> wrote:
> >
> >
> >
> >
> > On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant <
> rbry...@redhat.com
> > > wrote:
> >
> > On 11/29/2013 01:39 PM, Doug Hellmann wrote:
> > > We have a review up (
> https://review.openstack.org/#/c/58297/)
> > to add
> > > some features to the notification system in the oslo
> > incubator. THe
> > > notification system is being moved into oslo.messaging,
> and so
> > we have
> > > the question of whether to accept the patch to the
> incubated
> > version,
> > > move it to oslo.messaging, or carry it in both.
> > >
> > > As I say in the review, from a practical standpoint I
> think we
> > can't
> > > really support continued development in both places. Given
> the
> > number of
> > > times the topic of "just make everything a library" has
> come
> > up, I would
> > > prefer that we focus our energy on completing the
> transition
> > for a given
> > > module or library once it the process starts. We also need
> to
> > avoid
> > > feature drift, and provide a clear incentive for projects
> to
> > update to
> > > the new library.
> > >
> > > Based on that, I would like to say that we do not add new
> > features to
> > > incubated code after it starts moving into a library, and
> only
> > provide
> > > "stable-like" bug fix support until integrated projects are
> > moved over
> > > to the graduated library (although even that is up for
> > discussion).
> > > After all integrated projects that use the code are using
> the
> > library
> > > instead of the incubator, we can delete the module(s) from
> the
> > incubator.
> > >
> > > Before we make this policy official, I want to solicit
> > feedback from the
> > > rest of the community and the Oslo core team.
> >
> > +1 in general.
> >
> > You may want to make "after it starts moving into a library"
> more
> > specific, though.
> >
> >
> > I think my word choice is probably what threw Sandy off, too.
> >
> > How about "after it has been moved into a library with at least a
> > release candidate published"?
>
> Sure, that's better.  That gives a specific bit of criteria for when
> the
> switch is flipped.
>
> >
> >
> >  One approach could be to reflect this status in the
> > MAINTAINERS file.  Right now there is a status field for each
> > module in
> > the incubator:
> >
> >
> >  S: Status, one of the following:
> >   Maintained:  Has an active maintainer
> >   Orphan:  No current maintainer, feel free to step
> up!
> >   Obsolete:Replaced by newer code, or a dead end, or
> > out-dated
> >
> > It seems that the types of code we're talking about should
> just be
> > marked as Obsolete.  Obsolete code should only get
> stable-like
> > bug fixes.
> >
> > That would mean marking 'rpc' and 'notifier' as Obsolete
> (currently
> > listed as Maintained).  I think that is accurate, though.
> >
> >
> > Good point.
>
> So, to clarify, possible flows would be:
>
> 1) An API moving to a library as-is, like rootwrap
>
>Status: Maintained
>-> Status: Graduating (short term)
>-> Code removed from oslo-incubator once library is released
>
> 2) An API being replaced with a better one, like rpc being replaced by
> oslo.messaging
>
>Status: Maintained
>-> Status: Obsolete (once an RC of a replacement lib has been
> released)
>-> Code removed from oslo-incubator once all integrated projects
> have
> been migrated off of the obsolete code
>
>
> Does that match your view?
>
>>

Re: [openstack-dev] [neutron] [ml2] Proposing a new time for the ML2 sub-team meeting

2013-12-02 Thread Kyle Mestery (kmestery)
FYI:

I have moved the ML2 meeting per my email below. The new
timeslot per the meeting page [1] is Wednesday at 1600UTC
on #openstack-meeting-alt.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting

On Nov 26, 2013, at 9:26 PM, Kyle Mestery (kmestery)  wrote:
> Folks:
> 
> I'd like to propose moving the weekly ML2 meeting [1]. The new
> proposed time is Wednesdays at 1600 UTC, and will be in the
> #openstack-meeting-alt channel. Please respond back if this time
> conflicts for you, otherwise starting next week on 12-4-2013
> we'll have the meeting at the new time.
> 
> We will have a short meeting tomorrow at 1400 UTC in the
> #openstack-meeting channel to cover action items from last
> week and some proposed VIF changes.
> 
> Thanks!
> Kyle
> 
> [1] https://wiki.openstack.org/wiki/Meetings/ML2
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Gary Kotton


On 12/2/13 5:33 PM, "Monty Taylor"  wrote:

>
>
>On 12/02/2013 09:13 AM, Russell Bryant wrote:
>> On 11/29/2013 10:01 AM, Thierry Carrez wrote:
>>> Robert Collins wrote:
 
https://etherpad.openstack.org/p/icehouse-external-scheduler
>>>
>>> Just looked into it with release management / TC hat on and I have a
>>> (possibly minor) concern on the deprecation path/timing.
>>>
>>> Assuming everything goes well, the separate scheduler will be
>>> fast-tracked through incubation in I, graduate at the end of the I
>>>cycle
>>> to be made a fully-integrated project in the J release.
>>>
>>> Your deprecation path description mentions that the internal scheduler
>>> will be deprecated in I, although there is no "released" (or
>>> security-supported) alternative to switch to at that point. It's not
>>> until the J release that such an alternative will be made available.
>>>
>>> So IMHO for the release/security-oriented users, the switch point is
>>> when they start upgrading to J, and not the final step of their upgrade
>>> to I (as suggested by the "deploy the external scheduler and switch
>>>over
>>> before you consider your migration to I complete" wording in the
>>> Etherpad). As the first step towards *switching to J* you would install
>>> the new scheduler before upgrading Nova itself. That works whether
>>> you're a CD user (and start deploying pre-J stuff just after the I
>>> release), or a release user (and wait until J final release to switch
>>>to
>>> it).
>>>
>>> Maybe we are talking about the same thing (the migration to the
>>>separate
>>> scheduler must happen after the I release and, at the latest, when you
>>> switch to the J release) -- but I wanted to make sure we were on the
>>> same page.
>> 
>> Sounds good to me.
>> 
>>> I also assume that all the other "scheduler-consuming" projects would
>>> develop the capability to talk to the external scheduler during the J
>>> cycle, so that their own schedulers would be deprecated in J release
>>>and
>>> removed at the start of H. That would be, to me, the condition to
>>> considering the external scheduler as "integrated" with (even if not
>>> mandatory for) the rest of the common release components.
>>>
>>> Does that work for you ?
>> 
>> I would change "all the other" to "at least one other" here.  I think
>> once we prove that a second project can be integrated into it, the
>> project is ready to be integrated.  Adding support for even more
>> projects is something that will continue to happen over a longer period
>> of time, I suspect, especially since new projects are coming in every
>>cycle.
>
>Just because I'd like to argue - if what we do here is an actual
>forklift, do we really need a cycle of deprecation?
>
>The reason I ask is that this is, on first stab, not intended to be a
>service that has user-facing API differences. It's a reorganization of
>code from one repo into a different one. It's very strongly designed to
>not be different. It's not even adding a new service like conductor was
>- it's simply moving the repo where the existing service is held.

I think that this is certainly different. It is something for which we want
and need a user-facing API.
Examples:
 - aggregates
 - per host scheduling
 - instance groups

Etc.

That is just taking the nova options into account and not the other
modules. How would one configure that we would like to have storage
proximity for a VM? This is where things start to get very interesting and
enable cross-service scheduling (which is the goal of this, no?).

Thanks
Gary

>
>Why would we need/want to deprecate? I say that if we get the code
>ectomied and working before nova feature freeze, that we elevate the new
>nova repo and delete the code from nova. Process for process sake here
>I'm not sure gets us anywhere.
>
>Monty
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:53 AM, Gary Kotton wrote:
> 
> 
> On 12/2/13 5:39 PM, "Russell Bryant"  wrote:
> 
>> On 12/02/2013 10:33 AM, Monty Taylor wrote:
>>> Just because I'd like to argue - if what we do here is an actual
>>> forklift, do we really need a cycle of deprecation?
>>>
>>> The reason I ask is that this is, on first stab, not intended to be a
>>> service that has user-facing API differences. It's a reorganization of
>>> code from one repo into a different one. It's very strongly designed to
>>> not be different. It's not even adding a new service like conductor was
>>> - it's simply moving the repo where the existing service is held.
>>>
>>> Why would we need/want to deprecate? I say that if we get the code
>>> ectomied and working before nova feature freeze, that we elevate the new
>>> nova repo and delete the code from nova. Process for process sake here
>>> I'm not sure gets us anywhere.
>>
>> That makes sense to me, actually.
>>
>> I suppose part of the issue is that we're not positive how much work
>> will happen to the code *after* the forklift.  Will we have other
>> services integrated?  Will it have its own database?  How different is
>> different enough to warrant needing a deprecation cycle?
> 
> I have concerns with the forklift. There are at least one or two changes a
> week to the scheduling code (and that is not taking into account new
> features being added). Will these need to be updated in two separate code
> bases? How do we ensure that both stay in sync for the interim period? I am
> really sorry for playing devil's advocate, but I really think that there are
> too many issues that we have yet to iron out. This should not prevent
> us from doing it, but let's at least be aware of what is waiting ahead.

This is one of the reasons that I think the forklift is a *good* idea.
It's what will enable us to do it as fast as possible and minimize the
time we're dealing with 2 code bases.  It could be just 1 deprecation
cycle, or just a matter of a few weeks if we settle on what Monty is
suggesting.

What we *don't* want is something like Neutron and nova-network, where
we end up maintaining two implementations of a thing for a long, long time.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Increase Swift ring partition power

2013-12-02 Thread Christian Schwede
On 02.12.13 15:47, Gregory Holt wrote:
> Achieving this transparently is part of the ongoing plans, starting
> with things like the DiskFile refactoring and SSync. The idea is to
> isolate the direct disk access from other servers/tools, something
> that (for instance) RSync has today. Once the isolation is there, it
> should be fairly straightforward to have incoming requests for a
> ring^20 partition look on the local disk in a directory structure
> that was originally created for a ring^19 partition, or even vice
> versa. Then, there will be no need to move data around just for a
> ring-doubling or halving, and no down time to do so.

That sounds great! Is someone already working on this (I know about the
ongoing DiskFile refactoring), or is there a blueprint available? I was aware
of the idea of multiple rings for the same policy, but not of support for
rings with a modified partition power.
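
To illustrate why such a lookup fallback should be cheap -- a minimal sketch,
not Swift code, ignoring the hash path prefix/suffix the real ring uses:
doubling the partition power just splits every old partition in two, so the
old partition is always the new one shifted right by one bit.

    import hashlib
    import struct

    def partition(name, part_power):
        # Same idea as the ring: take the top bits of the MD5 of the object
        # path as the partition number.
        digest = hashlib.md5(name).digest()
        return struct.unpack_from('>I', digest)[0] >> (32 - part_power)

    name = b'/AUTH_test/container/object'
    old = partition(name, 19)   # ring^19 partition
    new = partition(name, 20)   # ring^20 partition

    # A server handling a ring^20 request can find the ring^19 directory
    # simply by halving the partition number.
    assert new >> 1 == old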

> That said, if you want to create a tool that allows such ring shifting
> in the interim, it should work with smaller clusters that don't mind
> downtime. I would prefer that it not become a core tool checked
> directly into swift/python-swiftclient, just because of the plans
> stated above that should one day make it obsolete.

Yes, that makes a lot of sense. In fact the tool is already working; I
think the best way is to enhance the docs and to list it as a related
Swift project once I'm done with this.

Christian



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-02 Thread Julien Danjou
On Mon, Nov 18 2013, Julien Danjou wrote:

>   https://blueprints.launchpad.net/oslo/+spec/messaging-decouple-cfg

So I've gone through the code and started to write a plan on how I'd do
things:

  https://wiki.openstack.org/wiki/Oslo/blueprints/messaging-decouple-cfg

I don't think I missed too much, though I didn't dig into every tiny
detail.

Please feel free to tell me if I miss anything obvious, otherwise I'll
try to start submitting patches, one at a time, to get this into shape
step by step.

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Gary Kotton


On 12/2/13 5:39 PM, "Russell Bryant"  wrote:

>On 12/02/2013 10:33 AM, Monty Taylor wrote:
>> Just because I'd like to argue - if what we do here is an actual
>> forklift, do we really need a cycle of deprecation?
>> 
>> The reason I ask is that this is, on first stab, not intended to be a
>> service that has user-facing API differences. It's a reorganization of
>> code from one repo into a different one. It's very strongly designed to
>> not be different. It's not even adding a new service like conductor was
>> - it's simply moving the repo where the existing service is held.
>> 
>> Why would we need/want to deprecate? I say that if we get the code
>> ectomied and working before nova feature freeze, that we elevate the new
>> nova repo and delete the code from nova. Process for process sake here
>> I'm not sure gets us anywhere.
>
>That makes sense to me, actually.
>
>I suppose part of the issue is that we're not positive how much work
>will happen to the code *after* the forklift.  Will we have other
>services integrated?  Will it have its own database?  How different is
>different enough to warrant needing a deprecation cycle?

I have concerns with the forklift. There are at least one or two changes a
week to the scheduling code (and that is not taking into account new
features being added). Will these need to be updated in two separate code
bases? How do we ensure that both stay in sync for the interim period? I am
really sorry for playing devil's advocate, but I really think that there are
too many issues that we have yet to iron out. This should not prevent
us from doing it, but let's at least be aware of what is waiting ahead.

>
>-- 
>Russell Bryant
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Incubation / Graduation / New program requirements

2013-12-02 Thread Thierry Carrez
Hi!

The TC has been working to formalize the set of the criteria it applies
when considering project incubation and graduation to integrated status,
and when considering new programs. The goal is to be more predictable
and that candidate projects and teams know what is expected of them
before they can apply for a given specific status.

This is still very much work in progress, but the draft documents have
been posted here for further review and comments:

https://review.openstack.org/#/c/59454/

Feel free to comment on the thread and/or on the review itself.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Maintaining backwards compatibility for RPC calls

2013-12-02 Thread Russell Bryant
tl;dr - live upgrades are hard.  oops.

On 11/27/2013 07:38 AM, Day, Phil wrote:
> I’m a bit confused about the expectations of a manager class to be able
> to receive and process messages from a previous RPC version. I thought
> the objective was to always make changes such that the manager can
> process any previous version of the call that could come from the last
> release. For example, Icehouse code should be able to receive any
> version that could be generated by a version of Havana. Generally of
> course that means new parameters have to have a default value.
> 
> I’m kind of struggling then to see why we’ve now removed, for example,
> the default values from terminate_instance() as part of
> moving the RPC version to 3.0:
> 
> def terminate_instance(self, context, instance, bdms=None,
> reservations=None):
> 
> def terminate_instance(self, context, instance, bdms, reservations):
> 
> https://review.openstack.org/#/c/54493/
> 
> Doesn’t this mean that you can’t deploy Icehouse (3.0) code into a
> Havana system but leave the RPC version pinned at Havana until all of
> the code has been updated?

Thanks for bringing this up.  We realized a problem with the way I had
done these patches after some of them had merged.

First, some history.  The first time we did some major rpc version
bumps, they were done exactly like I did them here. [1][2]

This approach allows live upgrades for CD based deployments.  It does
*not* allow live upgrades from N-1 to N releases.  We didn't bother
because we knew there were other reasons that N-1 to N live upgrades
would not work at that point.

When I did this patch series, I took the same approach.  I didn't
account for the fact that we were going to try to pull off allowing live
upgrades from Havana to Icehouse.  The patches only supported live
upgrades in a CD environment.

I need to go back and add a shim layer that can handle receiving the
latest version of messages sent by Havana to all APIs.

[1] https://review.openstack.org/#/c/12130/
[2] https://review.openstack.org/#/c/12131/
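
To make that concrete, here is a rough sketch (not the actual Nova code; the
class and attribute names are made up for illustration) of what such a shim
could look like for terminate_instance, accepting the last 2.x-style message
and forwarding it to the strict 3.0 signature:

    class ComputeV2CompatShim(object):
        """Accepts terminate_instance as the newest Havana clients send it."""

        def __init__(self, manager):
            self.manager = manager  # the real 3.0 ComputeManager

        def terminate_instance(self, context, instance, bdms=None,
                               reservations=None):
            # Older callers may omit these; a real shim would look the BDMs
            # up from the database instead of defaulting to an empty list.
            bdms = bdms if bdms is not None else []
            reservations = reservations or []
            return self.manager.terminate_instance(context, instance,
                                                   bdms, reservations)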

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-02 Thread David Kranz

On 12/02/2013 10:24 AM, Julien Danjou wrote:
> On Fri, Nov 29 2013, David Kranz wrote:
>
>> In preparing to fail builds with log errors I have been trying to make
>> things easier for projects by maintaining a whitelist. But these bugs in
>> ceilometer are coming in so fast that I can't keep up. So I am just putting
>> ".*" in the white list for any cases I find before gate failing is turned
>> on, hopefully early this week.
>
> Following the chat on IRC and the bug reports, it seems this might come
> from the tempest tests that are under review, as currently I don't
> think Ceilometer generates any errors as it's not tested.

Yeah, sorry about that.

> So I'm not sure we want to whitelist anything?

I have pushed https://review.openstack.org/#/c/59443/ which will tell us.

 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Monty Taylor


On 12/02/2013 10:39 AM, Russell Bryant wrote:
> On 12/02/2013 10:33 AM, Monty Taylor wrote:
>> Just because I'd like to argue - if what we do here is an actual
>> forklift, do we really need a cycle of deprecation?
>>
>> The reason I ask is that this is, on first stab, not intended to be a
>> service that has user-facing API differences. It's a reorganization of
>> code from one repo into a different one. It's very strongly designed to
>> not be different. It's not even adding a new service like conductor was
>> - it's simply moving the repo where the existing service is held.
>>
>> Why would we need/want to deprecate? I say that if we get the code
>> ectomied and working before nova feature freeze, that we elevate the new
>> nova repo and delete the code from nova. Process for process sake here
>> I'm not sure gets us anywhere.
> 
> That makes sense to me, actually.
> 
> I suppose part of the issue is that we're not positive how much work
> will happen to the code *after* the forklift.  Will we have other
> services integrated?  Will it have its own database?  How different is
> different enough to warrant needing a deprecation cycle?

I think those are excellent questions. I'd hope that at this point, what
we'd do is make sure that the new scheduler can be CD-upgraded from the old
scheduler with no downtime.

Having its own database is an interesting question - and probably the
trickiest part of a no-downtime upgrade path. But if we can't figure out the
no-pain way, then putting in a deprecation cycle is just delaying the pain.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-02 Thread Daniel P. Berrange
On Tue, Nov 19, 2013 at 12:15:51PM +, Daniel P. Berrange wrote:
> For attention of maintainers of Nova virt drivers

Anyone from the Hyper-V or VMware drivers wish to comment on this
proposal?


> A while back there was a bug requesting the ability to set the CPU
> topology (sockets/cores/threads) for guests explicitly
> 
>https://bugs.launchpad.net/nova/+bug/1199019
> 
> I countered that setting explicit topology doesn't play well with
> booting images with a variety of flavours with differing vCPU counts.
> 
> This led to the following change which used an image property to
> express maximum constraints on CPU topology (max-sockets/max-cores/
> max-threads) which the libvirt driver will use to figure out the
> actual topology (sockets/cores/threads)
> 
>   https://review.openstack.org/#/c/56510/
> 
> I believe this is a prime example of something we must co-ordinate
> across virt drivers to maximise happiness of our users.
> 
> There's a blueprint but I find the description rather hard to
> follow
> 
>   https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
> 
> So I've created a standalone wiki page which I hope describes the
> idea more clearly
> 
>   https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology
> 
> Launchpad doesn't let me link the URL to the blueprint since I'm not
> the blueprint creator :-(
> 
> Anyway this mail is to solicit input on the proposed standard way to
> express this, which is hypervisor portable, and the addition of some
> shared code for doing the calculations, which virt driver impls can
> just call into rather than re-inventing it.
> 
> I'm looking for buy-in to the idea from the maintainers of each
> virt driver that this conceptual approach works for them, before
> we go merging anything with the specific impl for libvirt.
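
To make the idea concrete, here is a minimal sketch (purely illustrative, not
the proposed shared code; the preference order and default limits are
assumptions) of the kind of calculation a virt driver could call into: given a
flavour's vCPU count and per-image maximums, pick a sockets/cores/threads
split, preferring sockets over cores over threads:

    def pick_topology(vcpus, max_sockets=65536, max_cores=65536,
                      max_threads=1):
        # Return the first exact factorisation of vcpus that fits the
        # maximum constraints, favouring many sockets, then cores, then
        # threads.
        for threads in range(1, max_threads + 1):
            for cores in range(1, max_cores + 1):
                sockets, remainder = divmod(vcpus, cores * threads)
                if remainder == 0 and sockets <= max_sockets:
                    return sockets, cores, threads
        # Nothing fits exactly; fall back to one core per socket.
        return vcpus, 1, 1

    # e.g. pick_topology(8, max_sockets=2) -> (2, 4, 1)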


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:33 AM, Monty Taylor wrote:
> Just because I'd like to argue - if what we do here is an actual
> forklift, do we really need a cycle of deprecation?
> 
> The reason I ask is that this is, on first stab, not intended to be a
> service that has user-facing API differences. It's a reorganization of
> code from one repo into a different one. It's very strongly designed to
> not be different. It's not even adding a new service like conductor was
> - it's simply moving the repo where the existing service is held.
> 
> Why would we need/want to deprecate? I say that if we get the code
> ectomied and working before nova feature freeze, that we elevate the new
> nova repo and delete the code from nova. Process for process sake here
> I'm not sure gets us anywhere.

That makes sense to me, actually.

I suppose part of the issue is that we're not positive how much work
will happen to the code *after* the forklift.  Will we have other
services integrated?  Will it have its own database?  How different is
different enough to warrant needing a deprecation cycle?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][qa] log errors

2013-12-02 Thread David Kranz
So the last two bugs I filed were coming from check builds that were not
merged. It was necessary to look at all builds to get to the point where
gate failing on errors could be turned on. Here is the current
whitelist. If you folks think the ".*" entries are no longer needed, then
please let me know and I will delete them.


 -David

ceilometer-acompute:
- module: "ceilometer.compute.pollsters.disk"
  message: "Unable to read from monitor: Connection reset by peer"
- module: "ceilometer.compute.pollsters.disk"
  message: "Requested operation is not valid: domain is not running"
- module: "ceilometer.compute.pollsters.net"
  message: "Requested operation is not valid: domain is not running"
- module: "ceilometer.compute.pollsters.disk"
  message: "Domain not found: no domain with matching uuid"
- module: "ceilometer.compute.pollsters.net"
  message: "Domain not found: no domain with matching uuid"
- module: "ceilometer.compute.pollsters.net"
  message: "No module named libvirt"
- module: "ceilometer.compute.pollsters.net"
  message: "Unable to write to monitor: Broken pipe"
- module: "ceilometer.compute.pollsters.cpu"
  message: "Domain not found: no domain with matching uuid"
- module: "ceilometer.compute.pollsters.net"
  message: ".*"
- module: "ceilometer.compute.pollsters.disk"
  message: ".*"

ceilometer-alarm-evaluator:
- module: "ceilometer.alarm.service"
  message: "alarm evaluation cycle failed"
- module: "ceilometer.alarm.evaluator.threshold"
  message: ".*"

ceilometer-api:
- module: "wsme.api"
  message: ".*"


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Monty Taylor


On 12/02/2013 09:13 AM, Russell Bryant wrote:
> On 11/29/2013 10:01 AM, Thierry Carrez wrote:
>> Robert Collins wrote:
>>> https://etherpad.openstack.org/p/icehouse-external-scheduler
>>
>> Just looked into it with release management / TC hat on and I have a
>> (possibly minor) concern on the deprecation path/timing.
>>
>> Assuming everything goes well, the separate scheduler will be
>> fast-tracked through incubation in I, graduate at the end of the I cycle
>> to be made a fully-integrated project in the J release.
>>
>> Your deprecation path description mentions that the internal scheduler
>> will be deprecated in I, although there is no "released" (or
>> security-supported) alternative to switch to at that point. It's not
>> until the J release that such an alternative will be made available.
>>
>> So IMHO for the release/security-oriented users, the switch point is
>> when they start upgrading to J, and not the final step of their upgrade
>> to I (as suggested by the "deploy the external scheduler and switch over
>> before you consider your migration to I complete" wording in the
>> Etherpad). As the first step towards *switching to J* you would install
>> the new scheduler before upgrading Nova itself. That works whether
>> you're a CD user (and start deploying pre-J stuff just after the I
>> release), or a release user (and wait until J final release to switch to
>> it).
>>
>> Maybe we are talking about the same thing (the migration to the separate
>> scheduler must happen after the I release and, at the latest, when you
>> switch to the J release) -- but I wanted to make sure we were on the
>> same page.
> 
> Sounds good to me.
> 
>> I also assume that all the other "scheduler-consuming" projects would
>> develop the capability to talk to the external scheduler during the J
>> cycle, so that their own schedulers would be deprecated in J release and
>> removed at the start of K. That would be, to me, the condition to
>> considering the external scheduler as "integrated" with (even if not
>> mandatory for) the rest of the common release components.
>>
>> Does that work for you ?
> 
> I would change "all the other" to "at least one other" here.  I think
> once we prove that a second project can be integrated into it, the
> project is ready to be integrated.  Adding support for even more
> projects is something that will continue to happen over a longer period
> of time, I suspect, especially since new projects are coming in every cycle.

Just because I'd like to argue - if what we do here is an actual
forklift, do we really need a cycle of deprecation?

The reason I ask is that this is, on first stab, not intended to be a
service that has user-facing API differences. It's a reorganization of
code from one repo into a different one. It's very strongly designed to
not be different. It's not even adding a new service like conductor was
- it's simply moving the repo where the existing service is held.

Why would we need/want to deprecate? I say that if we get the code
ectomied and working before nova feature freeze, that we elevate the new
nova repo and delete the code from nova. Process for process sake here
I'm not sure gets us anywhere.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 10:05 AM, Joe Gordon  wrote:

>
>
>
> On Mon, Dec 2, 2013 at 6:37 AM, Doug Hellmann  wrote:
>
>>
>>
>>
>> On Mon, Dec 2, 2013 at 9:22 AM, Joe Gordon  wrote:
>>
>>>
>>>
>>>
>>> On Mon, Dec 2, 2013 at 6:06 AM, Russell Bryant wrote:
>>>
 On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 >
 >
 >
 > On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
 > <doug.hellm...@dreamhost.com> wrote:
 >
 > On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant
 > <rbry...@redhat.com> wrote:
 >
 > On 11/29/2013 01:39 PM, Doug Hellmann wrote:
 > > We have a review up (
 https://review.openstack.org/#/c/58297/)
 > to add
 > > some features to the notification system in the oslo
 > incubator. THe
 > > notification system is being moved into oslo.messaging, and
 so
 > we have
 > > the question of whether to accept the patch to the incubated
 > version,
 > > move it to oslo.messaging, or carry it in both.
 > >
 > > As I say in the review, from a practical standpoint I think
 we
 > can't
 > > really support continued development in both places. Given
 the
 > number of
 > > times the topic of "just make everything a library" has come
 > up, I would
 > > prefer that we focus our energy on completing the transition
 > for a given
 > > module or library once it the process starts. We also need
 to
 > avoid
 > > feature drift, and provide a clear incentive for projects to
 > update to
 > > the new library.
 > >
 > > Based on that, I would like to say that we do not add new
 > features to
 > > incubated code after it starts moving into a library, and
 only
 > provide
 > > "stable-like" bug fix support until integrated projects are
 > moved over
 > > to the graduated library (although even that is up for
 > discussion).
 > > After all integrated projects that use the code are using
 the
 > library
 > > instead of the incubator, we can delete the module(s) from
 the
 > incubator.
 > >
 > > Before we make this policy official, I want to solicit
 > feedback from the
 > > rest of the community and the Oslo core team.
 >
 > +1 in general.
 >
 > You may want to make "after it starts moving into a library"
 more
 > specific, though.
 >
 >
 > I think my word choice is probably what threw Sandy off, too.
 >
 > How about "after it has been moved into a library with at least a
 > release candidate published"?

 Sure, that's better.  That gives a specific bit of criteria for when the
 switch is flipped.

 >
 >
 >  One approach could be to reflect this status in the
 > MAINTAINERS file.  Right now there is a status field for each
 > module in
 > the incubator:
 >
 >
 >  S: Status, one of the following:
 >   Maintained:  Has an active maintainer
 >   Orphan:  No current maintainer, feel free to step
 up!
 >   Obsolete:Replaced by newer code, or a dead end, or
 > out-dated
 >
 > It seems that the types of code we're talking about should
 just be
 > marked as Obsolete.  Obsolete code should only get stable-like
 > bug fixes.
 >
 > That would mean marking 'rpc' and 'notifier' as Obsolete
 (currently
 > listed as Maintained).  I think that is accurate, though.
 >
 >
 > Good point.

 So, to clarify, possible flows would be:

 1) An API moving to a library as-is, like rootwrap

Status: Maintained
-> Status: Graduating (short term)
-> Code removed from oslo-incubator once library is released

 2) An API being replaced with a better one, like rpc being replaced by
 oslo.messaging

Status: Maintained
-> Status: Obsolete (once an RC of a replacement lib has been
 released)
-> Code removed from oslo-incubator once all integrated projects have
 been migrated off of the obsolete code


 Does that match your view?

 >
 > I also added a "Graduating" status as an indicator for code in that
 > intermediate phase where there are 2 copies to be maintained. I hope
 we
 > don't have to use it very often, but it's best to be explicit.
 >
 > https://review.openstack.org/#

[openstack-dev] [all-projects] PLEASE READ: Code to fail builds on new log errors is merging

2013-12-02 Thread David Kranz
As mentioned several times on this list, 
https://review.openstack.org/#/c/58848/ is in the process of merging. 
Once it does, builds will start failing if there are log ERRORs that are 
not filtered out by 
https://github.com/openstack/tempest/blob/master/etc/whitelist.yaml. 
There are too many issues with neutron so, for the moment, neutron will 
not fail in this case.


If a build fails due to log errors, which will be shown in the console
right after the tempest output, and you really think the log error is
not a bug, you can propose a change to the whitelist. It would also be great
to see the whitelist reduced as bugs are fixed.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-02 Thread Julien Danjou
On Fri, Nov 29 2013, David Kranz wrote:

> In preparing to fail builds with log errors I have been trying to make
> things easier for projects by maintaining a whitelist. But these bugs in
> ceilometer are coming in so fast that I can't keep up. So I am  just putting
> ".*" in the white list for any cases I find before gate failing is turned
> on, hopefully early this week.

Following the chat on IRC and the bug reports, it seems this might come
from the tempest tests that are under review, as currently I don't
think Ceilometer generates any errors as it's not tested.

So I'm not sure we want to whitelist anything?

The tricky part is going to be for us to fix Ceilometer on one side and
re-run Tempest reviews on the other side once a potential fix is merged.

Ah… I wish we had the functional tests in the same repository as
Ceilometer; that'd simplify things a lot. Next cycle maybe. :-)

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

