Re: [openstack-dev] [Heat] Upwards-compatibility for HOT

2014-07-07 Thread Sergey Kraynev
On 8 July 2014 01:25, Zane Bitter  wrote:

> With the Icehouse release we announced that there would be no further
> backwards-incompatible changes to HOT without a revision bump. However, I
> notice that we've already made an upward-incompatible change in Juno:
>
> https://review.openstack.org/#/c/102718/
>
> So a user will be able to create a valid template for a Juno (or later)
> version of Heat with the version
>
>   heat_template_version: 2013-05-23
>
> but the same template may break on an Icehouse installation of Heat with
> the "stable" HOT parser. IMO this is almost equally as bad as breaking
> backwards compatibility, since a user moving between clouds will generally
> have no idea whether they are going forward or backward in version terms.
>
> (Note: AWS don't use the version field this way, because there is only one
> AWS and therefore in theory they don't have this problem. This implies that
> we might need a more sophisticated versioning system.)
>
> I'd like to propose a policy that we bump the revision of HOT whenever we
> make a change from the previous stable version, and that we declare the new
> version stable at the end of each release cycle. Maybe we can post-date it
> to indicate the policy more clearly. (I'd also like to propose that the
> Juno version drops cfn-style function support.)
>
+1 for the idea of creating a new version for each release cycle. I think it
will make it easier to modify the format and add new features without
backward-compatibility problems.

+1 for rejecting cfn functions in HOT. Sometimes they lead to confusing
situations (e.g. when a user tries to use Fn::GetAtt instead of get_attr in
HOT, where we have no such function).
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-07 Thread 韦远科
I once used update-initramfs under Ubuntu to build a ramdisk that boots from
a remote iSCSI disk.


-
韦远科 (Wei Yuanke)
3479
Computer Network Information Center, Chinese Academy of Sciences



On Tue, Jul 8, 2014 at 12:15 PM, Adam Young  wrote:

>  On 07/07/2014 01:16 PM, Victor Lowther wrote:
>
> As one of the original authors of dracut, I would love to see it being
> used to build initramfs images for TripleO. dracut is flexible, works
> across a wide variety of distros, and removes the need to have
> special-purpose toolchains and packages for use by the initramfs.
>
>  Dracut rocks, and we can use it to get support for Shared nothing
> diskless boot;
>
> http://adam.younglogic.com/2012/03/shared-nothing-diskless-boot/
>
>
>
>
>
> On Thu, Jul 3, 2014 at 10:12 PM, Ben Nemec  wrote:
>
>> I've recently been looking into using dracut to build the
>> deploy-ramdisks that we use for TripleO.  There are a few reasons for
>> this: 1) dracut is a fairly standard way to generate a ramdisk, so users
>> are more likely to know how to debug problems with it.  2) If we build
>> with dracut, we get a lot of the udev/net/etc stuff that we're currently
>> doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
>> doesn't include busybox, so we can't currently build ramdisks on that
>> distribution using the existing ramdisk element.
>>
>> For the RHEL issue, this could just be an alternate way to build
>> ramdisks, but given some of the other benefits I mentioned above I
>> wonder if it would make sense to look at completely replacing the
>> existing element.  From my investigation thus far, I think dracut can
>> accommodate all of the functionality in the existing ramdisk element,
>> and it looks to be available on all of our supported distros.
>>
>> So that's my pitch in favor of using dracut for ramdisks.  Any thoughts?
>>  Thanks.
>>
>> https://dracut.wiki.kernel.org/index.php/Main_Page
>>
>> -Ben
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] request to review bug 1301359

2014-07-07 Thread Radomir Dopieralski
On 08/07/14 08:29, Harshada Kakad wrote:

> HI All,
> Could someone please, review the bug 
[...]

Hello Harshada,

please don't send such e-mails or ask on IRC; submitting the patch
is enough to get reviewers to read it, eventually. If we sent such an
e-mail for every patch, this mailing list would quickly become useless.

Please note, however, that the patch queue for Horizon is quite long and
we only have so many people doing reviews, so it can take a long time.
If you would like to speed up the whole process, please help us by
reviewing some patches yourself. That will help us review faster and get
to your patch sooner.

Thank you,
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Commit messages and lawyer speak

2014-07-07 Thread Monty Taylor
On 07/07/2014 08:18 PM, Anne Gentle wrote:
> Hi John,
> There's a thread started on the legal-discuss list:
> http://lists.openstack.org/pipermail/legal-discuss/2014-July/000304.html
> Probably want to follow along there.

Hi!

a) I agree with Anne - so I have responded there.

b) I agree with Robert - so, "Oh hell no"

To sum up, more bluntly than I said there:

There is no way to match a CCLA to an employee list automatically, which
means that the burden of verifying the statement would fall on reviewers.
This is not OK.

Also, CLA's are pointless.

Also ...

Oh, HELL no.

> 
> On Mon, Jul 7, 2014 at 10:13 PM, John Griffith 
> wrote:
> 
>> Hey All,
>>
>> Just wondering what's up with the following items showing up in commit
>> messages:
>>
>> "CCLA SCHEDULE B SUBMISSION"
>>
>> Don't know that I care, but it seems completely unnecessary as signing the
>> Corporate CCLA means your submissions are of course covered under this
>> clause (at least I would think).  Is there any reason to have this in the
>> commit message?  Or better yet, the real question is: any reason not to, as
>> corporate lawyers strike again and require it for their employees?
>>
>> Thanks,
>> John
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] request to review bug 1301359

2014-07-07 Thread Harshada Kakad
Hi All,
Could someone please review the bug
https://bugs.launchpad.net/horizon/+bug/1301359


Made the Size parameter optional while creating a DB Instance.

When creating a Database Instance, the size parameter depends on
whether trove_volume_support is set. So the size parameter is made
mandatory if trove_volume_support is set, and otherwise kept optional.

Here is the link for review :  https://review.openstack.org/#/c/86295/
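
For anyone skimming the review, a minimal sketch of the idea in plain Django
form terms (the TROVE_VOLUME_SUPPORT flag and the form name are illustrative
assumptions, not the actual Horizon code):

    # Illustrative sketch only -- not the actual Horizon patch.
    from django import forms

    TROVE_VOLUME_SUPPORT = True  # would normally come from Horizon settings

    class DatabaseDetailsForm(forms.Form):
        name = forms.CharField(label="Instance Name")
        # Volume size only matters when the Trove deployment supports
        # volumes, so the field is mandatory only in that case.
        volume_size = forms.IntegerField(
            label="Volume Size (GB)",
            min_value=1,
            required=TROVE_VOLUME_SUPPORT)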

-- 
Regards,
Harshada Kakad
Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013,
India
Mobile-9689187388
Email-Id : harshada.ka...@izeltech.com
website : www.izeltech.com

-- 
*Disclaimer*
The information contained in this e-mail and any attachment(s) to this 
message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information of Izel 
Technologies Pvt. Ltd. If you are not the intended recipient, you are 
notified that any review, use, any form of reproduction, dissemination, 
copying, disclosure, modification, distribution and/or publication of this 
e-mail message, contents or its attachment(s) is strictly prohibited and 
you are requested to notify us the same immediately by e-mail and delete 
this mail immediately. Izel Technologies Pvt. Ltd accepts no liability for 
virus infected e-mail or errors or omissions or consequences which may 
arise as a result of this e-mail transmission.
*End of Disclaimer*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Volunteer to run tomorrow's IRC meeting?

2014-07-07 Thread Kevin Benton
I think at this point the discussion is mostly contained in the review for
the spec[1] so I don't see a particular need to continue the IRC meeting.


1. https://review.openstack.org/#/c/88599/


On Mon, Jul 7, 2014 at 11:12 PM, Collins, Sean <
sean_colli...@cable.comcast.com> wrote:

> On Mon, Jul 07, 2014 at 11:01:52PM PDT, Kevin Benton wrote:
> > I can lead it, but I'm not sure if there is anything new to discuss since
> > the QoS spec is still under review.
> > Did you have any specific agenda items that you wanted to cover?
>
> Ah. The QoS IRC meeting will also need to be chaired in my absence,
> although lately there has not been a lot of participation. This is
> partly my fault since I didn't start the meeting on time last week, but
> I wonder if we should continue the IRC meeting?
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Volunteer to run tomorrow's IRC meeting?

2014-07-07 Thread Collins, Sean
On Mon, Jul 07, 2014 at 11:01:52PM PDT, Kevin Benton wrote:
> I can lead it, but I'm not sure if there is anything new to discuss since
> the QoS spec is still under review.
> Did you have any specific agenda items that you wanted to cover?

Ah. The QoS IRC meeting will also need to be chaired in my absence,
although lately there has not been a lot of participation. This is
partly my fault since I didn't start the meeting on time last week, but
I wonder if we should continue the IRC meeting?

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Volunteer to run tomorrow's IRC meeting?

2014-07-07 Thread Kevin Benton
I can lead it, but I'm not sure if there is anything new to discuss since
the QoS spec is still under review.
Did you have any specific agenda items that you wanted to cover?


On Mon, Jul 7, 2014 at 1:43 PM, Collins, Sean <
sean_colli...@cable.comcast.com> wrote:

> Hi,
>
> I am currently at a book sprint and will probably not be able to run the
> meeting. If someone could volunteer to chair the meeting and run it,
> that would be great.
>
> Any takers?
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-07 Thread Philipp Marek

Hello Doug,


thank you for your help.

> > I guess the problem is that the subdirectory within that tarball includes
> > the version number, as in "dbus-python-0.84.0/". How can I tell the extract
> > script that it should look into that one?
> 
> It looks like that package wasn't built correctly as an sdist, so pip
> won't install it. Have you contacted the author to report the problem
> as a bug?
No, not yet.

I thought that it was okay, being hosted on python.org and so on.

The other tarballs in that hierarchy follow the same schema; perhaps the 
cached download is broken?


The most current releases are available on
http://dbus.freedesktop.org/releases/dbus-python/
though; perhaps the 1.2.0 release works better?

But how could I specify to use _that_ source URL?


Thank you!


Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Mark Kirkwood

On 08/07/14 17:08, Denis Makogon wrote:

Mark, there is also no documentation about service tuning (no description
of service-related options; the sample configs in the Trove repo are not enough).
So, I think we should extend your list of significant things to document.


...and in case it might be helpful: here are my notes for installing
openstack/trove on Ubuntu 14.04 using the Ubuntu packages, and debugging
it (ahem). Some of the issues (virtio on bridges) are caused by it being
all on one node, but hopefully it is generally useful (I cover how to
build the guest image and get backups etc. going). I make no claim that it
is the best/only way to force the beast into being :-) but I think it
works (for the logic devotees - sufficient but maybe not necessary)!


Regards

Mark



README-OPENSTACK.gz
Description: application/gzip
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Mark Kirkwood

On 08/07/14 17:08, Denis Makogon wrote:

Mark, there is also no documentation about service tuning (no description
of service-related options; the sample configs in the Trove repo are not enough).
So, I think we should extend your list of significant things to document.


Right - I guess most of the tuning/config parameters could be better 
documented too (I do recall seeing this mentioned for one of the Trove 
meetings).


One other thing I recall is:

- mysql security install/setup in guest (mysql root password).

I had to struggle through all of these - and it took a lot of time, 
because essentially the only viable way to debug each issue was:


- check in an equivalent devstack build
- read devstack setup code

or (if the issue was present in devstack as well)

- read the trove code and insert debug logging as appropriate

...and while this was a very interesting exercise, it was not a fast one!

Cheers

Mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Denis Makogon
Mark, there is also no documentation about service tuning (no description
of service-related options; the sample configs in the Trove repo are not enough).
So, I think we should extend your list of significant things to document.

Thanks,
Denis M.

On Tuesday, 8 July 2014, Mark Kirkwood wrote:

> On 08/07/14 00:40, Amrith Kumar wrote:
>
>
>>
>> I think it is totally ludicrous (and to all the technical writers who
>> work on OpenStack, downright offensive) to say the “docs are useless”. Not
>> only have I been able to install and successfully operate an OpenStack
>> installation by (largely) following the documentation, but
>> “trove-integration” and “redstack” are useful for developers but I would
>> highly doubt that a production deployment of Trove would use ‘redstack’.
>>
>>
>>
>> Syed, maybe you need to download a guest image for Trove, or maybe there
>> is something else amiss with your setup. Happy to catch up with you on IRC
>> and help you with that. Optionally, email me and I’ll give you a hand.
>>
>>
>>
>>
> It is a bit harsh, to be sure. However critical areas are light/thin or
> not covered at all - and this is bound to generate a bit of frustration for
> folk wanting to use this feature.
>
> In particular:
>
> - guest image preparation
> - guest file injection (/etc/guest_info) nova interaction
> - dns requirements for guest image (self hostname resolv)
> - swift backup config authorization
> - api_extensions_path setting and how critical that is
>
> There are probably more that I have forgotten (repressed perhaps...)!
>
> Regards
>
> Mark
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [VMware] Can someone help to look at this bug https://bugs.launchpad.net/nova/+bug/1338881

2014-07-07 Thread Jian Hua Geng

Hi All,

Can someone help look at this bug regarding a non-admin user
connecting to vCenter when running the nova-compute service?


--
Best regards,
David Geng
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] scheduler group meeting agenda 7/8

2014-07-07 Thread Dugger, Donald D
1) Forklift (tasks & status)
2) Fair Share scheduler
3) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-07 Thread Adam Young

On 07/07/2014 01:16 PM, Victor Lowther wrote:
As one of the original authors of dracut, I would love to see it being 
used to build initramfs images for TripleO. dracut is flexible, works 
across a wide variety of distros, and removes the need to have 
special-purpose toolchains and packages for use by the initramfs.


Dracut rocks, and we can use it to get support for Shared nothing 
diskless boot;


http://adam.younglogic.com/2012/03/shared-nothing-diskless-boot/





On Thu, Jul 3, 2014 at 10:12 PM, Ben Nemec wrote:


I've recently been looking into using dracut to build the
deploy-ramdisks that we use for TripleO.  There are a few reasons for
this: 1) dracut is a fairly standard way to generate a ramdisk, so
users
are more likely to know how to debug problems with it.  2) If we build
with dracut, we get a lot of the udev/net/etc stuff that we're
currently
doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
doesn't include busybox, so we can't currently build ramdisks on that
distribution using the existing ramdisk element.

For the RHEL issue, this could just be an alternate way to build
ramdisks, but given some of the other benefits I mentioned above I
wonder if it would make sense to look at completely replacing the
existing element.  From my investigation thus far, I think dracut can
accommodate all of the functionality in the existing ramdisk element,
and it looks to be available on all of our supported distros.

So that's my pitch in favor of using dracut for ramdisks.  Any
thoughts?
 Thanks.

https://dracut.wiki.kernel.org/index.php/Main_Page

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DVR and FWaaS integration

2014-07-07 Thread Yi Sun

Vivek,
I will try to join the DVR meeting. Since it conflicts with one of my
other meetings (from my real job), I may join late or may not be able to
join at all. If I miss it, please see if you can join the FWaaS meeting at
11:30 AM PST on Wednesday on openstack-meeting-3. Otherwise, a separate
meeting is still preferred.


Thanks
Yi

On 7/4/14, 12:23 AM, Narasimhan, Vivekanandan wrote:


Hi Yi,

Swami will be available from this week.

Would it be possible for you to join the regular DVR meeting (Wed 8 AM
PST) next week? We can slot that in to discuss this.


I see that FWaaS is of much value for E/W traffic (which has
challenges), but to me it looks easier to implement the same in N/S
with the current DVR architecture, though there might be fewer takers on that.

--

Thanks,

Vivek

*From:*Yi Sun [mailto:beyo...@gmail.com]
*Sent:* Thursday, July 03, 2014 11:50 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] DVR and FWaaS integration

The N/S FW will be on a centralized node for sure. The DVR + FWaaS
solution is really for E/W traffic. If you are interested in the topic,
please propose your preferred meeting time and join the meeting so
that we can discuss it.


Yi

On 7/2/14, 7:05 PM, joehuang wrote:

Hello,

It's hard to integrate DVR and FWaaS. My proposal is to split
FWaaS into two parts: one part for east-west traffic, which could be
done on the DVR side and become distributed, and the other part for
north-south traffic, which could be done on the Network Node side,
i.e. work in a centralized manner. After the split, north-south FWaaS
could be implemented in software or hardware, while east-west FWaaS is
better implemented in software, given its distributed nature.

Chaoyi Huang ( Joe Huang )

OpenStack Solution Architect

IT Product Line

Tel: 0086 755-28423202 Cell: 0086 158 118 117 96 Email:
joehu...@huawei.com 

Huawei Area B2-3-D018S Bantian, Longgang District,Shenzhen 518129,
P.R.China

*From:* Yi Sun [mailto:beyo...@gmail.com]
*Sent:* July 3, 2014 4:42
*To:* OpenStack Development Mailing List (not for usage questions)
*Cc:* Kyle Mestery (kmestery); Rajeev; Gary Duan; Carl (OpenStack
Neutron)
*Subject:* Re: [openstack-dev] DVR and FWaaS integration

All,

After talking to Carl and the FWaaS team, both sides suggested calling a
meeting to discuss this topic in deeper detail. I heard that
Swami is traveling this week, so I guess the earliest time we can
have a meeting is sometime next week. I will be out of town on
Monday, so any day after Monday should work for me. We can do
either IRC, Google Hangout, GMT or even face to face.

For anyone interested, please propose your preferred time.

Thanks

Yi

On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin <c...@ecbaldwin.net> wrote:

In line...

On Jun 25, 2014 2:02 PM, "Yi Sun" <beyo...@gmail.com> wrote:
>
> All,
> During the last summit, we were talking about the integration issues
between DVR and FWaaS. After the summit, I had one IRC meeting
with the DVR team, but after that meeting I was tied up with my work
and did not get time to continue following up on the issue. To not
slow down the discussion, I'm forwarding the email that I sent
out as the follow-up to the IRC meeting here, so that whoever may
be interested in the topic can continue to discuss it.
>
> First some background about the issue:
> In the normal case, the FW and router run together inside
the same box so that the FW can get route and NAT information from the
router component. And in order for the FW to function correctly, the
FW needs to see both directions of the traffic.
> DVR is designed in an asymmetric way such that each DVR only sees one
leg of the traffic. If we build the FW on top of DVR, then FW
functionality will be broken. We need to find a good method to
make the FW work with DVR.
>
> ---forwarding email---
>  During the IRC meeting, we thought that we could force the
traffic to the FW before DVR. Vivek had more detail; he thinks
that since the br-int knows whether a packet is routed or
switched, it is possible for the br-int to forward traffic to the FW
before it forwards it to DVR. The whole forwarding process can be
operated as part of a service-chain operation, and there could be an
FWaaS driver that understands the DVR configuration to set up OVS
flows on the br-int.

I'm not sure what this solution would look like. I'll have to get
the details from Vivek.  It seems like this would effectively
centralize the traffic that we worked so hard to decentralize.

It did cause me to wonder about something:  would it be possible
to regain symmetry in the traffic by directing any response
traffic back to the DVR component which h

[openstack-dev] [Openstack][Nova] Launch of VM failed after certain count openstack Havana

2014-07-07 Thread Vikash Kumar
Hi all,

I am facing an issue with VM launch. I am using OpenStack *Havana*. I have
one compute node with the following specification:

root@compute-node:~# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):8
On-line CPU(s) list:   0-7
Thread(s) per core:1
Core(s) per socket:4
Socket(s): 2
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 15
Stepping:  7
CPU MHz:   1995.104
BogoMIPS:  3990.02
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  4096K
NUMA node0 CPU(s): 0-7

root@compute-node:~# free -h
             total       used       free     shared    buffers     cached
Mem:           15G       1.5G        14G         0B       174M       531M
-/+ buffers/cache:       870M        14G
Swap:          15G         0B        15G


But I am not able to launch more than 12-14 VMs; VM launch fails. I don't
even see *ERROR* logs in any of the nova logs on either the *openstack
controller or the compute node*. I also don't see any request reaching the
compute node (I just tailed the nova-compute logs there). As soon as I clean
up the previous VMs, everything works fine. I have never observed this with
Grizzly.


Regards,
Vikash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers][Nova] Containers Mid-Cycle Meetup

2014-07-07 Thread Michael Still
Adrian, is there any news on this? I want to start booking my ops
meetup travel, but I don't know if I should include this meetup or
not.

Thanks,
Michael

On Wed, Jul 2, 2014 at 8:12 AM, Russell Bryant  wrote:
> On 07/01/2014 05:59 PM, Adrian Otto wrote:
>> Team,
>>
>> Please help us select dates for the Containers Team Midcycle Meetup:
>>
>> http://doodle.com/2mebqhdxpksf763m
>
> Why not just join the Nova meetup?
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] add checking daemons existence in Healthcheck middleware

2014-07-07 Thread Osanai, Hisashi

John,

Thank you for your response.

I checked out the doc for swift-recon and that functionality is
exactly what I want to have.

# Sorry, my checking was not thorough enough...

Thanks again,
Hisashi Osanai

> -Original Message-
> From: John Dickinson [mailto:m...@not.mn]
> Sent: Tuesday, July 08, 2014 11:59 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [swift] add checking daemons existence in
> Healthcheck middleware
> 
> In general, you're right. It's pretty important to know what's going on
> in the cluster. However, the checks for these background daemons shouldn't
> be done in the wsgi servers. Generally, we've stayed away from a lot of
> process monitoring in the Swift core. That is, Swift already works around
> failures, and there is already existing ops tooling to monitor if a process
> is alive.
> 
> Check out the swift-recon tool that's included with Swift. It already
> includes some checks like the replication cycle time. While it's not a
> direct "is this process alive" monitoring tool, it does give good
> information about the health of the cluster.
> 
> If you've got some other ideas on checks to add to recon or ways to make
> it better or perhaps even some different ways to integrate monitoring
> systems, let us know!
> 
> --John
> 
> 
> 
> On Jul 7, 2014, at 7:33 PM, Osanai, Hisashi
>  wrote:
> 
> >
> > Hi,
> >
> > The current Healthcheck middleware provides the functionality of monitoring
> > servers such as the Proxy Server, Object Server, Container Server and
> > Account Server. The middleware checks whether each server can handle
> > requests/responses.
> > My idea for enhancing this middleware is to also check the existence of
> > daemons such as replicators, updaters and auditors, in addition to the
> > current check.
> > If we realize this, the scope of "health" would be extended from
> > "a server can handle requests" to "a server and its daemons are working
> > appropriately".
> >
> >
> http://docs.openstack.org/developer/swift/icehouse/middleware.html?h
> ighlight=health#healthcheck
> >
> > What do you think?
> >
> > Best Regards,
> > Hisashi Osanai
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-07 Thread Michael Still
Joe has a good answer, but you should also be aware of the hypervisor
support matrix (https://wiki.openstack.org/wiki/HypervisorSupportMatrix),
which hopefully goes some way toward explaining what we expect of a nova
driver.

Cheers,
Michael

On Tue, Jul 8, 2014 at 9:11 AM, Joe Gordon  wrote:
>
> On Jul 3, 2014 11:43 AM, "Dmitry Guryanov"  wrote:
>>
>> Hi, All!
>>
>> As far as I know, there are some requirements, which virt driver must meet
>> to
>> use Openstack 'label'. For example, it's not allowed to mount cinder
>> volumes
>> inside host OS.
>
> I am a little unclear on what your question is. If it is simply about the
> OpenStack label then:
>
> 'OpenStack' is a trademark that is enforced by the OpenStack foundation. You
> should check with the foundation to get a formal answer on commercial
> trademark usage. (As an OpenStack developer, my personal view is having out
> of tree drivers is a bad idea, but that decision isn't up to me.)
>
> If this is about contributing your driver to nova (great!), then this is the
> right forum to begin that discussion. We don't have a formal list of
> requirements for contributing new drivers to nova besides the need for CI
> testing. If you are interested in contributing a new nova driver, can you
> provide a brief overview along with your questions to get the discussion
> started.
>
> Also there is an existing efforts to add container support into nova and I
> hear they are making excellent progress; do you plan on collaborating with
> those folks?
>
>>
>> Are there any documents, describing all such things? How can I determine,
>> if
>> my virtualization driver for nova (developed outside of nova mainline)
>> works
>> correctly and meet nova's security requirements?
>>
>>
>> --
>> Dmitry Guryanov
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Commit messages and lawyer speak

2014-07-07 Thread Anne Gentle
Hi John,
There's a thread started on the legal-discuss list:
http://lists.openstack.org/pipermail/legal-discuss/2014-July/000304.html
Probably want to follow along there.
Anne


On Mon, Jul 7, 2014 at 10:13 PM, John Griffith 
wrote:

> Hey All,
>
> Just wondering what's up with the following items showing up in commit
> messages:
>
> "CCLA SCHEDULE B SUBMISSION"
>
> Don't know that I care, but it seems completely unnecessary as signing the
> Corporate CCLA means your submissions are of course covered under this
> clause (at least I would think).  Is there any reason to have this in the
> commit message?  Or better yet, the real question is: any reason not to, as
> corporate lawyers strike again and require it for their employees?
>
> Thanks,
> John
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Commit messages and lawyer speak

2014-07-07 Thread Robert Collins
"Oh hell no".

-Rob

On 8 July 2014 15:13, John Griffith  wrote:
> Hey All,
>
> Just wondering what's up with the following items showing up in commit
> messages:
>
> "CCLA SCHEDULE B SUBMISSION"
>
> Don't know that I care, but it seems completely unnecessary as signing the
> Corporate CCLA means your submissions are of course covered under this
> clause (at least I would think).  Is there any reason to have this in the
> commit message?  Or better yet, the real question is: any reason not to, as
> corporate lawyers strike again and require it for their employees?
>
> Thanks,
> John
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] Commit messages and lawyer speak

2014-07-07 Thread John Griffith
Hey All,

Just wondering what's up with the following items showing up in commit
messages:

"CCLA SCHEDULE B SUBMISSION"

Don't know that I care, but it seems completely unnecessary as signing the
Corporate CCLA means your submissions are of course covered under this
clause (at least I would think).  Is there any reason to have this in the
commit message?  Or better yet, the real question is: any reason not to, as
corporate lawyers strike again and require it for their employees?

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] add checking daemons existence in Healthcheck middleware

2014-07-07 Thread John Dickinson
In general, you're right. It's pretty important to know what's going on in the 
cluster. However, the checks for these background daemons shouldn't be done in 
the wsgi servers. Generally, we've stayed away from a lot of process monitoring 
in the Swift core. That is, Swift already works around failures, and there is 
already existing ops tooling to monitor if a process is alive.

Check out the swift-recon tool that's included with Swift. It already includes 
some checks like the replication cycle time. While it's not a direct "is this 
process alive" monitoring tool, it does give good information about the health 
of the cluster.

If you've got some other ideas on checks to add to recon or ways to make it 
better or perhaps even some different ways to integrate monitoring systems, let 
us know!

--John
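
As a rough illustration of the recon approach (not part of John's message):
the recon middleware answers plain HTTP, so an external monitor can poll it
directly. The host/port and the exact /recon/<check> paths below are
assumptions worth verifying against the Swift release in use:

    # Sketch: poll Swift's recon middleware from an external monitoring
    # script. Assumes recon is enabled in the object-server pipeline on
    # storage-node-1:6000.
    import requests

    OBJECT_SERVER = "http://storage-node-1:6000"

    for check in ("replication", "load", "async"):
        resp = requests.get("%s/recon/%s" % (OBJECT_SERVER, check), timeout=5)
        print(check, resp.json())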



On Jul 7, 2014, at 7:33 PM, Osanai, Hisashi  
wrote:

> 
> Hi,
> 
> The current Healthcheck middleware provides the functionality of monitoring 
> servers such as the Proxy Server, Object Server, Container Server and 
> Account Server. The middleware checks whether each server can handle 
> requests/responses. 
> My idea for enhancing this middleware is to also check the existence of 
> daemons such as replicators, updaters and auditors, in addition to the 
> current check. 
> If we realize this, the scope of "health" would be extended from 
> "a server can handle requests" to "a server and its daemons are working 
> appropriately".
> 
> http://docs.openstack.org/developer/swift/icehouse/middleware.html?highlight=health#healthcheck
> 
> What do you think?
> 
> Best Regards,
> Hisashi Osanai
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Mark Kirkwood

On 08/07/14 00:40, Amrith Kumar wrote:




I think it is totally ludicrous (and to all the technical writers who work on 
OpenStack, downright offensive) to say the “docs are useless”. Not only have I 
been able to install and successfully operate an OpenStack installation by 
(largely) following the documentation, but “trove-integration” and “redstack” 
are useful for developers but I would highly doubt that a production deployment 
of Trove would use ‘redstack’.



Syed, maybe you need to download a guest image for Trove, or maybe there is 
something else amiss with your setup. Happy to catch up with you on IRC and 
help you with that. Optionally, email me and I’ll give you a hand.





It is a bit harsh, to be sure. However critical areas are light/thin or 
not covered at all - and this is bound to generate a bit of frustration 
for folk wanting to use this feature.


In particular:

- guest image preparation
- guest file injection (/etc/guest_info) nova interaction
- dns requirements for guest image (self hostname resolv)
- swift backup config authorization
- api_extensions_path setting and how critical that is

There are probably more that I have forgotten (repressed perhaps...)!

Regards

Mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] request to tag novaclient 2.18.0

2014-07-07 Thread Mike Lundy
Is it possible to tag a new release containing the fix for
https://bugs.launchpad.net/python-novaclient/+bug/1297796 ? The bug
can cause correct code to fail ~50% of the time (every connection
reuse fails with a BadStatusLine).

Thanks! <3

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] add checking daemons existence in Healthcheck middleware

2014-07-07 Thread Osanai, Hisashi

Hi,

The current Healthcheck middleware provides the functionality of monitoring 
servers such as the Proxy Server, Object Server, Container Server and Account 
Server. The middleware checks whether each server can handle requests/responses. 
My idea for enhancing this middleware is to also check the existence of daemons 
such as replicators, updaters and auditors, in addition to the current check. 
If we realize this, the scope of "health" would be extended from 
"a server can handle requests" to "a server and its daemons are working 
appropriately".

http://docs.openstack.org/developer/swift/icehouse/middleware.html?highlight=health#healthcheck
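
To make the idea concrete, a minimal sketch of what such a daemon check could
look like (the daemon names and the plain /proc scan are assumptions, and the
replies in this thread point to swift-recon as the preferred alternative):

    # Sketch of the idea only, not a proposed patch: a WSGI healthcheck
    # that also verifies the expected background daemons are running by
    # scanning /proc command lines.
    import os

    DAEMONS = ("swift-object-replicator", "swift-object-updater",
               "swift-object-auditor")

    def daemons_running():
        cmdlines = []
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/cmdline" % pid) as f:
                    cmdlines.append(f.read())
            except IOError:
                continue
        return all(any(name in cmd for cmd in cmdlines) for name in DAEMONS)

    def healthcheck_app(environ, start_response):
        ok = daemons_running()
        status = "200 OK" if ok else "503 Service Unavailable"
        start_response(status, [("Content-Type", "text/plain")])
        return [b"OK\n" if ok else b"DAEMON MISSING\n"]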

What do you think?

Best Regards,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] new nasty gate bug 1338844 with nova-network races

2014-07-07 Thread Matt Riedemann
I noticed the bug [1] today.  Given the trend in logstash, it might be 
related to some fixes proposed to try and resolve the other big nova ssh 
timeout bug 1298472.  It appears to only be in jobs using nova-network.


[1] https://bugs.launchpad.net/nova/+bug/1338844

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

2014-07-07 Thread Brandon Logan
https://review.openstack.org/#/c/105331

It's a WIP and the shim layer still needs to be completed.  It's a lot of code, 
I know.  Please review it thoroughly and point out what needs to change.

Thanks,
Brandon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Upwards-compatibility for HOT

2014-07-07 Thread Steve Baker
On 08/07/14 10:13, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-07-07 14:25:50 -0700:
>> With the Icehouse release we announced that there would be no further 
>> backwards-incompatible changes to HOT without a revision bump. However, 
>> I notice that we've already made an upward-incompatible change in Juno:
>>
>> https://review.openstack.org/#/c/102718/
>>
>> So a user will be able to create a valid template for a Juno (or later) 
>> version of Heat with the version
>>
>>heat_template_version: 2013-05-23
>>
>> but the same template may break on an Icehouse installation of Heat with 
>> the "stable" HOT parser. IMO this is almost equally as bad as breaking 
>> backwards compatibility, since a user moving between clouds will 
>> generally have no idea whether they are going forward or backward in 
>> version terms.
> Sounds like a bug in Juno that we need to fix. I agree, this is a new
> template version.
>
>> (Note: AWS don't use the version field this way, because there is only 
>> one AWS and therefore in theory they don't have this problem. This 
>> implies that we might need a more sophisticated versioning system.)
>>
> A good manual with a "this was introduced in version X" and "this was
> changed in version Y" would, IMO, be enough to help users not go crazy
> and help us know whether something is a bug or not. We can probably
> achieve this entirely in the in-code template guide.
>
Intrinsic functions are manually documented, but it would be great to be
able to generate this for resource types, properties and attributes. The
SupportStatus structures are all there, it just takes someone to go
through the release history and populate them. Users often trip over
attempting to use a documented new thing on an older heat release.
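
As a sketch of the kind of generation being suggested (the resource_schema
dict and its keys are purely illustrative, not Heat's real internals), the doc
build could walk each resource's properties and emit "available since" notes
from SupportStatus-style metadata:

    # Illustrative only: walk a properties schema and print support notes.
    resource_schema = {
        "OS::Example::Thing": {
            "properties": {
                "name":   {"support_status": {"status": "SUPPORTED",
                                              "version": "2013.1"}},
                "flavor": {"support_status": {"status": "SUPPORTED",
                                              "version": "2014.2"}},
            },
        },
    }

    for res_type, schema in sorted(resource_schema.items()):
        print(res_type)
        for prop, meta in sorted(schema["properties"].items()):
            ss = meta.get("support_status", {})
            print("  %s: %s since %s" % (prop,
                                         ss.get("status", "UNKNOWN"),
                                         ss.get("version", "?")))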

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Upwards-compatibility for HOT

2014-07-07 Thread Steve Baker
On 08/07/14 09:25, Zane Bitter wrote:
> With the Icehouse release we announced that there would be no further
> backwards-incompatible changes to HOT without a revision bump.
> However, I notice that we've already made an upward-incompatible
> change in Juno:
>
> https://review.openstack.org/#/c/102718/
>
> So a user will be able to create a valid template for a Juno (or
> later) version of Heat with the version
>
>   heat_template_version: 2013-05-23
>
> but the same template may break on an Icehouse installation of Heat
> with the "stable" HOT parser. IMO this is almost equally as bad as
> breaking backwards compatibility, since a user moving between clouds
> will generally have no idea whether they are going forward or backward
> in version terms.
>
> (Note: AWS don't use the version field this way, because there is only
> one AWS and therefore in theory they don't have this problem. This
> implies that we might need a more sophisticated versioning system.)
>
> I'd like to propose a policy that we bump the revision of HOT whenever
> we make a change from the previous stable version, and that we declare
> the new version stable at the end of each release cycle. Maybe we can
> post-date it to indicate the policy more clearly. (I'd also like to
> propose that the Juno version drops cfn-style function support.)
>
+1 on setting the juno release date as the latest heat_template_version,
and putting list_join in that.

It seems reasonable to remove cfn-style functions from latest
heat_template_version as long as they are still registered in 2013-05-23
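
A minimal sketch of what that per-version registration could look like (the
2014-10-16 date and the function sets are illustrative assumptions, not Heat's
actual tables): each heat_template_version maps to the set of functions it
accepts, so a 2013-05-23 template using list_join is rejected while the
Juno-dated version accepts it and drops the cfn-style names.

    # Illustrative sketch, not Heat's implementation.
    FUNCTIONS_BY_VERSION = {
        "2013-05-23": {"get_attr", "get_param", "get_resource",
                       "str_replace", "Fn::GetAtt", "Fn::Join"},
        "2014-10-16": {"get_attr", "get_param", "get_resource",
                       "str_replace", "list_join"},
    }

    def validate_functions(version, used_functions):
        allowed = FUNCTIONS_BY_VERSION[version]
        unknown = set(used_functions) - allowed
        if unknown:
            raise ValueError("functions %s not available in template "
                             "version %s" % (sorted(unknown), version))

    validate_functions("2014-10-16", ["get_param", "list_join"])  # accepted
    try:
        validate_functions("2013-05-23", ["list_join"])
    except ValueError as exc:
        print(exc)  # rejected, as proposed in this thread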

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-07 Thread Joe Gordon
On Jul 3, 2014 11:43 AM, "Dmitry Guryanov"  wrote:
>
> Hi, All!
>
> As far as I know, there are some requirements, which virt driver must
meet to
> use Openstack 'label'. For example, it's not allowed to mount cinder
volumes
> inside host OS.

I am a little unclear on what your question is. If it is simply about the
OpenStack label then:

'OpenStack' is a trademark that is enforced by the OpenStack foundation.
You should check with the foundation to get a formal answer on commercial
trademark usage. (As an OpenStack developer, my personal view is having out
of tree drivers is a bad idea, but that decision isn't up to me.)

If this is about contributing your driver to nova (great!), then this is
the right forum to begin that discussion. We don't have a formal list of
requirements for contributing new drivers to nova besides the need for CI
testing. If you are interested in contributing a new nova driver, can you
provide a brief overview along with your questions to get the discussion
started.

Also there is an existing efforts to add container support into nova and I
hear they are making excellent progress; do you plan on collaborating with
those folks?

>
> Are there any documents, describing all such things? How can I determine,
if
> my virtualization driver for nova (developed outside of nova mainline)
works
> correctly and meet nova's security requirements?
>
>
> --
> Dmitry Guryanov
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy around Requirements Adds (was: New class of requirements for Stackforge projects)

2014-07-07 Thread Clark Boylan
On Mon, Jul 7, 2014 at 3:45 PM, Joe Gordon  wrote:
>
> On Jul 7, 2014 4:48 PM, "Sean Dague"  wrote:
>>
>> This thread was unfortunately hidden under a project specific tag (I
>> have thus stripped all the tags).
>>
>> The crux of the argument here is the following:
>>
>> Is a stackforge project project able to propose additions to
>> global-requirements.txt that aren't used by any projects in OpenStack.
>>
>> I believe the answer is firmly *no*.
>
> ++
>
>>
>> global-requirements.txt provides a way for us to have a single point of
>> vetting for requirements for OpenStack. It lets us assess licensing,
>> maturity, current state of packaging, python3 support, all in one place.
>> And it lets us enforce that integration of OpenStack projects all run
>> under a well understood set of requirements.
>>
>> The requirements sync that happens after requirements land is basically
>> just a nicety for getting openstack projects to the tested state by
>> eventual consistency.
>>
>> If a stackforge project wants to be limited by global-requirements,
>> that's cool. We have a mechanism for that. However, they are accepting
>> that they will be limited by it. That means they live with how the
>> OpenStack project establishes that list. It specifically means they
>> *don't* get to propose any new requirements.
>>
>> Basically in this case Solum wants to have its cake and eat it too. Both
>> be enforced on requirements, and not be enforced. Or some 3rd thing that
>> means the same as that.
>>
>> The near term fix is to remove solum from projects.txt.
>
> The email included below mentions that an additional motivation for using
> global-requirements is to avoid using pypi.python.org and instead use
> pypi.openstack.org for speed and reliability. Perhaps there is a way we can
> support this use case for stackforge projects not in projects.txt? I thought I
> saw something the other day about adding a full pypi mirror to OpenStack
> infra.
>
This is done. All tests are now run against a bandersnatch-built full
mirror of PyPI. Enforcement of the global requirements is performed
via the enforcement jobs.
>>
>> On 06/26/2014 02:00 AM, Adrian Otto wrote:
>> > Ok,
>> >
>> > I submitted and abandoned a couple of reviews[1][2] for a solution aimed
>> > to meet my goals without adding a new per-project requirements file. The
>> > flaw with this approach is that pip may install other requirements when
>> > installing the one(s) loaded from the fallback mirror, and those may
>> > conflict with the ones loaded from the primary mirror.
>> >
>> > After discussing this further in #openstack-infra this evening, we
>> > should give serious consideration to adding python-mistralclient to
>> > global requirements. I have posted a review[3] for that to get input
>> > from the requirements review team.
>> >
>> > Thanks,
>> >
>> > Adrian
>> >
>> > [1] https://review.openstack.org/102716
>> > [2] https://review.openstack.org/102719
>> > [3] https://review.openstack.org/102738
>> > 
>> >
>> > On Jun 25, 2014, at 9:51 PM, Matthew Oliver > > > wrote:
>> >
>> >>
>> >> On Jun 26, 2014 12:12 PM, "Angus Salkeld" > >> > wrote:
>> >> >
>> > On 25/06/14 15:13, Clark Boylan wrote:
>> >> On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto
>> >>> mailto:adrian.o...@rackspace.com>> wrote:
>> >>> Hello,
>> >>>
>> >>> Solum has run into a constraint with the current scheme for
>> >>> requirements management within the OpenStack CI system. We have a
>> >>> proposal for dealing with this constraint that involves making a
>> >>> contribution to openstack-infra. This message explains the constraint,
>> >>> and our proposal for addressing it.
>> >>>
>> >>> == Background ==
>> >>>
>> >>> OpenStack uses a list of global requirements in the requirements
>> >>> repo[1], and each project has it’s own requirements.txt and
>> >>> test-requirements.txt files. The requirements are satisfied by gate
>> >>> jobs using pip configured to use the pypi.openstack.org
>> >>>  mirror, which is periodically updated
>> >>> with new content from pypi.python.org . One
>> >>> motivation for doing this is that pypi.python.org
>> >>>  may not be as fast or as reliable as a local
>> >>> mirror. The gate/check jobs for the projects use the OpenStack
>> >>> internal pypi mirror to ensure stability.
>> >>>
>> >>> The OpenStack CI system will sync up the requirements across all
>> >>> the official projects and will create reviews in the participating
>> >>> projects for any mis-matches. Solum is one of these projects, and
>> >>> enjoys this feature.
>> >>>
>> >>> Another motivation is so that users of OpenStack will have one
>> >>> single set of python package requirements/dependencies to install and
>> >>> run the individual OpenStack components.
>> >>>
>> >>> == Problem ==
>> >>>
>> >>> Stackforge projects listed in openstack/requiremen

Re: [openstack-dev] Policy around Requirements Adds (was: New class of requirements for Stackforge projects)

2014-07-07 Thread Joe Gordon
On Jul 7, 2014 4:48 PM, "Sean Dague"  wrote:
>
> This thread was unfortunately hidden under a project specific tag (I
> have thus stripped all the tags).
>
> The crux of the argument here is the following:
>
> Is a stackforge project project able to propose additions to
> global-requirements.txt that aren't used by any projects in OpenStack.
>
> I believe the answer is firmly *no*.

++

>
> global-requirements.txt provides a way for us to have a single point of
> vetting for requirements for OpenStack. It lets us assess licensing,
> maturity, current state of packaging, python3 support, all in one place.
> And it lets us enforce that integration of OpenStack projects all run
> under a well understood set of requirements.
>
> The requirements sync that happens after requirements land is basically
> just a nicety for getting openstack projects to the tested state by
> eventual consistency.
>
> If a stackforge project wants to be limited by global-requirements,
> that's cool. We have a mechanism for that. However, they are accepting
> that they will be limited by it. That means they live with how the
> OpenStack project establishes that list. It specifically means they
> *don't* get to propose any new requirements.
>
> Basically in this case Solum wants to have its cake and eat it too. Both
> be enforced on requirements, and not be enforced. Or some 3rd thing that
> means the same as that.
>
> The near term fix is to remove solum from projects.txt.

The email included below mentions that an additional motivation for using
global-requirements is to avoid using pypi.python.org and instead use
pypi.openstack.org for speed and reliability. Perhaps there is a way we can
support this use case for stackforge projects not in projects.txt? I thought I
saw something the other day about adding a full pypi mirror to OpenStack
infra.

>
> On 06/26/2014 02:00 AM, Adrian Otto wrote:
> > Ok,
> >
> > I submitted and abandoned a couple of reviews[1][2] for a solution aimed
> > to meet my goals without adding a new per-project requirements file. The
> > flaw with this approach is that pip may install other requirements when
> > installing the one(s) loaded from the fallback mirror, and those may
> > conflict with the ones loaded from the primary mirror.
> >
> > After discussing this further in #openstack-infra this evening, we
> > should give serious consideration to adding python-mistralclient to
> > global requirements. I have posted a review[3] for that to get input
> > from the requirements review team.
> >
> > Thanks,
> >
> > Adrian
> >
> > [1] https://review.openstack.org/102716
> > [2] https://review.openstack.org/102719
> > [3] https://review.openstack.org/102738
> > 
> >
> > On Jun 25, 2014, at 9:51 PM, Matthew Oliver  > > wrote:
> >
> >>
> >> On Jun 26, 2014 12:12 PM, "Angus Salkeld"  >> > wrote:
> >> >
> > On 25/06/14 15:13, Clark Boylan wrote:
> >> On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto
> >>> mailto:adrian.o...@rackspace.com>> wrote:
> >>> Hello,
> >>>
> >>> Solum has run into a constraint with the current scheme for
> >>> requirements management within the OpenStack CI system. We have a
> >>> proposal for dealing with this constraint that involves making a
> >>> contribution to openstack-infra. This message explains the constraint,
> >>> and our proposal for addressing it.
> >>>
> >>> == Background ==
> >>>
> >>> OpenStack uses a list of global requirements in the requirements
> >>> repo[1], and each project has it’s own requirements.txt and
> >>> test-requirements.txt files. The requirements are satisfied by gate
> >>> jobs using pip configured to use the pypi.openstack.org
> >>>  mirror, which is periodically updated
> >>> with new content from pypi.python.org . One
> >>> motivation for doing this is that pypi.python.org
> >>>  may not be as fast or as reliable as a local
> >>> mirror. The gate/check jobs for the projects use the OpenStack
> >>> internal pypi mirror to ensure stability.
> >>>
> >>> The OpenStack CI system will sync up the requirements across all
> >>> the official projects and will create reviews in the participating
> >>> projects for any mis-matches. Solum is one of these projects, and
> >>> enjoys this feature.
> >>>
> >>> Another motivation is so that users of OpenStack will have one
> >>> single set of python package requirements/dependencies to install and
> >>> run the individual OpenStack components.
> >>>
> >>> == Problem ==
> >>>
> >>> Stackforge projects listed in openstack/requirements/projects.txt
> >>> that decide to depend on each other (for example, Solum wanting to
> >>> list mistralclient as a requirement) are unable to, because they are
> >>> not yet integrated, and are not listed in
> >>> openstack/requirements/global-requirements.txt yet. This means that in
> >>> order to depend on each other, a project 

Re: [openstack-dev] [Heat] Upwards-compatibility for HOT

2014-07-07 Thread Zane Bitter

On 07/07/14 18:13, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2014-07-07 14:25:50 -0700:


>I'd like to propose a policy that we bump the revision of HOT whenever
>we make a change from the previous stable version, and that we declare
>the new version stable at the end of each release cycle. Maybe we can
>post-date it to indicate the policy more clearly. (I'd also like to
>propose that the Juno version drops cfn-style function support.)


Agreed. I'm also curious if we're going to reject a template with
version 2013-05-23 that includes list_join. If we don't reject it, we
probably need to look at how to show the user warnings about
version/feature skew.


To be clear, my proposal is that we would reject it.

- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-07 Thread Joe Gordon
On Jul 7, 2014 9:50 AM, "Lisa" wrote:
>
> Hi all,
>
> during the last IRC meeting, for better understanding our proposal (i.e.
the FairShareScheduler), you suggested that we provide (for tomorrow's
meeting) a document which fully describes our use cases. Such a document is
attached to this e-mail.
> Any comment and feedback is welcome.

The attached document was very helpful, thank you.

It sounds like Amazon's concept of spot instances (as a user-facing
abstraction) would solve your use case in its entirety. I see spot
instances as the general solution to the question of how to keep a cloud at
full utilization. If so, then perhaps we can refocus this discussion on the
best way for OpenStack to support Amazon-style spot instances.

> Thanks a lot.
> Cheers,
> Lisa
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Emails sent to publicly archived mailing lists are *NOT* confidential Re: [third-party-ci][neutron] What is "Success" exactly?

2014-07-07 Thread Stefano Maffulli
> The contents of this message and any attachments to it are
> confidential and may be legally privileged. If you have received this
> message in error you should delete it from your system immediately
> and advise the sender.
> 
> To any recipient of this message within HP, unless otherwise stated,
> you should consider this message and attachments as "HP
> CONFIDENTIAL".

Someone has to stop this legally sounding silliness, I thought we were
done with it in 2007: it's your decision to send an email to a public
mailing list, which is archived and indexed in any possible way. Nobody
can consider messages sent to a mailing list confidential.

Warnings of this kind sent to mailing lists (at least those hosted on
lists.openstack.org) are:

 a) useless
 b) annoying
 c) too long (I think it's still good practice to keep email signatures
under 5 lines)

Just remove them please.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Ian Wells
On 7 July 2014 12:29, Scott Moser wrote:

>
> > I'd honestly love to see us just deprecate the metadata server.
>
> If I had to deprecate one or the other, I'd deprecate config drive.  I do
> realize that its simplicity is favorable, but not if it is insufficient.
>

The question of deprecation is one of what we can get away with changing in
the contract with VMs.  At the moment VMs don't have to do DHCP.  For as
long as that's true (and, at least with some of the VMs I work with, even
enabling DHCP involves sneaking in some config, so I would prefer that to
always be true) then we need config-drives as well as metadata servers.
(You can emulate it with cinder volumes, but they're short lived and the
whole thing's a bit horrid.  There's no other convenient way I know of
where you can make a disk that has initial content that is VM-specific and
with a VM lifetime.)
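
For context, a rough sketch of the two retrieval paths as seen from inside
the guest (the URL and the meta_data.json layout are the conventional
OpenStack ones; the device label, mount point and fallback logic here are
assumptions, not cloud-init's actual code):

    import json
    import os
    import subprocess
    import urllib2

    METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    CONFIG_DRIVE_LABEL = 'config-2'       # assumed default label
    MOUNT_POINT = '/mnt/config'


    def from_metadata_service():
        # Needs a working (usually DHCP-configured) network path.
        return json.load(urllib2.urlopen(METADATA_URL, timeout=10))


    def from_config_drive():
        # Needs no networking at all, just the attached block device.
        if not os.path.isdir(MOUNT_POINT):
            os.makedirs(MOUNT_POINT)
        subprocess.check_call(
            ['mount', '-o', 'ro',
             '/dev/disk/by-label/' + CONFIG_DRIVE_LABEL, MOUNT_POINT])
        path = os.path.join(MOUNT_POINT, 'openstack/latest/meta_data.json')
        with open(path) as f:
            return json.load(f)


    def get_metadata():
        try:
            return from_metadata_service()
        except Exception:
            return from_config_drive()

The second path is the one that keeps working for guests that never bring
up DHCP, which is the contract being discussed here.
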
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Upwards-compatibility for HOT

2014-07-07 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-07-07 14:25:50 -0700:
> With the Icehouse release we announced that there would be no further 
> backwards-incompatible changes to HOT without a revision bump. However, 
> I notice that we've already made an upward-incompatible change in Juno:
> 
> https://review.openstack.org/#/c/102718/
> 
> So a user will be able to create a valid template for a Juno (or later) 
> version of Heat with the version
> 
>heat_template_version: 2013-05-23
> 
> but the same template may break on an Icehouse installation of Heat with 
> the "stable" HOT parser. IMO this is almost equally as bad as breaking 
> backwards compatibility, since a user moving between clouds will 
> generally have no idea whether they are going forward or backward in 
> version terms.

Sounds like a bug in Juno that we need to fix. I agree, this is a new
template version.

> 
> (Note: AWS don't use the version field this way, because there is only 
> one AWS and therefore in theory they don't have this problem. This 
> implies that we might need a more sophisticated versioning system.)
> 

A good manual with a "this was introduced in version X" and "this was
changed in version Y" would, IMO, be enough to help users not go crazy
and help us know whether something is a bug or not. We can probably
achieve this entirely in the in-code template guide.

> I'd like to propose a policy that we bump the revision of HOT whenever 
> we make a change from the previous stable version, and that we declare 
> the new version stable at the end of each release cycle. Maybe we can 
> post-date it to indicate the policy more clearly. (I'd also like to 
> propose that the Juno version drops cfn-style function support.)

Agreed. I'm also curious if we're going to reject a template with
version 2013-05-23 that includes list_join. If we don't reject it, we
probably need to look at how to show the user warnings about
version/feature skew.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-07 Thread Brandon Logan
I'll +1 UNBOUND or DEFERRED status.  QUEUED does have a kind of implication 
that it will be provisioned without any further action whereas UNBOUND or 
DEFERRED imply that another action must take place for it to actually be 
provisioned.
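
To make that distinction concrete, a small illustrative sketch (the state
names and transitions are examples for discussion, not an agreed design):

    DEFERRED = 'DEFERRED'              # valid, but not bound to a load balancer
    PENDING_CREATE = 'PENDING_CREATE'  # provisioning has been dispatched
    ACTIVE = 'ACTIVE'
    DELETING = 'DELETING'              # cleanup dispatched to the backend
    DELETED = 'DELETED'                # record kept for usage/history queries
    ERROR = 'ERROR'

    ALLOWED_TRANSITIONS = {
        DEFERRED: {PENDING_CREATE, DELETING},
        PENDING_CREATE: {ACTIVE, ERROR},
        ACTIVE: {DELETING},
        DELETING: {DELETED, ERROR},
        DELETED: set(),
        ERROR: {DELETING},
    }


    def transition(current, new):
        if new not in ALLOWED_TRANSITIONS[current]:
            raise ValueError('illegal status transition %s -> %s'
                             % (current, new))
        return new

The point of DEFERRED here is that nothing moves it forward until some
other user action (attaching the entity) triggers PENDING_CREATE.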

Thanks,
Brandon

From: Jorge Miramontes [jorge.miramon...@rackspace.com]
Sent: Monday, July 07, 2014 12:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not 
exist in a driver backend

Hey Mark,

To add, one reason we have a DELETED status at Rackspace is that certain
sub-resources are still relevant to our customers. For example, we have a
usage sub-resource which reveals usage records for the load balancer. To
illustrate, a user issues a DELETE on /loadbalancers/{lb-id} but can still
issue a GET on /loadbalancers/{lb-id}/usage. If /loadbalancers/{lb-id} were
truly deleted (i.e. a 404 is returned) it wouldn't make RESTful sense to
expose the usage sub-resource. Furthermore, even if we don't plan on
having sub-resources that a user will actually query I would still like a
DELETED status as our customers use it for historical and debugging
purposes. It provides users with a sense of clarity and doesn't leave them
scratching their heads thinking, "How were those load balancers configured
when we had that issue the other day?" for example.

I agree on your objection for unattached objects assuming API operations
for these objects will be synchronous in nature. However, since the API is
supposed to be asynchronous, a QUEUED status will make sense for the API
operations that are truly asynchronous. In an earlier email I stated that
a QUEUED status would be beneficial when compared to just a BUILD status
because it would allow for more accurate metrics in regards to
provisioning time. Customers will complain more if it appears provisioning
times are taking a long time when in reality they are actually queued due
to high API traffic.

Thoughts?

Cheers,
--Jorge




On 7/7/14 9:32 AM, "Mark McClain" wrote:

>
>On Jul 4, 2014, at 5:27 PM, Brandon Logan wrote:
>
>> Hi German,
>>
>> That actually brings up another thing that needs to be done.  There is
>> no DELETED state.  When an entity is deleted, it is deleted from the
>> database.  I'd prefer a DELETED state so that should be another feature
>> we implement afterwards.
>>
>> Thanks,
>> Brandon
>>
>
>This is an interesting discussion since we would create an API
>inconsistency around possible status values.  Traditionally, status has
>been fabric status and we have not always well defined what the values
>should mean to tenants.  Given that this is an extension, I think that
>adding new values would be ok (Salvatore might have a different opinion
>than me).
>
>Right we've never had a deleted state because the record has been removed
>immediately in most implementations even if the backend has not fully
>cleaned up.  I was thinking for the v3 core we should have a DELETING
>state that is set before cleanup is dispatched to the backend
>driver/worker.  The record can then be deleted when the backend has
>cleaned up.
>
>For unattached objects, I'm -1 on QUEUED because some will interpret that
>the system is planning to execute immediate operations on the resource
>(causing customer queries/complaints about why it has not transitioned).
>Maybe use something like DEFERRED, UNBOUND, or VALIDATED?
>
>mark
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Upwards-compatibility for HOT

2014-07-07 Thread Zane Bitter
With the Icehouse release we announced that there would be no further 
backwards-incompatible changes to HOT without a revision bump. However, 
I notice that we've already made an upward-incompatible change in Juno:


https://review.openstack.org/#/c/102718/

So a user will be able to create a valid template for a Juno (or later) 
version of Heat with the version


  heat_template_version: 2013-05-23

but the same template may break on an Icehouse installation of Heat with 
the "stable" HOT parser. IMO this is almost equally as bad as breaking 
backwards compatibility, since a user moving between clouds will 
generally have no idea whether they are going forward or backward in 
version terms.


(Note: AWS don't use the version field this way, because there is only 
one AWS and therefore in theory they don't have this problem. This 
implies that we might need a more sophisticated versioning system.)


I'd like to propose a policy that we bump the revision of HOT whenever 
we make a change from the previous stable version, and that we declare 
the new version stable at the end of each release cycle. Maybe we can 
post-date it to indicate the policy more clearly. (I'd also like to 
propose that the Juno version drops cfn-style function support.)


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2014-07-07 10:41:34 -0700:
> So I've been thinking how to respond to this email, and here goes (shields
> up!),
> 
> First things first; thanks mark and victor for the detailed plan and
> making it visible to all. It's very nicely put together and the amount of
> thought put into it is great to see. I always welcome an effort to move
> toward a new structured & explicit programming model (which asyncio
> clearly helps make possible and strongly encourages/requires).
> 

I too appreciate the level of detail in the proposal. I think I
understand where it wants to go.

> So now to some questions that I've been thinking about how to
> address/raise/ask (if any of these appear as FUD, they were not meant to
> be):
> 
> * Why focus on a replacement low level execution model integration instead
> of higher level workflow library or service (taskflow, mistral... other)
> integration?
> 
> Since pretty much all of openstack is focused around workflows that get
> triggered by some API activated by some user/entity having a new execution
> model (asyncio) IMHO doesn't seem to be shifting the needle in the
> direction that improves the scalability, robustness and crash-tolerance of
> those workflows (and the associated projects those workflows are currently
> defined & reside in). I *mostly* understand why we want to move to asyncio
> (py3, getting rid of eventlet, better performance? new awesomeness...) but
> it doesn't feel that important to actually accomplish seeing the big holes
> that openstack has right now with scalability, robustness... Let's imagine
> a different view on this; if all openstack projects declaratively define
> the workflows their APIs trigger (nova is working on task APIs, cinder is
> getting there too...), and in the future when the projects are *only*
> responsible for composing those workflows and handling the API inputs &
> responses then the need for asyncio or other technology can move out from
> the individual projects and into something else (possibly something that
> is being built & used as we speak). With this kind of approach the
> execution model can be an internal implementation detail of the workflow
> 'engine/processor' (it will also be responsible for fault-tolerant, robust
> and scalable execution). If this seems reasonable, then why not focus on
> integrating said thing into openstack and move the projects to a model
> that is independent of eventlet, asyncio (or the next greatest thing)
> instead? This seems to push the needle in the right direction and IMHO
> (and hopefully others opinions) has a much bigger potential to improve the
> various projects than just switching to a new underlying execution model.
> 
> * Was the heat (asyncio-like) execution model[1] examined and learned from
> before considering moving to asyncio?
> 
> I will try not to put words into the heat developers mouths (I can't do it
> justice anyway, hopefully they can chime in here) but I believe that heat
> has a system that is very similar to asyncio and coroutines right now and
> they are actively moving to a different model due to problems in part due
> to using that coroutine model in heat. So if they are moving somewhat away
> from that model (to a more declaratively workflow model that can be
> interrupted and converged upon [2]) why would it be beneficial for other
> projects to move toward the model they are moving away from (instead of
> repeating the issues the heat team had with coroutines, ex, visibility
> into stack/coroutine state, scale limitations, interruptibility...)?
> 

I'd like to hear Zane's opinions as he developed the rather light weight
code that we use. It has been quite a learning curve for me but I do
understand how to use the task scheduler we have in Heat now.

Heat's model is similar to asyncio, but is entirely limited in scope. I
think it has stayed relatively manageable because it is really only used
for a few explicit tasks where a high degree of concurrency makes a lot
of sense. We are not using it for I/O concurrency (eventlet still does
that) but rather for request concurrency. So we tell nova to boot 100
servers with 100 coroutines that have 100 other coroutines to block
further execution until those servers are active. We are by no means
using it as a general purpose concurrency programming model.
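
Roughly, the shape of that pattern is something like the sketch below
(illustrative only, assuming a novaclient-style client object; Heat's real
scheduler drives generator-based tasks rather than a green pool, but the
request concurrency looks the same):

    import eventlet
    eventlet.monkey_patch()


    def create_and_wait(client, name, image, flavor):
        # One task per server: fire the create, then block (greenly)
        # until the server leaves the build state.
        server = client.servers.create(name=name, image=image, flavor=flavor)
        while client.servers.get(server.id).status not in ('ACTIVE', 'ERROR'):
            eventlet.sleep(2)
        return server


    def create_many(client, count, image, flavor):
        pool = eventlet.GreenPool(size=count)
        names = ['node-%d' % i for i in range(count)]
        return list(pool.imap(
            lambda name: create_and_wait(client, name, image, flavor), names))
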

That said, as somebody working on the specification to move toward a
more taskflow-like (perhaps even entirely taskflow-based) model in Heat,
I think that is the way to go. The fact that we already have an event
loop that doesn't need to be explicit except at the very lowest levels
makes me want to keep that model. And we clearly need help with how to
define workflows, which something like taskflow will do nicely.

>   
>   * A side-question, how do asyncio and/or trollius support debugging, do
> they support tracing individual co-routines? What about introspecting the
> state a coroutine has associated with it? Eventlet at leas

Re: [openstack-dev] [Heat] [Marconi] Heat and concurrent signal processing needs some deep thought

2014-07-07 Thread Ken Wronkiewicz
Yeah, this is really sticky when it comes to Auto Scaling.  Because you 
probably want a "Oh, we've got more load than expected, scale up a bit" policy 
and an "OMG! REDDIT FRONT PAGE!" policy.

The right thing is probably to be prepared to execute multiple concurrent 
signals simultaneously.  That's what Rackspace Auto Scale does.

And, yes, based on everything I've seen from customers using it in production, 
signals must not error unless your entire infrastructure is dispersing pieces 
of itself in disorderly pieces all over the floor.  Especially in the public 
cloud space, people just want a webhook URL and even doing OpenStack auth is 
too much trouble.

OTOH, I'm pretty sure that I'm fixated on the palatable and concrete.  An n+2
policy and an n+5 policy become an n+7 policy when you trigger both.  The
right answer is "Convergence" :)  Even an n-2 policy banged up against an n+5
policy can be explained without much ambiguity.

But when we start to talk about concurrently signalling different types of
things, this model falls apart.  A scaling webhook triggers in the middle of a 
deploy, for example.

I suspect that the user interest is properly represented by the "execute in 
parallel if you can, queue if you can't" logic case.

Except for the Ouroboros bros. (pun intended) How long is it till feature creep 
gives us a chain of events that makes that logic give us a nice deadlock?  
Either signals need to be only user-facing (which breaks the idea that user 
requests are handled just like proxied-user-requests from other systems) or all 
y'all get to flash back to the joys of reference counted garbage collection.

Yeah.  This could use some deepthink. Rest assured, our users will trigger 
interesting behaviors. :)
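
One sketch of that "parallel where possible, queued where not" behaviour,
serializing signals per stack while letting different stacks proceed
concurrently (purely illustrative, not what Heat or Auto Scale actually
does today):

    import collections
    import threading


    class SignalDispatcher(object):
        def __init__(self, handler):
            self.handler = handler          # callable(stack_id, payload)
            self.queues = collections.defaultdict(collections.deque)
            self.busy = set()
            self.lock = threading.Lock()

        def signal(self, stack_id, payload):
            # Accept the signal unconditionally; never bounce it back.
            with self.lock:
                self.queues[stack_id].append(payload)
                if stack_id in self.busy:
                    return                  # a worker is already draining
                self.busy.add(stack_id)
            threading.Thread(target=self._drain, args=(stack_id,)).start()

        def _drain(self, stack_id):
            while True:
                with self.lock:
                    if not self.queues[stack_id]:
                        self.busy.discard(stack_id)
                        return
                    payload = self.queues[stack_id].popleft()
                self.handler(stack_id, payload)

Even this toy version shows the hard part: the handler still has to decide
what two queued-up, conflicting signals mean when applied one after the
other.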

From: Clint Byrum [cl...@fewbar.com]
Sent: Monday, July 07, 2014 11:52 AM
To: openstack-dev
Subject: [openstack-dev] [Heat] [Marconi] Heat and concurrent signal
processing needs some deep thought

I just noticed this review:

https://review.openstack.org/#/c/90325/

And gave it some real thought. This will likely break any large scale
usage of signals, and I think breaks the user expectations. Nobody expects
to get a failure for a signal. It is one of those things that you fire and
forget. "I'm done, deal with it." If we start returning errors, or 409's
or 503's, I don't think users are writing their in-instance initialization
tooling to retry. I think we need to accept it and reliably deliver it.

Does anybody have any good ideas for how to go forward with this? I'd
much rather borrow a solution from some other project than try to invent
something for Heat.

I've added Marconi as I suspect there has already been some thought put
into how a user-facing set of tools would send messages.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Update behavior for CFN compatible resources

2014-07-07 Thread Steve Baker
On 07/07/14 20:37, Steven Hardy wrote:
> Hi all,
>
> Recently I've been adding review comments, and having IRC discussions about
> changes to update behavior for CloudFormation compatible resources.
>
> In several cases, folks have proposed patches which allow non-destructive
> update of properties which are not allowed on AWS (e.g. which would result
> in destruction of the resource were you to run the same template on CFN).
>
> Here's an example:
>
> https://review.openstack.org/#/c/98042/
>
> Unfortunately, I've not spotted all of these patches, and some have been
> merged, e.g:
>
> https://review.openstack.org/#/c/80209/
>
> Some folks have been arguing that this minor deviation from the AWS
> documented behavior is OK.  My argument is that it definitely is not,
> because if anyone who cares about heat->CFN portability develops a template
> on heat, then runs it on CFN, a non-destructive update suddenly becomes
> destructive, which is a bad surprise IMO.
>
> I think folks who want the more flexible update behavior should simply use
> the native resources instead, and that we should focus on aligning the CFN
> compatible resources as closely as possible with the actual behavior on
> CFN.
>
> What are people's thoughts on this?
>
> My request, unless others strongly disagree, is:
>
> - Contributors, please check the CFN docs before starting a patch
>   modifying update for CFN compatible resources
> - heat-core, please check the docs and don't approve patches which make
>   heat behavior diverge from that documented for CFN.
>
> The AWS docs are pretty clear about update behavior, they can be found
> here:
>
> http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
>
> The other problem, if we agree that aligning update behavior is desirable,
> is what we do regarding deprecation for existing diverged update behavior?
>
I've flagged a few AWS incompatible enhancements too.

I think any deviation from AWS compatibility should be considered a bug.
For each change we just need to evaluate whether users are depending on
a given non-AWS behavior to decide on a deprecation strategy.

For update-able properties I'd be inclined to just fix them. For
heat-specific properties/attributes we should flag them as deprecated
for a cycle and the deprecation message should encourage switching to
the native heat resource.
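
As a schematic of that policy (this is not the actual Heat resource-plugin
API, and the property names are placeholders standing in for whatever the
AWS documentation lists as updatable for the resource in question):

    class UpdateReplace(Exception):
        """Signal that the resource must be replaced rather than updated."""


    class CfnCompatibleResource(object):
        # Assumption: transcribed from the AWS docs for this resource type.
        update_allowed_properties = ('DesiredCapacity', 'MinSize', 'MaxSize')

        def handle_update(self, old_props, new_props):
            changed = set(k for k in new_props
                          if new_props[k] != old_props.get(k))
            not_allowed = changed - set(self.update_allowed_properties)
            if not_allowed:
                # Mirror CFN: anything not documented as updatable forces
                # replacement, even if an in-place update would be possible.
                raise UpdateReplace('replacement required for: %s'
                                    % ', '.join(sorted(not_allowed)))
            self.apply_in_place(dict((k, new_props[k]) for k in changed))

        def apply_in_place(self, props):
            raise NotImplementedError
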



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-07 Thread Joe Gordon
On Jul 3, 2014 8:57 AM, "Anita Kuno" wrote:
>
> On 07/03/2014 06:22 AM, Sullivan, Jon Paul wrote:
> >> -Original Message-
> >> From: Anita Kuno [mailto:ante...@anteaya.info]
> >> Sent: 01 July 2014 14:42
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is
"Success"
> >> exactly?
> >>
> >> On 06/30/2014 09:13 PM, Jay Pipes wrote:
> >>> On 06/30/2014 07:08 PM, Anita Kuno wrote:
>  On 06/30/2014 04:22 PM, Jay Pipes wrote:
> > Hi Stackers,
> >
> > Some recent ML threads [1] and a hot IRC meeting today [2] brought
> > up some legitimate questions around how a newly-proposed
> > Stackalytics report page for Neutron External CI systems [2]
> > represented the results of an external CI system as "successful" or
> >> not.
> >
> > First, I want to say that Ilya and all those involved in the
> > Stackalytics program simply want to provide the most accurate
> > information to developers in a format that is easily consumed. While
> > there need to be some changes in how data is shown (and the wording
> > of things like "Tests Succeeded"), I hope that the community knows
> > there isn't any ill intent on the part of Mirantis or anyone who
> > works on Stackalytics. OK, so let's keep the conversation civil --
> > we're all working towards the same goals of transparency and
> > accuracy. :)
> >
> > Alright, now, Anita and Kurt Taylor were asking a very poignant
> > question:
> >
> > "But what does CI tested really mean? just running tests? or tested
> > to pass some level of requirements?"
> >
> > In this nascent world of external CI systems, we have a set of
> > issues that we need to resolve:
> >
> > 1) All of the CI systems are different.
> >
> > Some run Bash scripts. Some run Jenkins slaves and devstack-gate
> > scripts. Others run custom Python code that spawns VMs and publishes
> > logs to some public domain.
> >
> > As a community, we need to decide whether it is worth putting in the
> > effort to create a single, unified, installable and runnable CI
> > system, so that we can legitimately say "all of the external systems
> > are identical, with the exception of the driver code for vendor X
> > being substituted in the Neutron codebase."
> >
> > If the goal of the external CI systems is to produce reliable,
> > consistent results, I feel the answer to the above is "yes", but I'm
> > interested to hear what others think. Frankly, in the world of
> > benchmarks, it would be unthinkable to say "go ahead and everyone
> > run your own benchmark suite", because you would get wildly
> > different results. A similar problem has emerged here.
> >
> > 2) There is no mediation or verification that the external CI system
> > is actually testing anything at all
> >
> > As a community, we need to decide whether the current system of
> > self-policing should continue. If it should, then language on
> > reports like [3] should be very clear that any numbers derived from
> > such systems should be taken with a grain of salt. Use of the word
> > "Success" should be avoided, as it has connotations (in English, at
> > least) that the result has been verified, which is simply not the
> > case as long as no verification or mediation occurs for any external
> >> CI system.
> >
> > 3) There is no clear indication of what tests are being run, and
> > therefore there is no clear indication of what "success" is
> >
> > I think we can all agree that a test has three possible outcomes:
> > pass, fail, and skip. The results of a test suite run therefore is
> > nothing more than the aggregation of which tests passed, which
> > failed, and which were skipped.
> >
> > As a community, we must document, for each project, the expected
> > set of tests that must be run for each merged patch into
> > the project's source tree. This documentation should be discoverable
> > so that reports like [3] can be crystal-clear on what the data shown
> > actually means. The report is simply displaying the data it receives
> > from Gerrit. The community needs to be proactive in saying "this is
> > what is expected to be tested." This alone would allow the report to
> > give information such as "External CI system ABC performed the
> >> expected tests. X tests passed.
> > Y tests failed. Z tests were skipped." Likewise, it would also make
> > it possible for the report to give information such as "External CI
> > system DEF did not perform the expected tests.", which is excellent
> > information in and of itself.
> >
> > ===
> >
> > In thinking about the likely answers to the above questions, I
> > believe it would be prudent to change the Stackalytics report in
> > 

Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Nikhil Manchanda

Denis Makogon writes:

> On Mon, Jul 7, 2014 at 3:40 PM, Amrith Kumar wrote:
>
>> Denis Makogon (dmako...@mirantis.com) writes:
>>
> [...]
>>
>> I think it is totally ludicrous (and to all the technical writers who work
>> on OpenStack, downright offensive) to say the “docs are useless”. Not only
>> have I been able to install and successfully operate an OpenStack
>> installation by (largely) following the documentation, but
>> “trove-integration” and “redstack” are useful for developers, though I would
>> highly doubt that a production deployment of Trove would use ‘redstack’.
>>
>
> Amrith, those docs don't reflect any post-deployment steps; moreover, the docs
> still suggest using trove-cli, which was deprecated a long time ago. I do
> agree that the trove-integration project can't be used as a production deployment
> system, but for first try-outs it is more than enough.
>

I think we're doing ourselves a great disservice here by calling our own
docs "useless", and yet not taking any steps to correct the issue at
hand. If the docs are missing certain key steps we need to ensure that
they are updated and improved as we go along. I, for one, am pretty
certain that the image building and post-deploy steps for trove do work,
since that's what we currently use in the image build job and the
integration tests, and they've been passing pretty consistently.

I'm going to add a "Scrub the Docs" agenda item to this week's Trove
meeting. I'd really like those of us who have a few cycles to clean-up
the docs we have, and make sure that information in them is
up-to-date. I can help lead this effort.


> FYI, Trove is not fully integrated with devstack, so personally I'd
> suggest using https://github.com/openstack/trove-integration [3] for a simple (3
> clicks) Trove + DevStack deployment.

This is no longer true. We've recently done a lot of work here to get
Trove fully integrated with devstack. A trove guest-image is now
uploaded into glance as part of the trove install in devstack.
Appropriate default datastores, and versions are also created, making
the trove install fully usable.

If you're running trove functional tests, you probably want to stick
with trove-integration for now, since it sets up some test conf values
that are needed for the functional tests. But if all you need to do is
to take trove for an end-to-end test-run, a devstack install with the
trove services enabled should be sufficient.


Syed, if you're still having trouble with any of this, please feel free
to reach out to me or anyone else on #openstack-trove. We'd be happy
to assist you with any issues you might have encountered.

Hope this helps,

Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy around Requirements Adds (was: New class of requirements for Stackforge projects)

2014-07-07 Thread Sean Dague
This thread was unfortunately hidden under a project specific tag (I
have thus stripped all the tags).

The crux of the argument here is the following:

Is a stackforge project able to propose additions to
global-requirements.txt that aren't used by any projects in OpenStack?

I believe the answer is firmly *no*.

global-requirements.txt provides a way for us to have a single point of
vetting for requirements for OpenStack. It lets us assess licensing,
maturity, current state of packaging, python3 support, all in one place.
And it lets us enforce that integration of OpenStack projects all run
under a well understood set of requirements.
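
The enforcement itself is conceptually simple; a rough sketch of the check
(illustrative only; the real job lives in the openstack/requirements
tooling and parses requirement specifiers properly):

    import re


    def requirement_name(line):
        # 'python-mistralclient>=0.0.1  # comment' -> 'python-mistralclient'
        line = line.split('#', 1)[0].strip()
        if not line:
            return None
        return re.split(r'[<>=!;\s\[]', line, 1)[0].lower()


    def disallowed_requirements(project_lines, global_lines):
        allowed = set(requirement_name(l) for l in global_lines)
        allowed.discard(None)
        return [l for l in project_lines
                if requirement_name(l) and requirement_name(l) not in allowed]
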

The requirements sync that happens after requirements land is basically
just a nicety for getting openstack projects to the tested state by
eventual consistency.

If a stackforge project wants to be limited by global-requirements,
that's cool. We have a mechanism for that. However, they are accepting
that they will be limited by it. That means they live with how the
OpenStack project establishes that list. It specifically means they
*don't* get to propose any new requirements.

Basically in this case Solum wants to have its cake and eat it too. Both
be enforced on requirements, and not be enforced. Or some 3rd thing that
means the same as that.

The near term fix is to remove solum from projects.txt.

On 06/26/2014 02:00 AM, Adrian Otto wrote:
> Ok,
> 
> I submitted and abandoned a couple of reviews[1][2] for a solution aimed
> to meet my goals without adding a new per-project requirements file. The
> flaw with this approach is that pip may install other requirements when
> installing the one(s) loaded from the fallback mirror, and those may
> conflict with the ones loaded from the primary mirror.
> 
> After discussing this further in #openstack-infra this evening, we
> should give serious consideration to adding python-mistralclient to
> global requirements. I have posted a review[3] for that to get input
> from the requirements review team.
> 
> Thanks,
> 
> Adrian
> 
> [1] https://review.openstack.org/102716
> [2] https://review.openstack.org/102719
> [3] https://review.openstack.org/102738
> 
> 
> On Jun 25, 2014, at 9:51 PM, Matthew Oliver wrote:
> 
>>
>> On Jun 26, 2014 12:12 PM, "Angus Salkeld" wrote:
>> >
> On 25/06/14 15:13, Clark Boylan wrote:
>> On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto
>>> <adrian.o...@rackspace.com> wrote:
>>> Hello,
>>>
>>> Solum has run into a constraint with the current scheme for
>>> requirements management within the OpenStack CI system. We have a
>>> proposal for dealing with this constraint that involves making a
>>> contribution to openstack-infra. This message explains the constraint,
>>> and our proposal for addressing it.
>>>
>>> == Background ==
>>>
>>> OpenStack uses a list of global requirements in the requirements
>>> repo[1], and each project has its own requirements.txt and
>>> test-requirements.txt files. The requirements are satisfied by gate
>>> jobs using pip configured to use the pypi.openstack.org
>>> mirror, which is periodically updated
>>> with new content from pypi.python.org. One
>>> motivation for doing this is that pypi.python.org
>>> may not be as fast or as reliable as a local
>>> mirror. The gate/check jobs for the projects use the OpenStack
>>> internal pypi mirror to ensure stability.
>>>
>>> The OpenStack CI system will sync up the requirements across all
>>> the official projects and will create reviews in the participating
>>> projects for any mis-matches. Solum is one of these projects, and
>>> enjoys this feature.
>>>
>>> Another motivation is so that users of OpenStack will have one
>>> single set of python package requirements/dependencies to install and
>>> run the individual OpenStack components.
>>>
>>> == Problem ==
>>>
>>> Stackforge projects listed in openstack/requirements/projects.txt
>>> that decide to depend on each other (for example, Solum wanting to
>>> list mistralclient as a requirement) are unable to, because they are
>>> not yet integrated, and are not listed in
>>> openstack/requirements/global-requirements.txt yet. This means that in
>>> order to depend on each other, a project must withdraw from
>>> projects.txt and begin using pip with pypi.python.org
>>> to satisfy all of their requirements. I
>>> strongly dislike this option.
>>>
>>> Mistral is still evolving rapidly, and we don’t think it makes
>>> sense for them to pursue integration right now. The upstream
>>> distributions who include packages to support OpenStack will also
>>> prefer not to deal with a requirement that will be cutting a new
>>> version every week or two in order to satisfy evolving needs as Solum
>>> and other consumers of Mistral help refine how it works.
>>>
>>> == Proposal =

[openstack-dev] [Neutron][IPv6] Volunteer to run tomorrow's IRC meeting?

2014-07-07 Thread Collins, Sean
Hi,

I am currently at a book sprint and will probably not be able to run the
meeting. If someone could volunteer to chair the meeting and run it,
that would be great.

Any takers?
-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-07 Thread Matt Riedemann



On 7/7/2014 3:28 PM, Jay Pipes wrote:



On 07/07/2014 04:17 PM, Mike Bayer wrote:


On 7/7/14, 3:57 PM, Matt Riedemann wrote:




Regarding the eventlet + mysql sadness, I remembered this [1] in the
nova.db.api code.

I'm not sure if that's just nova-specific right now, I'm a bit too
lazy at the moment to check if it's in other projects, but I'm not
seeing it in neutron, for example, and makes me wonder if it could
help with the neutron db lock timeouts we see in the gate [2].  Don't
let the bug status fool you, that thing is still showing up, or a
variant of it is.

There are at least 6 lock-related neutron bugs hitting the gate [3].

[1] https://review.openstack.org/59760
[2] https://bugs.launchpad.net/neutron/+bug/1283522
[3] http://status.openstack.org/elastic-recheck/



yeah, tpool, correct me if I'm misunderstanding, we take some API code
that is 90% fetching from the database, we have it all under eventlet,
the purpose of which is, IO can be shoveled out to an arbitrary degree,
e.g. 500 concurrent connections type of thing, but then we take all the
IO (MySQL access) and put it into a thread pool anyway.


Yep. It makes no sense to do that, IMO.

The solution is to use a non-blocking MySQLdb library which will yield
appropriately for evented solutions like gevent and eventlet.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, nevermind my comment, since it's not working without an eventlet 
patch, details in the nova bug here [1].  And it sounds like it's still 
not 100% with the patch.


[1] https://bugs.launchpad.net/nova/+bug/1171601

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Audit Log

2014-07-07 Thread Jay Pipes

On 07/04/2014 07:56 AM, Noorul Islam K M wrote:


Hello all,

I was looking for audit logs in nova. I found [1] but could not find the
launchpad entry audit-logging as mentioned in the wiki page.

Is this yet to be implemented or am I looking at the wrong place?


The audit logging functionality is being removed in Juno. It was a 
NASA-specific thing to begin with and really belonged as a notification 
system, not a logging level.


See here for the blueprint:

https://review.openstack.org/#/c/91446/4/specs/juno/log-guidelines.rst

Look for the section "Deprecate and remove AUDIT level"

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Discussion of capabilities feature

2014-07-07 Thread Joe Gordon
On Jul 3, 2014 6:38 PM, "Doug Shelley" wrote:
>
> Iccha,
>
>
>
> Thanks for the feedback. I guess I should have been more specific – my
intent here was to lay out use cases and requirements and not talk about
specific implementations. I believe that if we can get agreement on the
requirements, it will be easier to review/discuss design/implementation
choices. Some of your comments are specific to how one might chose to
implement against these requirements – I think we should defer those
questions until we gain some agreement on requirements.
>
>
>
> More feedback below…marked with [DAS]
>
>
> Regards,
>
> Doug
>
>
>
> From: Iccha Sethi [mailto:iccha.se...@rackspace.com]
> Sent: July-03-14 4:36 PM
>
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [trove] Discussion of capabilities feature
>
>
>
> Hey Doug,
>
>
>
> Thank you so much for putting this together. I have some
questions/clarifications(inline) which would be useful to be addressed in
the spec.
>
>
>
>
>
> From: Doug Shelley 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
> Date: Thursday, July 3, 2014 at 2:20 PM
> To: "OpenStack Development Mailing List (not for usage questions) (
openstack-dev@lists.openstack.org)" 
> Subject: [openstack-dev] [trove] Discussion of capabilities feature
>
>
>
> At yesterday's Trove team meeting [1] there was significant discussion
around the Capabilities [2] feature. While the community previously
approved a BP and some of the initial implementation, it is apparent now
that there is no agreement in the community around the requirements, use
cases or proposed implementation.
>
>
>
> I mentioned in the meeting that I thought it would make sense to adjust
the current BP and spec to reflect the concerns and hopefully come up with
something that we can get consensus on. Ahead of this, I thought it
would be worth trying to write up some of the key points and get some feedback here
before updating the spec.
>
>
>
> First, here are what I think the goals of the Capabilities feature are:
>
> 1. Provide other components with a mechanism for understanding which
aspects of Trove are currently available and/or in use
>
> >> Good point about communicating to other components. We can highlight
how this would help other projects like horizon dynamically modify their UI
based on the api response.
>
> [DAS] Absolutely
>
>
>
> [2] "This proposal includes the ability to setup different capabilities
for different datastore versions. “ So capabilities is specific to data
stores/datastore versions and not for trove in general right?
>
>
>
> [DAS] This is from the original spec – I kind of pushed the reset to make
sure we understand the requirements at this point. Although what the
requirements below contemplate is certainly oriented around datastore
managers/datastores and versions.
>
>
>
> Also it would be useful for us as a community to maybe lay some ground
rules for what is a capability and what is not in the spec. For example,
how to distinguish what goes in
https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L273 as
a config value and what does not.
>
> [DAS] Hopefully this will become clearer through this process
>
>
>
> 2. Allow operators the ability to control some aspects of Trove at
deployment time
>
> >> If we are controlling the aspects at deploy time what advantages do
having tables like capabilities and capabilities_overrides offer over
having in the config file under the config groups for different data stores
like [mysql][redis] etc? I think it would be useful to document these
answers because they might keep resurfacing in the future.
>
> [DAS] Certainly at the time the design/implementation is fleshed out
these choices would be relevant to be discussed.
>
> Also want to make sure we are not trying to solve the problem of config
override during run time here because that is an entirely different problem
not in scope here.
>
>
>
> Use Cases
>
>
>
> 1. Unimplemented feature - this is the case where one/some datastore
managers provide support for some specific capability but others don't. A
good example would be replication support as we are only planning to
support the MySQL manager in the first version. As other datastore managers
gain support for the capability, these would be enabled.
>
> 2. Unsupported feature - similar to #1 except this would be the case
where the datastore manager inherently doesn't support the capability. For
example, Redis doesn't have support for volumes.
>
> 3. Operator controllable feature - this would be a capability that can be
controlled at deployment time at the option of the operator. For example,
whether to provide access to the root user on instance creation.
>
> >> Are not 1 and 2 set at deploy time as well?
>
> [DAS] I see 1 and 2 and basically baked into a particular version of the
product and provided at run time.
>
>
>
> 4. Downstream capabilities addition - basica

Re: [openstack-dev] [keystone][specs] listing the entire API in a new spec

2014-07-07 Thread Dolph Mathews
On Mon, Jul 7, 2014 at 2:51 PM, Anne Gentle wrote:

>
>
>
> On Mon, Jul 7, 2014 at 2:43 PM, Steve Martinelli wrote:
>
>> >>1) We already have identity-api, which will need to be updated once the
>> spec is completed anyway.
>>
>> >So my thinking is to merge the content of openstack/identity-api into
>> openstack/keystone-specs. We use identity-api just like we use
>> keystone-specs anyway, but only for a subset of >our work.
>>
>> I think that would solve a lot of the issues I'm having with the current
>> spec-process. I really don't want to have the same content being managed in
>> two different places (even though the specs content probably won't be
>> managed). Chalk it up to another discussion at the hackathon.
>>
>
> I like this idea too. Ideally we'd convince the rest of the projects to
> treat theirs the same way. It seems like the replication of info is what
> you want to avoid.
>

Replication leads to inconsistency, and that's exactly what I'd like to
avoid.


>
> We also have end-users relying on this information for creating and
> working on client tools and SDKs, though. I don't really want to publish
> end-user documentation out of -specs repos. So do you think there is
> sufficient information in the api-site repo for Identity API v3 for end
> users?
>

Why not? AFAICT, they're referring directly to openstack/identity-api for
the most part, because (and I assume you're referring to [1] when you say
"api-site") it's currently regarded as the source of truth for the API.
Although the API site is prettier, openstack/identity-api provides
human-readable documentation on the API, making the API site redundant,
unmaintained, and therefore inconsistent and out of date. The Identity v3
slice of the API site causes more pain than anything else for me, so my
perspective is probably biased here.

[1] http://developer.openstack.org/api-ref-identity-v3.html


> Thanks,
> Anne
>
>
>>
>>
>> Regards,
>>
>> Steve Martinelli
>> Software Developer - Openstack
>> Keystone Core Member
>> --
>> Phone: 1-905-413-2851
>> E-mail: steve...@ca.ibm.com
>> 8200 Warden Ave
>> Markham, ON L6G 1C7
>> Canada
>>
>>
>>
>>
>>
>> From: Dolph Mathews
>> To: "OpenStack Development Mailing List (not for usage
>> questions)" ,
>> Date: 07/07/2014 01:39 PM
>> Subject: Re: [openstack-dev] [keystone][specs] listing the entire
>> API in a new spec
>> --
>>
>>
>>
>>
>> On Fri, Jul 4, 2014 at 12:31 AM, Steve Martinelli <steve...@ca.ibm.com>
>> wrote:
>> To add to the growing pains of keystone-specs, one thing I've noticed is,
>> there is inconsistency in the 'REST API Impact' section.
>>
>> To be clear here, I don't mean we shouldn't include what new APIs will be
>> created, I think that is essential. But rather, remove the need to
>> specifically spell out the request and response blocks.
>>
>> Personally, I find it redundant for a few reasons:
>>
>> Agree, we need to eliminate the redundancy...
>>
>>
>>
>> 1) We already have identity-api, which will need to be updated once the
>> spec is completed anyway.
>>
>> So my thinking is to merge the content of openstack/identity-api into
>> openstack/keystone-specs. We use identity-api just like we use
>> keystone-specs anyway, but only for a subset of our work.
>>
>>
>> 2) It's easy to get bogged down in the spec review as it is, I don't want
>> to have to point out mistakes in the request/response blocks too (as I'll
>> need to do that when reviewing the identity-api patch anyway).
>>
>> I personally see value in having them proposed as one patchset - it's all
>> design work, so I think it should be approved as a cohesive piece of design.
>>
>>
>> 3) Come time to propose the identity-api patch, there might be
>> differences in what was proposed in the spec.
>>
>> There *shouldn't* be though... unless you're just talking about
>> typos/etc. It's possible to design an unimplementable or unusable API
>> though, and that can be discovered (at latest) by attempting an
>> implementation... at that point, I think it's fair to go back and revise
>> the spec/API with the solution.
>>
>>
>>
>> Personally I'd be OK with just stating the HTTP method and the endpoint.
>> Thoughts?
>>
>> Not all API-impacting changes introduce new endpoint/method combinations,
>> they may just add a new attribute to an existing resource - and this is
>> still a bit redundant with the identity-api repo.
>>
>>
>>
>> Many apologies in advance for my pedantic-ness!
>>
>> Laziness*
>>
>> (lazy engineers are just more efficient)
>>
>>
>> Regards,
>>
>> Steve Martinelli
>> Software Developer - Openstack
>> Keystone Core Member
>> --
>> Phone: 1-905-413-2851
>> E-mail: steve...@ca.ibm.com
>> 8200 Warden Ave
>> Markham, ON L6G 1C7
>> Canada
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org

Re: [openstack-dev] Server groups specified by name

2014-07-07 Thread Joe Gordon
On Jul 7, 2014 3:47 PM, "Chris Friesen" wrote:
>
> On 07/07/2014 12:35 PM, Day, Phil wrote:
>>
>> Hi Folks,
>>
>> I noticed a couple of changes that have just merged to allow the server
>> group hints to be specified by name (some legacy behavior around
>> automatically creating groups).
>>
>> https://review.openstack.org/#/c/83589/
>>
>> https://review.openstack.org/#/c/86582/
>>
>> But group names aren’t constrained to be unique, and the method called
>> to get the group instance_group_obj.InstanceGroup.get_by_name() will
>> just return the first group it finds with that name (which could be
>> either the legacy group or some new group, in which case the behavior is
>> going to be different from the legacy behavior, I think)?
>>
>> I’m thinking that there may need to be some additional logic here, so
>> that group hints passed by name will fail if there is an existing group
>> with a policy that isn’t “legacy” – and equally perhaps group creation
>> needs to fail if a legacy group exists with the same name?
>
>
> Sorry, forgot to put this in my previous message.  I've been advocating
the ability to use names instead of UUIDs for server groups pretty much
since I saw them last year.
>
> I'd like to just enforce that server group names must be unique within a
tenant, and then allow names to be used anywhere we currently have UUIDs
(the way we currently do for instances).  If there is ambiguity (like from
admin doing an operation where there are multiple groups with the same name
in different tenants) then we can have it fail with an appropriate error
message.

The question here is not just about server group names, but all names.
Having one name be unique and not another (instance names) is a recipe for
a poor user experience. Unless there is a strong reason why our current
model is bad (non-unique names), I don't think this type of change is
worth the impact on users.
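
If names were accepted wherever UUIDs are today, the ambiguity would at
least have to be surfaced explicitly; a minimal sketch of that idea
(hypothetical helper, not the nova code under discussion):

    import uuid


    def resolve_group(groups, ref):
        # groups: iterable of dicts with 'uuid' and 'name'; ref: user input.
        try:
            uuid.UUID(ref)
            matches = [g for g in groups if g['uuid'] == ref]
        except ValueError:
            matches = [g for g in groups if g['name'] == ref]
        if not matches:
            raise LookupError('no server group matches %r' % ref)
        if len(matches) > 1:
            raise LookupError('%r is ambiguous: %d groups share that name'
                              % (ref, len(matches)))
        return matches[0]
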

>
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-07 Thread Jay Pipes



On 07/07/2014 04:17 PM, Mike Bayer wrote:


On 7/7/14, 3:57 PM, Matt Riedemann wrote:




Regarding the eventlet + mysql sadness, I remembered this [1] in the
nova.db.api code.

I'm not sure if that's just nova-specific right now, I'm a bit too
lazy at the moment to check if it's in other projects, but I'm not
seeing it in neutron, for example, and makes me wonder if it could
help with the neutron db lock timeouts we see in the gate [2].  Don't
let the bug status fool you, that thing is still showing up, or a
variant of it is.

There are at least 6 lock-related neutron bugs hitting the gate [3].

[1] https://review.openstack.org/59760
[2] https://bugs.launchpad.net/neutron/+bug/1283522
[3] http://status.openstack.org/elastic-recheck/



yeah, tpool, correct me if I'm misunderstanding, we take some API code
that is 90% fetching from the database, we have it all under eventlet,
the purpose of which is, IO can be shoveled out to an arbitrary degree,
e.g. 500 concurrent connections type of thing, but then we take all the
IO (MySQL access) and put it into a thread pool anyway.


Yep. It makes no sense to do that, IMO.

The solution is to use a non-blocking MySQLdb library which will yield 
appropriately for evented solutions like gevent and eventlet.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Joe Gordon
On Mon, Jul 7, 2014 at 8:02 AM, Daniel P. Berrange wrote:

> On Mon, Jul 07, 2014 at 02:38:57PM +, Dugger, Donald D wrote:
> > Well, my main thought is that I would prefer to see the gantt split
> > done sooner rather than later.  The reality is that we've been trying
> > to split out the scheduler for months and we're still not there.  Until
> > we bite the bullet and actually do the split I'm afraid we'll still be
> > here discussing the `best` way to do the split at the K & L summits
> > (there's a little bit of `the perfect is the enemy of the good' happening
> > here).  With the creation of the client library we've created a good
> > seam to split out the scheduler, let's do the split and fix the remaining
> > problems (aggregates and instance group references).
>
> > To address some specific points:
>
> > 2)  We won't get around to creating parity between gantt and nova.  Gantt
> > will never be the default scheduler until it has complete parity with the
> > nova scheduler, that should give us sufficient incentive to make sure we
> > achieve parity as soon as possible.
>

> Although it isn't exactly the same situation, we do have history with
> Neutron/nova-network showing that kind of incentive to be insufficient
> to make the work actually happen. If Gantt remained a subset of features
> of the Nova scheduler, this might leave incentive to address the gaps,
> but I fear that other unrelated features will be added to Gantt that
> are not in Nova, and then we'll be back in the Neutron situation pretty
> quickly where both options have some features the other option lacks.
>
> > 3)  The split should be done at the beginning of the cycle.  I don't
> > see a need for that, we should do the split whenever we are ready.
> > Since gantt will be optional it shouldn't affect release issues with
> > nova and the sooner we have a separate tree the sooner people can test
> > and develop on the gantt tree.
>
> If we're saying Gantt is optional, this implies the existing Nova code
> is remaining. This seems to leave us with the neutron/nova-network
> situation again of maintaining two code bases again, and likely the
> people who were formerly fixing the bugs in nova scheduler codebase
> would be focused on gantt leaving the nova code to slowly bitrot.
>


I agree with Daniel, we should not make Gantt optional otherwise we risk
ending up in a neutron/nova-network scenario. IMHO the workflow from the
consumers point of view should be something along the lines of:

* In release X, nova-scheduler is deprecated and will be removed in N
cycles with Gantt as the default scheduler (along with a robust migration
strategy)
* In release X+N we delete nova-scheduler


>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-07 Thread Mike Bayer

On 7/7/14, 3:57 PM, Matt Riedemann wrote:
>
>
>
> Regarding the eventlet + mysql sadness, I remembered this [1] in the
> nova.db.api code.
>
> I'm not sure if that's just nova-specific right now, I'm a bit too
> lazy at the moment to check if it's in other projects, but I'm not
> seeing it in neutron, for example, and makes me wonder if it could
> help with the neutron db lock timeouts we see in the gate [2].  Don't
> let the bug status fool you, that thing is still showing up, or a
> variant of it is.
>
> There are at least 6 lock-related neutron bugs hitting the gate [3].
>
> [1] https://review.openstack.org/59760
> [2] https://bugs.launchpad.net/neutron/+bug/1283522
> [3] http://status.openstack.org/elastic-recheck/


yeah, tpool, correct me if I'm misunderstanding, we take some API code
that is 90% fetching from the database, we have it all under eventlet,
the purpose of which is, IO can be shoveled out to an arbitrary degree,
e.g. 500 concurrent connections type of thing, but then we take all the
IO (MySQL access) and put it into a thread pool anyway.

Why are we doing this?   Assuming we currently have a 500+ concurrent DB
connections use case, has anyone demonstrated this actually working? 

Keep in mind, I'm a total dummy with async - other than the usual client
side JS/AJAX experience, I have very little "purely async API"
experience.But I've been putting out these questions on twitter and
elsewhere for some time and nobody is saying that I'm totally getting it
wrong.  I *should* be wrong.
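
For readers following along, the pattern in question looks roughly like
this (a sketch of the approach taken in the nova review quoted above,
not its actual code; the connection details are made up):

    import eventlet
    eventlet.monkey_patch()

    import MySQLdb                    # blocking C driver
    from eventlet import tpool


    def get_connection():
        return MySQLdb.connect(host='localhost', user='nova',
                               passwd='secret', db='nova')


    def fetch_instances(conn):
        # Each blocking call is handed to eventlet's OS-thread pool so it
        # cannot stall the event loop while waiting on the network.
        cur = conn.cursor()
        tpool.execute(cur.execute, 'SELECT id, hostname FROM instances')
        return tpool.execute(cur.fetchall)

Every query still pays a hop out to a real thread and back, which is
exactly the overhead being questioned above.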



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-07 Thread Jay Pipes

On 07/02/2014 09:23 PM, Mike Bayer wrote:

I've just added a new section to this wiki, "MySQLdb + eventlet = sad",
summarizing some discussions I've had in the past couple of days about
the ongoing issue that MySQLdb and eventlet were not meant to be used
together.   This is a big one to solve as well (though I think it's
pretty easy to solve).

https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad


It's eminently solvable.

Facebook and Google engineers have already created a nonblocking MySQLdb 
fork here:


https://github.com/chipturner/MySQLdb1

We just need to test it properly, package it up properly and get it 
upstreamed into MySQLdb.


Best,
-jay


On 6/30/14, 12:56 PM, Mike Bayer wrote:

Hi all -

For those who don't know me, I'm Mike Bayer, creator/maintainer of
SQLAlchemy, Alembic migrations and Dogpile caching.   In the past month
I've become a full time Openstack developer working for Red Hat, given
the task of carrying Openstack's database integration story forward.
To that extent I am focused on the oslo.db project which going forward
will serve as the basis for database patterns used by other Openstack
applications.

I've summarized what I've learned from the community over the past month
in a wiki entry at:

https://wiki.openstack.org/wiki/Openstack_and_SQLAlchemy

The page also refers to an ORM performance proof of concept which you
can see at https://github.com/zzzeek/nova_poc.

The goal of this wiki page is to publish to the community what's come up
for me so far, to get additional information and comments, and finally
to help me narrow down the areas in which the community would most
benefit by my contributions.

I'd like to get a discussion going here, on the wiki, on IRC (where I am
on freenode with the nickname zzzeek) with the goal of solidifying the
blueprints, issues, and SQLAlchemy / Alembic features I'll be focusing
on as well as recruiting contributors to help in all those areas.  I
would welcome contributors on the SQLAlchemy / Alembic projects directly
as well, as we have many areas that are directly applicable to Openstack.

I'd like to thank Red Hat and the Openstack community for welcoming me
on board and I'm looking forward to digging in more deeply in the coming
months!

- mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Joe Gordon
On Mon, Jul 7, 2014 at 7:53 AM, Sylvain Bauza  wrote:

> Le 07/07/2014 12:00, Michael Still a écrit :
> > I think you'd be better of requesting an exception for your spec than
> > splitting the scheduler immediately. These refactorings need to happen
> > anyways, and if your scheduler work diverges too far from nova then
> > we're going to have a painful time getting things back in sync later.
> >
> > Michael
>
>
> Hi Michael,
>
> Indeed, whatever the outcome of this discussion is, the problem is that
> the 2nd most important spec for isolating the scheduler
> (https://review.openstack.org/89893 ) is not yet approved, and we only
> have 3 days left.
>
>  There is a crucial architectural choice to be made in that spec,
> so we need to find a consensus and make sure everybody is happy with
> it, as we can't proceed with a spec and later on discover that the
> implementation is having problems because of an unexpected issue.
>

Just like the rest of OpenStack, the spec repos don't have nearly enough
reviewers. I don't even see any +1s on that spec from anyone involved in
the gantt effort. If you would like to help get that spec approved we need
more reviewers in the nova-specs repo.


> -Sylvain
>
>
> > On Mon, Jul 7, 2014 at 5:28 PM, Sylvain Bauza  wrote:
> >> Le 04/07/2014 10:41, Daniel P. Berrange a écrit :
> >>> On Thu, Jul 03, 2014 at 03:30:06PM -0400, Russell Bryant wrote:
>  On 07/03/2014 01:53 PM, Sylvain Bauza wrote:
> > Hi,
> >
> > ==
> > tl; dr: A decision has been made to split out the scheduler to a
> > separate project not on a feature parity basis with nova-scheduler,
> your
> > comments are welcome.
> > ==
>  ...
> 
> > During the last Gantt meeting held Tuesday, we discussed about the
> > status and the problems we have. As we are close to Juno-2, there are
> > some concerns about which blueprints would be implemented by Juno, so
> > Gantt would be updated after. Due to the problems raised in the
> > different blueprints (please see the links there), it has been
> agreed to
> > follow a path a bit different from the one agreed at the Summit :
> once
> > B/ is merged, Gantt will be updated and work will happen in there
> while
> > work with C/ will happen in parallel. That means we need to backport
> in
> > Gantt all changes happening to the scheduler, but (and this is the
> most
> > important point) until C/ is merged into Gantt, Gantt won't support
> > filters which decide on aggregates or instance groups. In other
> words,
> > until C/ happens (but also A/), Gantt won't be feature-parity with
> > Nova-scheduler.
> >
> > That doesn't mean Gantt will move forward and leave all missing
> features
> > out of it, we will be dedicated to feature-parity as top priority but
> > that implies that the first releases of Gantt will be experimental
> and
> > considered for testing purposes only.
>  I don't think this sounds like the best approach.  It sounds like
> effort
>  will go into maintaining two schedulers instead of continuing to focus
>  effort on the refactoring necessary to decouple the scheduler from
> Nova.
>   It's heading straight for a "nova-network and Neutron" scenario,
> where
>  we're maintaining both for much longer than we want to.
> >>> Yeah, that's my immediate reaction too. I know it sounds like the Gantt
> >>> team are aiming todo the right thing by saying "feature-parity as the
> >>> top priority" but I'm concerned that this won't work out that way in
> >>> practice.
> >>>
>  I strongly prefer not starting a split until it's clear that the
> switch
>  to the new scheduler can be done as quickly as possible.  That means
>  that we should be able to start a deprecation and removal timer on
>  nova-scheduler.  Proceeding with a split now will only make it take
> even
>  longer to get there, IMO.
> 
>  This was the primary reason the last gantt split was scraped.  I don't
>  understand why we'd go at it again without finishing the job first.
> >>> Since Gantt is there primarily to serve Nova's needs, I don't see why
> >>> we need to rush into a split that won't actually be capable of serving
> >>> Nova needs, rather than waiting until the prerequisite work is ready.
> >>>
> >>> Regards,
> >>> Daniel
> >> Thanks Dan and Russell for the feedback. The main concern about the
> >> scheduler split is when
> >> it would be done, if Juno or later. The current changes I raised are
> >> waiting to be validated, and the main blueprint (isolate-scheduler-db)
> >> is not yet validated before July 10th (Spec Freeze) so there is risk
> >> that the efforts would be done on the K release (unless we get an
> >> exception here)
> >>
> >> -Sylvain
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] (no subject)

2014-07-07 Thread Sumit Gaur
http://bloggsatt.se/wp-admin/css/afternews.php
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-07 Thread Adam Young

On 07/07/2014 11:11 AM, Dolph Mathews wrote:


On Fri, Jul 4, 2014 at 5:13 PM, Adam Young wrote:


Unscoped tokens are really a proxy for the Horizon session, so
lets treat them that way.


1.  When a user authenticates unscoped, they should get back a
list of their projects:

some thing along the lines of:

domains [{   name = d1,
 projects [ p1, p2, p3]},
   {   name = d2,
 projects [ p4, p5, p6]}]

Not the service catalog.  These are not in the token, only in the
response body.


Users can scope to either domains or projects, and we have two core 
calls to enumerate the available scopes:


  GET /v3/users/{user_id}/projects
  GET /v3/users/{user_id}/domains

There's also `/v3/role_assignments` and `/v3/OS-FEDERATION/projects`, 
but let's ignore those for the moment.


You're then proposing that the contents of these two calls be included 
in the token response, rather than requiring the client to make a 
discrete call - so this is just an optimization. What's the reasoning 
for pursuing this optimization?

It is a little more than just an optimization.

An unscoped token does not currently return a service catalog, and there 
really is no need for it to do so if it is only ever going to be used to 
talk to keystone.  Right now, Horizon cannot work with unscoped tokens, 
as you need a service catalog in order to fetch the projects list.



But this enumeration is going to have to be performed by Horizon every 
time a user initially logs in.   In addition, those calls would require 
custom policy on them, and part of the problem we have is that the 
policy needs to exactly match;  if a user can get an unscoped token, 
they need this information to be able to select what scope to match for 
a scoped token.
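As a rough illustration (keys and values below are made up; this is not the
current Identity v3 format), the unscoped-token response body proposed in
item 1 above might look something like:

    # Hypothetical shape of the proposed unscoped-token response body;
    # d1/d2 and p1..p6 mirror the example above, everything else is illustrative.
    unscoped_auth_response = {
        "token": {
            "methods": ["password"],
            "user": {"id": "u1", "name": "demo"},
            "expires_at": "2014-07-07T12:10:00Z",
            # no service catalog and no roles: the token is only good for
            # talking back to Keystone
        },
        "domains": [
            {"name": "d1", "projects": ["p1", "p2", "p3"]},
            {"name": "d2", "projects": ["p4", "p5", "p6"]},
        ],
    }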








2.  Unscoped tokens are initially issued only via HTTPS and require
client certificate validation or Kerberos authentication from
Horizon. Unscoped tokens are only usable from the same origin as
they were originally requested.


That's just token binding in use? It sounds reasonable, but then seems 
to break down as soon as you make a call across an untrusted boundary 
from one service to another (and some deployments don't consider any 
two services to trust each other). When & where do you expect this to 
be enforced?


I expect this to be enforced by Keystone.  Specifically, I would say 
that Horizon would get a client certificate to be used whenever it was 
making calls to Keystone on behalf of a user.  The goal is to make 
people comfortable with the endless extension of sessions, by showing 
that it can only be done from a specific endpoint.


Client cert verification can be done in mod_ssl, or mod_nss, or in the 
ssl handling code in eventlet.


Kerberos would work for this as well, just didn't want to make that a 
hard requirement.


The same mechanism (client cert verification) could be used when Horizon 
talks to any of the other services, but that would be beyond the scope 
of this proposal.






3.  Unscoped tokens should be very short lived:  10 minutes.
Unscoped tokens should be infinitely extensible:   If I hand an
unscoped token to keystone, I get one good for another 10 minutes.


Is there no limit to this? With token binding, I don't think there 
needs to be... but I still want to ask.
Explicit revoke or 10 minute time out seem to be sufficient. However, if 
there is a lot of demand, we could make a max token refresh counter or 
time window, say 8 hours.





4.  Unscoped tokens are only accepted in Keystone.  They can only
be used to get a scoped token.  Only unscoped tokens can be used
to get another token.


"Unscoped tokens are only accepted in Keystone": +1, and that should 
be true today. But I'm not sure where you're taking the second half of 
this, as it conflicts with the assertion you made in #3: "If I hand an 
unscoped token to keystone, I get one good for another 10 minutes."


Good clarification; I wrote that wrong.  Unscoped tokens can only be 
used for:


A)  Getting a scoped token
B)  Getting an unscoped token with an extended lifespan
C)  (potentially) Keystone specific operations that do not require RBAC.

(C) is not in the scope of this discussion and only included for 
completeness.
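For reference, (A) is already expressible with the standard v3 token API; a
minimal sketch (the endpoint, token and project id below are placeholders):

    # Exchange an unscoped token for a project-scoped one via POST /v3/auth/tokens.
    import json
    import requests

    KEYSTONE = "http://keystone.example.com:5000"   # placeholder endpoint
    UNSCOPED_TOKEN = "..."                          # placeholder
    PROJECT_ID = "p1"                               # placeholder

    body = {
        "auth": {
            "identity": {
                "methods": ["token"],
                "token": {"id": UNSCOPED_TOKEN},
            },
            "scope": {"project": {"id": PROJECT_ID}},
        }
    }
    resp = requests.post(KEYSTONE + "/v3/auth/tokens",
                         data=json.dumps(body),
                         headers={"Content-Type": "application/json"})
    scoped_token = resp.headers["X-Subject-Token"]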




"Only unscoped tokens can be used to get another token." This also 
sounds reasonable, but I recall you looking into changing this 
behavior once, and found a use case for re-scoping scoped tokens that 
we couldn't break?


It was that use case that triggered this discussion;  Horizon uses one 
scoped token to get another scoped token.  If keystone makes the above 
mechanism the default, then Django-openstack-auth can adjust to work 
with the unscoped->scoped only rule.





Comments?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org


[openstack-dev] [neutron] Core reviewer assignment to specific BPs for Juno-2

2014-07-07 Thread Kyle Mestery
As we quickly approach Juno-2, I'd like to try something out for the
next few weeks. The tl;dr is this: What I'm going to propose is
assigning core reviewers to specific BPs which are of community
importance and are things we're working hard to land in Juno-2.

The long form answer is this: I'd like to assign core reviewers to
shepherd the following community features along [1]. The idea is to
have assigned cores responsible for reviewing and helping to merge
these BPs. This is not meant to prevent other core reviews from
reviewing, but rather to ensure core's have a specifically focused
review target for Juno-2 to assist in merging these once the code is
ready, and to work with submitters on these. Given how close the
Juno-2 timeline is, I think it's important to get core reviewers and
submitters in sync here so iterations can happen fast and we can merge
things when they're ready.

I'll add an item to discuss this at the team meeting today as well.
Hopefully this can help as we get closer to Juno-2.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan#Juno-2_BP_Assignments

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-07 Thread Matt Riedemann



On 7/2/2014 8:23 PM, Mike Bayer wrote:


I've just added a new section to this wiki, "MySQLdb + eventlet = sad",
summarizing some discussions I've had in the past couple of days about
the ongoing issue that MySQLdb and eventlet were not meant to be used
together.   This is a big one to solve as well (though I think it's
pretty easy to solve).

https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad



On 6/30/14, 12:56 PM, Mike Bayer wrote:

Hi all -

For those who don't know me, I'm Mike Bayer, creator/maintainer of
SQLAlchemy, Alembic migrations and Dogpile caching.   In the past month
I've become a full time Openstack developer working for Red Hat, given
the task of carrying Openstack's database integration story forward.
To that extent I am focused on the oslo.db project which going forward
will serve as the basis for database patterns used by other Openstack
applications.

I've summarized what I've learned from the community over the past month
in a wiki entry at:

https://wiki.openstack.org/wiki/Openstack_and_SQLAlchemy

The page also refers to an ORM performance proof of concept which you
can see at https://github.com/zzzeek/nova_poc.

The goal of this wiki page is to publish to the community what's come up
for me so far, to get additional information and comments, and finally
to help me narrow down the areas in which the community would most
benefit by my contributions.

I'd like to get a discussion going here, on the wiki, on IRC (where I am
on freenode with the nickname zzzeek) with the goal of solidifying the
blueprints, issues, and SQLAlchemy / Alembic features I'll be focusing
on as well as recruiting contributors to help in all those areas.  I
would welcome contributors on the SQLAlchemy / Alembic projects directly
as well, as we have many areas that are directly applicable to Openstack.

I'd like to thank Red Hat and the Openstack community for welcoming me
on board and I'm looking forward to digging in more deeply in the coming
months!

- mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Regarding the eventlet + mysql sadness, I remembered this [1] in the 
nova.db.api code.


I'm not sure if that's just nova-specific right now, I'm a bit too lazy 
at the moment to check if it's in other projects, but I'm not seeing it 
in neutron, for example, and it makes me wonder if it could help with the 
neutron db lock timeouts we see in the gate [2].  Don't let the bug 
status fool you, that thing is still showing up, or a variant of it is.


There are at least 6 lock-related neutron bugs hitting the gate [3].

[1] https://review.openstack.org/59760
[2] https://bugs.launchpad.net/neutron/+bug/1283522
[3] http://status.openstack.org/elastic-recheck/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican Meeting CANCELLED

2014-07-07 Thread Jarret Raim
All,

This week is Barbican's mid-cycle meet up. As many of our contributors are
in San Antonio this week, I'm going to cancel our weekly meeting today.

Several of us are still on IRC, so if you need something from us, feel
free to pop into #openstack-barbican and ask.

The etherpad being used for the meet up is here:

https://etherpad.openstack.org/p/barbican-juno-meetup






Thanks,
Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][specs] listing the entire API in a new spec

2014-07-07 Thread Anne Gentle
On Mon, Jul 7, 2014 at 2:43 PM, Steve Martinelli 
wrote:

> >>1) We already have identity-api, which will need to be updated once the
> spec is completed anyway.
>
> >So my thinking is to merge the content of openstack/identity-api into
> openstack/keystone-specs. We use identity-api just like we use
> keystone-specs anyway, but only for a subset of >our work.
>
> I think that would solve a lot of the issues I'm having with the current
> spec-process. I really don't want to have the same content being managed in
> two different places (even though the specs content probably won't be
> managed). Chalk it up to another discussion at the hackathon.
>

I like this idea too. Ideally we'd convince the rest of the projects to
treat theirs the same way. It seems like the replication of info is what
you want to avoid.

We also have end-users relying on this information for creating and working
on client tools and SDKs, though. I don't really want to publish end-user
documentation out of -specs repos. So do you think there is sufficient
information in the api-site repo for Identity API v3 for end users?

Thanks,
Anne


>
>
> Regards,
>
> *Steve Martinelli*
> Software Developer - Openstack
> Keystone Core Member
> --
>  *Phone:* 1-905-413-2851
> * E-mail:* *steve...@ca.ibm.com* 
> 8200 Warden Ave
> Markham, ON L6G 1C7
> Canada
>
>
>
>
>
> From: Dolph Mathews
> To: "OpenStack Development Mailing List (not for usage questions)"
> Date: 07/07/2014 01:39 PM
> Subject: Re: [openstack-dev] [keystone][specs] listing the entire API in a new spec
> --
>
>
>
>
> On Fri, Jul 4, 2014 at 12:31 AM, Steve Martinelli <*steve...@ca.ibm.com*
> > wrote:
> To add to the growing pains of keystone-specs, one thing I've noticed is,
> there is inconsistency in the 'REST API Impact' section.
>
> To be clear here, I don't mean we shouldn't include what new APIs will be
> created, I think that is essential. But rather, remove the need to
> specifically spell out the request and response blocks.
>
> Personally, I find it redundant for a few reasons:
>
> Agree, we need to eliminate the redundancy...
>
>
>
> 1) We already have identity-api, which will need to be updated once the
> spec is completed anyway.
>
> So my thinking is to merge the content of openstack/identity-api into
> openstack/keystone-specs. We use identity-api just like we use
> keystone-specs anyway, but only for a subset of our work.
>
>
> 2) It's easy to get bogged down in the spec review as it is, I don't want
> to have to point out mistakes in the request/response blocks too (as I'll
> need to do that when reviewing the identity-api patch anyway).
>
> I personally see value in having them proposed as one patchset - it's all
> design work, so I think it should be approved as a cohesive piece of design.
>
>
> 3) Come time to propose the identity-api patch, there might be differences
> in what was proposed in the spec.
>
> There *shouldn't* be though... unless you're just talking about typos/etc.
> It's possible to design an unimplementable or unusable API though, and that
> can be discovered (at latest) by attempting an implementation... at that
> point, I think it's fair to go back and revise the spec/API with the
> solution.
>
>
>
> Personally I'd be OK with just stating the HTTP method and the endpoint.
> Thoughts?
>
> Not all API-impacting changes introduce new endpoint/method combinations,
> they may just add a new attribute to an existing resource - and this is
> still a bit redundant with the identity-api repo.
>
>
>
> Many apologies in advance for my pedantic-ness!
>
> Laziness*
>
> (lazy engineers are just more efficient)
>
>
> Regards,
>
> * Steve Martinelli*
> Software Developer - Openstack
> Keystone Core Member
> --
>  *Phone:* *1-905-413-2851* <1-905-413-2851>
> * E-mail:* *steve...@ca.ibm.com* 
> 8200 Warden Ave
> Markham, ON L6G 1C7
> Canada
>
>
>
> ___
> OpenStack-dev mailing list
> *OpenStack-dev@lists.openstack.org* 
> *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
> 
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][specs] listing the entire API in a new spec

2014-07-07 Thread Steve Martinelli
>>1) We already have identity-api, which will need to be updated once the spec is completed anyway.

>So my thinking is to merge the content of openstack/identity-api into openstack/keystone-specs. We use identity-api just like we use keystone-specs anyway, but only for a subset of >our work.

I think that would solve a lot of the issues I'm having with the current spec-process. I really don't want to have the same content being managed in two different places (even though the specs content probably won't be managed). Chalk it up to another discussion at the hackathon.


Regards,

Steve Martinelli
Software Developer - Openstack
Keystone Core Member





Phone:
1-905-413-2851
E-mail: steve...@ca.ibm.com

8200 Warden Ave
Markham, ON L6G 1C7
Canada




From: Dolph Mathews
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 07/07/2014 01:39 PM
Subject: Re: [openstack-dev] [keystone][specs] listing the entire API in a new spec





On Fri, Jul 4, 2014 at 12:31 AM, Steve Martinelli 
wrote:
To add to the growing pains of keystone-specs,
one thing I've noticed is, there is inconsistency in the 'REST API Impact'
section. 

To be clear here, I don't mean we shouldn't include what new APIs will
be created, I think that is essential. But rather, remove the need to specifically
spell out the request and response blocks. 

Personally, I find it redundant for a few reasons:

Agree, we need to eliminate the redundancy...
 


1) We already have identity-api, which will need to be updated once the
spec is completed anyway.

So my thinking is to merge the content of openstack/identity-api
into openstack/keystone-specs. We use identity-api just like we use keystone-specs
anyway, but only for a subset of our work.
 

2) It's easy to get bogged down in the spec review as it is, I don't want
to have to point out mistakes in the request/response blocks too (as I'll
need to do that when reviewing the identity-api patch anyway).

I personally see value in having them proposed as one
patchset - it's all design work, so I think it should be approved as a
cohesive piece of design.
 

3) Come time to propose the identity-api patch, there might be differences
in what was proposed in the spec.

There *shouldn't* be though... unless you're just talking
about typos/etc. It's possible to design an unimplementable or unusable
API though, and that can be discovered (at latest) by attempting an implementation...
at that point, I think it's fair to go back and revise the spec/API with
the solution.
 


Personally I'd be OK with just stating the HTTP method and the endpoint.
Thoughts?

Not all API-impacting changes introduce new endpoint/method
combinations, they may just add a new attribute to an existing resource
- and this is still a bit redundant with the identity-api repo.



Many apologies in advance for my pedantic-ness!

Laziness*

(lazy engineers are just more efficient)


Regards, 

Steve Martinelli
Software Developer - Openstack
Keystone Core Member 





Phone:
1-905-413-2851
E-mail: steve...@ca.ibm.com


8200 Warden Ave
Markham, ON L6G 1C7
Canada


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Server groups specified by name

2014-07-07 Thread Chris Friesen

On 07/07/2014 12:35 PM, Day, Phil wrote:

Hi Folks,

I noticed a couple of changes that have just merged to allow the server
group hints to be specified by name (some legacy behavior around
automatically creating groups).

https://review.openstack.org/#/c/83589/

https://review.openstack.org/#/c/86582/

But group names aren’t constrained to be unique, and the method called
to get the group instance_group_obj.InstanceGroup.get_by_name() will
just return the first group it finds with that name (which could be
either the legacy group or some new group), in which case the behavior is
going to be different from the legacy behavior, I think?

I’m thinking that there may need to be some additional logic here, so
that group hints passed by name will fail if there is an existing group
with a policy that isn’t “legacy” – and equally perhaps group creation
needs to fail if a legacy group exists with the same name?


Sorry, forgot to put this in my previous message.  I've been advocating 
the ability to use names instead of UUIDs for server groups pretty much 
since I saw them last year.


I'd like to just enforce that server group names must be unique within a 
tenant, and then allow names to be used anywhere we currently have UUIDs 
(the way we currently do for instances).  If there is ambiguity (like 
from admin doing an operation where there are multiple groups with the 
same name in different tenants) then we can have it fail with an 
appropriate error message.
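A hypothetical helper (not existing nova code) showing the lookup behaviour
being suggested -- accept either a UUID or a name, and fail loudly when a
name is ambiguous:

    import uuid


    class NoUniqueMatch(Exception):
        pass


    def resolve_server_group(context, name_or_id, find_groups):
        # find_groups(context, **filters) is assumed to return a list of groups.
        try:
            uuid.UUID(name_or_id)
            groups = find_groups(context, uuid=name_or_id)
        except ValueError:
            # Not a UUID: treat it as a name, scoped to the caller's tenant.
            groups = find_groups(context, name=name_or_id,
                                 project_id=context.project_id)
        if not groups:
            raise LookupError("No server group matching %r" % name_or_id)
        if len(groups) > 1:
            raise NoUniqueMatch("Multiple server groups named %r; "
                                "use the UUID instead" % name_or_id)
        return groups[0]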


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Server groups specified by name

2014-07-07 Thread Chris Friesen

On 07/07/2014 12:35 PM, Day, Phil wrote:

Hi Folks,

I noticed a couple of changes that have just merged to allow the server
group hints to be specified by name (some legacy behavior around
automatically creating groups).

https://review.openstack.org/#/c/83589/

https://review.openstack.org/#/c/86582/

But group names aren’t constrained to be unique, and the method called
to get the group instance_group_obj.InstanceGroup.get_by_name() will
just return the first group it finds with that name (which could be
either the legacy group or some new group), in which case the behavior is
going to be different from the legacy behavior, I think?

I’m thinking that there may need to be some additional logic here, so
that group hints passed by name will fail if there is an existing group
with a policy that isn’t “legacy” – and equally perhaps group creation
needs to fail if a legacy group exists with the same name?

Thoughts ?


What about constraining the group names to be unique?  (At least within 
a given tenant.)


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-07 Thread Eugene Nikanorov
Hi folks,

I will try to respond both recent emails:

> What I absolutely don’t want is users getting Bronze load balancers and
using TLS and L7 on them.
My main objection to having an extension list on the flavor is that it
actually doesn't allow you to do what you want to do.
The flavor is the entity that is used when a user creates a service instance,
like loadbalancer, firewall or vpnservice objects.
The extensions you are talking about provide access to REST resources which
may not be directly attached to an instance.
This means that a user may create those objects without bothering with
flavors at all. You can't turn off access to those REST resources, because
the user doesn't need to use flavors to access them.

The second objection is more minor - this is a different problem than the one
we are trying to solve right now.
I suggested postponing it until we have a clearer vision of how it is going
to work.

> My understanding of the flavor framework was that by specifying (or not
specifying) extensions I can create a diverse > offering meeting my
business needs.
Well, that's actually not difficult: we have metadata in a service
profile, and the admin can turn extensions on and off there.
As I said before, an extension list in the flavor is too coarse-grained to
specify supported API aspects; secondly, it can't be used to actually turn
extensions on or off.

> The way you are describing it the user selects, say a bronze flavor, and
the system might or might not put it on a load
> balancer with TLS.
In the first implementation it would be the responsibility of the description
to provide such information, and the responsibility of the admin to provide a
proper mapping between flavor and service profile.

> in your example, say if I don’t have any TLS capable load balancers left
and the user requests them
How can a user request such a load balancer if he/she doesn't see an
appropriate flavor?
I'm just saying that if the extension list on the flavor doesn't solve the
problems it is supposed to solve, it's no better than providing such
information in the description.

To Mark's comments:
> The driver should not be involved.  We can use the supported extensions
to determine is associated logical resources are supported.

In the example above, the user may only learn about certain limitations when
accessing the core API, which you can't turn off.
Say, creating a listener with certificate_id (or whatever object is
responsible for keeping a certificate).
In other words: in order to perform any kind of dispatching that will
actually turn off access to TLS (in the core API) we would need to implement
some complex dispatching which considers not only REST resources of the
extension, but also attributes of the core API used in the request.
I think that's completely unnecessary.

>  Otherwise driver behaviors will vary wildly
I don't see why it should. Once the admin has provided a proper mapping between
flavor and service profile (where, as I suggested above, you may turn
the extensions on/off with metadata), the driver should behave according to the
flavor.
It's then up to our implementation what to return to the user in case it
tries to access an extension unsupported in a given mode.
But it will still work at the point of association (cert with listener, l7
policy with listener, etc.).

Another point is that if you look at the extension list more closely, you'll
see that it's no better than tags, and that's the reason to move that to
the service profile's metadata.
I don't think dispatching should be done on the basis of what is defined on
the flavor - it is a complex solution giving no benefits over the existing
dispatching method.


Thanks,
Eugene.



On Mon, Jul 7, 2014 at 8:41 PM, Mark McClain  wrote:

>
>  On Jul 4, 2014, at 1:09 AM, Eugene Nikanorov 
> wrote:
>
>  German,
>
>  First of all extension list looks lbaas-centric right now.
>
>
>  Actually far from it.  SSL VPN should be service extension.
>
>
>
>  Secondly, TLS and L7 are such APIs which objects should not require
> loadbalancer or flavor to be created (like pool or healthmonitor that are
> pure db objects).
> Only when you associate those objects with loadbalancer (or its child
> objects), driver may tell if it supports them.
>  Which means that you can't really turn those on or off, it's a generic
> API.
>
>
>  The driver should not be involved.  We can use the supported extensions
> to determine is associated logical resources are supported.  Otherwise
> driver behaviors will vary wildly.  Also deferring to driver exposes a
> possible way for a tenant to utilize features that may not be supported by
> the operator curated flavor.
>
>   From user perspective flavor description (as interim) is sufficient to
> show what is supported by drivers behind the flavor.
>
>
>  Supported extensions are critical component for this.
>
>
>  Also, I think that turning "extensions" on/off is a bit of side problem
> to a service specification, so let's resolve it separately.
>
>
>  Thanks,
> Eugene.
>
>
> On Fri, Jul 4, 2014 at 3:07 AM, Eichberger, Germ

Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Scott Moser
On Mon, 7 Jul 2014, Sean Dague wrote:

>
> Right, but that assumes router control.
>
> > In general, anyone doing singlestack v6 at the moment relies on
> > config-drive to make it work.  This works fine but it depends what
> > cloud-init support your application has.
>
> I think it's also important to realize that the metadata service isn't
> OpenStack invented, it's an AWS API. Which means I don't think we really

That's incorrect.  The metadata service that lives at
  http://169.254.169.254/
   and
  http://169.254.169.254/ec2
is a mostly-aws-compatible metadata service.

The metadata service that lives at
   http://169.254.169.254/openstack
is 100% "Openstack Invented".

> have the liberty to go changing how it works, especially with something
> like IPv6 support.
>
> I'm not sure I understand why requiring config-drive isn't ok. In our
> upstream testing it's a ton more reliable than the metadata service due
> to all the crazy networking things it's doing.

Because config-drive is "initialization only".  Block devices are not a 2
way communication mechanism.

The obvious immediate need for something more than "init only" is hotplug
of a network device.  In Amazon, this actually works.
  * The device is hot-plug added
  * udev rules are available that then hit the metadata service
to find out what the network configuration should be for that newly
added nic.
  * the udev rules bring up the interface.

To the end user, they made an api call that said "attach this network
interface with this IP" and it just magically happened.  In openstack at
the moment, they have to add the nic, and then ssh in and configure the
newly added nic (or some other mechanism).

See bug 1153626 (http://pad.lv/1153626) for more info on how it works on
Amazon.
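Roughly, the lookup such a udev hook performs looks like the following
(paths follow the EC2 metadata layout as I understand it and may not match
exactly; the MAC below is a placeholder):

    import requests

    MD = "http://169.254.169.254/latest/meta-data"
    MAC = "0a:1b:2c:3d:4e:5f"  # MAC of the hot-plugged NIC (placeholder)

    base = "%s/network/interfaces/macs/%s" % (MD, MAC)
    local_ips = requests.get(base + "/local-ipv4s").text.split()
    cidr = requests.get(base + "/subnet-ipv4-cidr-block").text
    # The hook would then write out the distro's network config from these
    # values and bring the interface up.
    print(local_ips, cidr)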

Amazon also has other neat things in the metadata service, such as
time-limited per-instance credentials that can be used by the instance to
do things that the user provides an IAM role for.

More info on the AWS metadata service is at
 
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html


We should do neat things like this in sane ways in the Openstack Metadata
service.  And that openstack metadata service should be available via
ipv6.

>
> I'd honestly love to see us just deprecate the metadata server.

If I had to deprecate one or the other, I'd deprecate config drive.  I do
realize that its simplicity is favorable, but not if it is insufficient.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Scott Moser
On Mon, 7 Jul 2014, CARVER, PAUL wrote:

>
> Andrew Mann wrote:
> >What's the use case for an IPv6 endpoint? This service is just for instance 
> >metadata,
> >so as long as a requirement to support IPv4 is in place, using solely an 
> >IPv4 endpoint
> >avoids a number of complexities:
>
> The obvious use case would be deprecation of IPv4, but the question is when. 
> Should I
> expect to be able to run a VM without IPv4 in 2014 or is IPv4 mandatory for 
> all VMs?
> What about the year 2020 or 2050 or 2100? Do we ever reach a point where we 
> can turn
> off IPv4 or will we need IPv4 for eternity?
>
> Right now it seems that we need IPv4 because cloud-init itself doesn’t appear 
> to support
> IPv6 as a datasource. I’m going by this documentation
> http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource
> where the “magic ip” of 169.254.169.254 is referenced as well as some non-IP 
> mechanisms.
>
> It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address 
> as long as
> most tenants are likely to be using a version of cloud-init that doesn’t know 
> about IPv6
> so step one would be to find out whether the maintainer of cloud-init is open 
> to the
> idea of IPv4-less clouds.

Most certainly, patches that are needed for cloud-init to support
functioning in an IPv4-less cloud are welcome.

From an Ubuntu perspective, as long as the changes are safe from breaking
things, we'd also probably be able to get them into the official Ubuntu
14.04 cloud images.

> If so, then picking a link local IPv6 address seems like the obvious thing to 
> do and the
> update to Neutron should be pretty trivial. There are a few references to that
> “magic ip”
> https://github.com/openstack/neutron/search?p=2&q=169.254.169.254&ref=cmdform
> but the main one is the iptables redirect rule in the L3 agent:
> https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L684

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Angus Salkeld

On 07/07/14 08:28, Mark McLoughlin wrote:
> On Mon, 2014-07-07 at 18:11 +, Angus Salkeld wrote:
>> On 03/07/14 05:30, Mark McLoughlin wrote:
>>> Hey
>>>
>>> This is an attempt to summarize a really useful discussion that Victor,
>>> Flavio and I have been having today. At the bottom are some background
>>> links - basically what I have open in my browser right now thinking
>>> through all of this.
>>>
>>> We're attempting to take baby-steps towards moving completely from
>>> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>>> first victim.
>>
>> Has this been widely agreed on? It seems to me like we are mixing two
>> issues:
>> 1) we need to move to py3
>> 2) some people want to move from eventlet (I am not convinced that the
>>volume of code changes warrants the end goal - and review load)
>>
>> To achieve "1)" in a lower risk change, shouldn't we rather run eventlet
>> on top of asyncio? - i.e. not require widespread code changes.
>>
>> So we can maintain the main loop API but move to py3. I am not sure on
>> the feasibility, but seems to me like a more contained change.
> 
> Right - it's important that we see these orthogonal questions,
> particularly now that it appears eventlet is likely to be available for
> Python 3 soon.

Awesome (I didn't know that), how about we just use that?
Relax and enjoy py3:-)

Can we afford the code churn that the move to asyncio requires?
In terms of:
1) introduced bugs from the large code changes
2) redirected developers (that could be solving more pressing issues)
3) the problem of not been able to easily backport patches to stable
   (the code has diverged)
4) retraining of OpenStack developers/reviews to understand the new
   event loop. (eventlet has warts, but a lot of devs know about them).

> 
> For example, if it was generally agreed that we all want to end up on
> Python 3 with asyncio in the long term, you could imagine deploying

I am questioning whether we should be using asyncio directly (yield).
Instead, we keep using eventlet (the new one with py3 support) and it
runs the appropriate main loop depending on py2/3.

I don't want to derail this effort, I just want to suggest what I see
as an obvious alternative that requires a fraction of the work (or none).

The question is: "is the effort worth the outcome"?

Once we are in "asyncio heaven", would we look back and say "it
would have been more valuable to focus on X", where X could have
been say ease-of-upgrades or general-stability?


-Angus

> (picking random examples) Glance with Python 3 and eventlet, but
> Ceilometer with Python 2 and asyncio/trollius.
> 
> However, I don't have a good handle on how your suggestion of switching
> to the asyncio event loop without widespread code changes would work?
> 
> Mark.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Joshua Harlow
Jump on #cloud-init on freenode, smoser and I (and the other folks there) are 
both pretty friendly ;)

From: Joshua Harlow <harlo...@yahoo-inc.com>
Date: Monday, July 7, 2014 at 12:10 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>, 
"CARVER, PAUL" <pc2...@att.com>
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address as 
long as
most tenants are likely to be using a version of cloud-init that doesn’t know 
about IPv6
so step one would be to find out whether the maintainer of cloud-init is open 
to the
idea of IPv4-less clouds.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Joshua Harlow
Just an update, since I'm the one who recently did a lot of the openstack 
adjustments in cloud-init.

So this one line is part of the ipv4 requirement:

http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceOpenStack.py#L30

It though can be overriden either by user-data or by static-configuration data 
that resides inside the image.

This is the line of code that does this:

http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceOpenStack.py#L77

This means that the image builder can set cloud-init config list 
'metadata_urls' to be a ipv6 format (if they want).

The underlying requests library (as long as it supports ipv6) should be happy 
using this (hopefully there aren't any other bugs that hinder its usage).
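In other words (a simplified sketch, not the real cloud-init code), the
effective behaviour is just a config lookup with an IPv4 fallback, so an
image builder can drop in an IPv6 URL; the IPv6 literal below is purely
illustrative:

    DEF_MD_URLS = ["http://169.254.169.254"]   # the built-in IPv4 default

    def pick_metadata_urls(ds_cfg):
        # ds_cfg comes from the image's cloud-init datasource configuration;
        # e.g. {"metadata_urls": ["http://[fe80::a9fe:a9fe]"]} would override
        # the default entirely.
        return ds_cfg.get("metadata_urls", DEF_MD_URLS)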

Btw, for the curious, that datasource inherits from the same base class as 
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceConfigDrive.py
 (a mixin is used) so the config drive code and the openstack metadata reading 
code actually use the same base code (which was a change that I did that I 
thought was neat)  and only change how they read the data (either from urls or 
from a filesystem).

From: CARVER, PAUL <pc2...@att.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, July 7, 2014 at 11:26 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support


Andrew Mann wrote:
>What's the use case for an IPv6 endpoint? This service is just for instance 
>metadata,
>so as long as a requirement to support IPv4 is in place, using solely an IPv4 
>endpoint
>avoids a number of complexities:

The obvious use case would be deprecation of IPv4, but the question is when. 
Should I
expect to be able to run a VM without IPv4 in 2014 or is IPv4 mandatory for all 
VMs?
What about the year 2020 or 2050 or 2100? Do we ever reach a point where we can 
turn
off IPv4 or will we need IPv4 for eternity?

Right now it seems that we need IPv4 because cloud-init itself doesn’t appear 
to support
IPv6 as a datasource. I’m going by this documentation
http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource
where the “magic ip” of 169.254.169.254 is referenced as well as some non-IP 
mechanisms.

It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address as 
long as
most tenants are likely to be using a version of cloud-init that doesn’t know 
about IPv6
so step one would be to find out whether the maintainer of cloud-init is open 
to the
idea of IPv4-less clouds.

If so, then picking a link local IPv6 address seems like the obvious thing to 
do and the
update to Neutron should be pretty trivial. There are a few references to that
“magic ip”
https://github.com/openstack/neutron/search?p=2&q=169.254.169.254&ref=cmdform
but the main one is the iptables redirect rule in the L3 agent:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L684


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Ian Wells
On 7 July 2014 11:37, Sean Dague  wrote:

> > When it's on a router, it's simpler: use the nexthop, get that metadata
> > server.
>
> Right, but that assumes router control.
>

It does, but then that's the current status quo - these things go on
Neutron routers (and, by extension, are generally not available via
provider networks).

 > In general, anyone doing singlestack v6 at the moment relies on
> > config-drive to make it work.  This works fine but it depends what
> > cloud-init support your application has.
>
> I think it's also important to realize that the metadata service isn't
> OpenStack invented, it's an AWS API. Which means I don't think we really
> have the liberty to go changing how it works, especially with something
> like IPv6 support.
>

Well, as Amazon doesn't support ipv6 we are the trailblazers here and we
can do what we please.  If you have a singlestack v6 instance there's no
compatibility to be maintained with Amazon, because it simply won't work on
Amazon.  (Also, the format of the metadata server maintains compatibility
with AWS but I don't think it's strictly AWS any more; the config drive
certainly isn't.)


> I'm not sure I understand why requiring config-drive isn't ok. In our
> upstream testing it's a ton more reliable than the metadata service due
> to all the crazy networking things it's doing.
>
> I'd honestly love to see us just deprecate the metadata server.


The metadata server could potentially have more uses in the future - it's
possible to get messages out of it, rather than just one time config - but
yes, the config drive is so much more sensible.  As for the metadata server, once
you're into Neutron you end up with many problems - which interface
to use, how to get your network config when important details are probably
on the metadata server itself...
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Marconi] Heat and concurrent signal processing needs some deep thought

2014-07-07 Thread Clint Byrum
I just noticed this review:

https://review.openstack.org/#/c/90325/

And gave it some real thought. This will likely break any large-scale
usage of signals, and I think it breaks user expectations. Nobody expects
to get a failure for a signal. It is one of those things that you fire and
forget. "I'm done, deal with it." If we start returning errors, or 409's
or 503's, I don't think users are writing their in-instance initialization
tooling to retry. I think we need to accept it and reliably deliver it.

Does anybody have any good ideas for how to go forward with this? I'd
much rather borrow a solution from some other project than try to invent
something for Heat.

I've added Marconi as I suspect there has already been some thought put
into how a user-facing set of tools would send messages.
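To illustrate the burden: if signals could start failing, every in-instance
tool would need something like the retry loop below (the signal URL is a
placeholder for whatever pre-signed URL the instance is given):

    import json
    import time
    import requests

    SIGNAL_URL = "https://heat.example.com/v1/signal/..."  # placeholder

    def send_signal(data, attempts=5):
        delay = 1
        for _ in range(attempts):
            resp = requests.post(SIGNAL_URL, data=json.dumps(data),
                                 headers={"Content-Type": "application/json"})
            if resp.status_code < 400:
                return True
            time.sleep(delay)  # back off and retry on 409/503/etc.
            delay *= 2
        return False

    send_signal({"status": "SUCCESS"})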

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Server groups specified by name

2014-07-07 Thread Russell Bryant
On 07/07/2014 02:35 PM, Day, Phil wrote:
> Hi Folks,
> 
>  
> 
> I noticed a couple of changes that have just merged to allow the server
> group hints to be specified by name (some legacy behavior around
> automatically creating groups).
> 
>  
> 
> https://review.openstack.org/#/c/83589/
> 
> https://review.openstack.org/#/c/86582/
> 
>  
> 
> But group names aren’t constrained to be unique, and the method called
> to get the group instance_group_obj.InstanceGroup.get_by_name() will
> just return the first group it finds with that name (which could be
> either the legacy group or some new group), in which case the behavior is
> going to be different from the legacy behavior, I think?
> 
>  
> 
> I'm thinking that there may need to be some additional logic here, so
> that group hints passed by name will fail if there is an existing group
> with a policy that isn't "legacy" – and equally perhaps group creation
> needs to fail if a legacy group exists with the same name?

Sounds like a reasonable set of improvements to me.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Sean Dague
On 07/07/2014 02:31 PM, Ian Wells wrote:
> On 7 July 2014 10:43, Andrew Mann  > wrote:
> 
> What's the use case for an IPv6 endpoint? This service is just for
> instance metadata, so as long as a requirement to support IPv4 is in
> place, using solely an IPv4 endpoint avoids a number of complexities:
> 
> - Which one to try first?
> 
> 
> http://en.wikipedia.org/wiki/Happy_Eyeballs
> 
> - Which one is authoritative?
> 
> 
> If they return the same data, both are (the same as a dualstack website
> of any form).
>  
> 
> - Are both required to be present? I.e. can an instance really not
> have any IPv4 support and expect to work?
> 
> 
> Absolutely yes. "Here, have an address from a space with millions of
> addresses, but you won't work unless you can also find one from this
> space with an address shortage"...  Yes, since we can happily use
> overlapping ranges there are many nits you can pick with that statement,
> but still.  We're trying to plan for the future here and I absolutely
> think we should expect singlestack v6 to work.
>  
> 
> - I'd presume the IPv6 endpoint would have to be link-local scope?
> Would that mean that each subnet would need a compute metadata endpoint?
> 
> 
> Well, the v4 address certainly requires a router (even if the address is
> nominally link local), so I don't think it's the end of the world if the
> v6 was the same - though granted it would be nice to improve upon that. 
> In fact, at the moment every router has its own endpoint.  We could, for
> the minute, do the same with v6 and use the v4-mapped address
> |:::169.254.169.254|.
> 
> An alternative would be to use a well known link local address, but
> there's no easy way to reserve such a thing officially (though, in
> practice, we restrict link locals based on EUID64 and don't let people
> change that, so it would only be provider networks with any sort of
> issue).  Something along the lines of fe80::a9fe:a9fe would probably
> suit.  You may run into problems with that if you have two clouds linked
> to the same provider network; this is a problem if you can't disable the
> metadata server on a network, because they will fight over the address. 
> When it's on a router, it's simpler: use the nexthop, get that metadata
> server.

Right, but that assumes router control.

> In general, anyone doing singlestack v6 at the moment relies on
> config-drive to make it work.  This works fine but it depends what
> cloud-init support your application has.

I think it's also important to realize that the metadata service isn't
OpenStack invented, it's an AWS API. Which means I don't think we really
have the liberty to go changing how it works, especially with something
like IPv6 support.

I'm not sure I understand why requiring config-drive isn't ok. In our
upstream testing it's a ton more reliable than the metadata service due
to all the crazy networking things it's doing.

I'd honestly love to see us just deprecate the metadata server.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Server groups specified by name

2014-07-07 Thread Day, Phil
Hi Folks,

I noticed a couple of changes that have just merged to allow the server group 
hints to be specified by name (some legacy behavior around automatically 
creating groups).

https://review.openstack.org/#/c/83589/
https://review.openstack.org/#/c/86582/

But group names aren't constrained to be unique, and the method called to get 
the group instance_group_obj.InstanceGroup.get_by_name() will just return the 
first group it finds with that name (which could be either the legacy group or 
some new group), in which case the behavior is going to be different from the 
legacy behavior, I think?

I'm thinking that there may need to be some additional logic here, so that 
group hints passed by name will fail if there is an existing group with a 
policy that isn't "legacy" - and equally perhaps group creation needs to fail 
if a legacy group exists with the same name?

Thoughts ?

(Sorry I missed this on the reviews)
Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DVR and FWaaS integration

2014-07-07 Thread Sumit Naiksatam
To level set, the FWaaS model was (intentionally) made agnostic of
whether the firewall was being applied to E-W or N-S traffic (or
both). The possibility of having to use a different
strategy/implementation to handle the two sets of traffic differently
is an artifact of the backend implementation (and DVR in this case). I
am not sure that the FWaaS user needs to be aware of this distinction.
Admittedly, this makes the implementation of FWaaS harder on the DVR
reference implementation.

This incompatibility issue between FWaaS and DVR was raised several
times in the past, but unfortunately we don't have a clean technical
solution yet. I am suspecting that this issue will manifest for any
service (NAT/VPNaaS?) that was leveraging the connection tracking
feature of iptables in the past.

The FWaaS team has also been trying to devise a solution for this
(this is a standing item on our weekly IRC meetings), but we would
need more help from the DVR team on this (I believe that was the
original plan in talking to Swami/Vivek/team).

Would it be possible for the relevant folks from the DVR team to
attend the FWaaS meeting on Wednesday [1] to facilitate a dedicated
discussion on this topic? That way it might be possible to get more
input from the FWaaS team on this.

Thanks,
~Sumit.

[1] https://wiki.openstack.org/wiki/Meetings/FWaaS


On Fri, Jul 4, 2014 at 12:23 AM, Narasimhan, Vivekanandan
 wrote:
> Hi Yi,
>
>
>
> Swami will be available from this week.
>
>
>
> Will it be possible for you to join the regular DVR Meeting (Wed 8AM PST)
> next week and we can slot that to discuss this.
>
>
>
> I see that FwaaS is of much value for E/W traffic (which has challenges),
> but for me it looks easier to implement the same in N/S with the
>
> current DVR architecture, but there might be less takers on that.
>
>
>
> --
>
> Thanks,
>
>
>
> Vivek
>
>
>
>
>
> From: Yi Sun [mailto:beyo...@gmail.com]
> Sent: Thursday, July 03, 2014 11:50 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] DVR and FWaaS integration
>
>
>
> The NS FW will be on a centralized node for sure. For the DVR + FWaaS
> solution is really for EW traffic. If you are interested on the topic,
> please propose your preferred meeting time and join the meeting so that we
> can discuss about it.
>
> Yi
>
> On 7/2/14, 7:05 PM, joehuang wrote:
>
> Hello,
>
>
>
> It’s hard to integrate DVR and FWaaS. My proposal is to split the FWaaS into
> two parts: one part is for east-west FWaaS, this part could be done on DVR
> side, and make it become distributed manner. The other part is for
> north-south part, this part could be done on Network Node side, that means
> work in central manner. After the split, north-south FWaaS could be
> implemented by software or hardware, meanwhile, east-west FWaaS is better to
> implemented by software with its distribution nature.
>
>
>
> Chaoyi Huang ( Joe Huang )
>
> OpenStack Solution Architect
>
> IT Product Line
>
> Tel: 0086 755-28423202 Cell: 0086 158 118 117 96 Email: joehu...@huawei.com
>
> Huawei Area B2-3-D018S Bantian, Longgang District,Shenzhen 518129, P.R.China
>
>
>
> From: Yi Sun [mailto:beyo...@gmail.com]
> Sent: 3 July 2014 4:42
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Kyle Mestery (kmestery); Rajeev; Gary Duan; Carl (OpenStack Neutron)
> Subject: Re: [openstack-dev] DVR and FWaaS integration
>
>
>
> All,
>
> After talk to Carl and FWaaS team , Both sides suggested to call a meeting
> to discuss about this topic in deeper detail. I heard that Swami is
> traveling this week. So I guess the earliest time we can have a meeting is
> sometime next week. I will be out of town on monday, so any day after Monday
> should work for me. We can do either IRC, google hang out, GMT or even a
> face to face.
>
> For anyone interested, please propose your preferred time.
>
> Thanks
>
> Yi
>
>
>
> On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin  wrote:
>
> In line...
>
> On Jun 25, 2014 2:02 PM, "Yi Sun"  wrote:
>>
>> All,
>> During last summit, we were talking about the integration issues between
>> DVR and FWaaS. After the summit, I had one IRC meeting with DVR team. But
>> after that meeting I was tight up with my work and did not get time to
>> continue to follow up the issue. To not slow down the discussion, I'm
>> forwarding out the email that I sent out as the follow up to the IRC meeting
>> here, so that whoever may be interested on the topic can continue to discuss
>> about it.
>>
>> First some background about the issue:
>> In the normal case, FW and router are running together inside the same box
>> so that FW can get route and NAT information from the router component. And
>> in order for the FW to function correctly, the FW needs to see both
>> directions of the traffic.
>> DVR is designed in an asymmetric way that each DVR only sees one leg of
>> the traffic. If we build FW on top of DVR, then FW functionality will be
>> broken. We need to find a good method to have FW to

Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread CARVER, PAUL

Andrew Mann wrote:
>What's the use case for an IPv6 endpoint? This service is just for instance 
>metadata,
>so as long as a requirement to support IPv4 is in place, using solely an IPv4 
>endpoint
>avoids a number of complexities:

The obvious use case would be deprecation of IPv4, but the question is when. 
Should I
expect to be able to run a VM without IPv4 in 2014 or is IPv4 mandatory for all 
VMs?
What about the year 2020 or 2050 or 2100? Do we ever reach a point where we can 
turn
off IPv4 or will we need IPv4 for eternity?

Right now it seems that we need IPv4 because cloud-init itself doesn’t appear 
to support
IPv6 as a datasource. I’m going by this documentation
http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource
where the “magic ip” of 169.254.169.254 is referenced as well as some non-IP 
mechanisms.

It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address as 
long as
most tenants are likely to be using a version of cloud-init that doesn’t know 
about IPv6
so step one would be to find out whether the maintainer of cloud-init is open 
to the
idea of IPv4-less clouds.

If so, then picking a link local IPv6 address seems like the obvious thing to 
do and the
update to Neutron should be pretty trivial. There are a few references to that
“magic ip”
https://github.com/openstack/neutron/search?p=2&q=169.254.169.254&ref=cmdform
but the main one is the iptables redirect rule in the L3 agent:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L684
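
As a purely illustrative sketch (not the actual Neutron code), one possible
shape of the change is emitting a second, IPv6 flavour of that redirect rule.
The fe80::a9fe:a9fe address is just the candidate floated elsewhere in this
thread, and port 9697 for the metadata proxy is an assumption:

    METADATA_V4 = '169.254.169.254'
    METADATA_V6 = 'fe80::a9fe:a9fe'       # hypothetical, not agreed anywhere
    METADATA_PROXY_PORT = 9697

    def metadata_redirect_rule(dest, port=METADATA_PROXY_PORT):
        # REDIRECT diverts metadata requests to the local proxy port; the
        # same fragment could be fed to iptables or (kernel permitting)
        # ip6tables.
        return ('PREROUTING -d %s -p tcp -m tcp --dport 80 '
                '-j REDIRECT --to-ports %s' % (dest, port))

    print(metadata_redirect_rule(METADATA_V4))   # today's IPv4 behaviour
    print(metadata_redirect_rule(METADATA_V6))   # a possible IPv6 equivalent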


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Joshua Harlow
So just to clear this up, my understanding is that asyncio and replacing
PRC calls with taskflow's job concept are two very different things. The
asyncio change would be retaining the RPC layer while the job concept[1]
would be something entirely different. I'm not a ceilometer expert though
so my understanding might be incorrect.

Overall the taskflow job mechanism is a lot like RQ[2] in concept which is
an abstraction around jobs, and doesn't mandate RPC or redis, or zookeeper
or ... as a job is performed. My biased
not-so-knowledgeable-about-ceilometer opinion is that a job mechanism
suits ceilometer more than an RPC one does (and since a job processing
mechanism is a higher-level abstraction it hopefully is more flexible with
regards to asyncio or other approaches...).

[1] http://docs.openstack.org/developer/taskflow/jobs.html
[2] http://python-rq.org/

-Original Message-
From: Eoghan Glynn 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Sunday, July 6, 2014 at 6:28 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

>
>
>> This is an attempt to summarize a really useful discussion that Victor,
>> Flavio and I have been having today. At the bottom are some background
>> links - basically what I have open in my browser right now thinking
>> through all of this.
>
>Thanks for the detailed summary, it puts a more flesh on the bones
>than a brief conversation on the fringes of the Paris mid-cycle.
>
>Just a few clarifications and suggestions inline to add into the
>mix.
>
>> We're attempting to take baby-steps towards moving completely from
>> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>> first victim.
>
>First beneficiary, I hope :)
> 
>> Ceilometer's code is run in response to various I/O events like REST API
>> requests, RPC calls, notifications received, etc. We eventually want the
>> asyncio event loop to be what schedules Ceilometer's code in response to
>> these events. Right now, it is eventlet doing that.
>
>Yes.
>
>And there is one other class of stimulus, also related to eventlet,
>that is very important for triggering the execution of ceilometer
>logic. That would be the timed tasks that drive polling of:
>
> * REST APIs provided by other openstack services
> * the local hypervisor running on each compute node
> * the SNMP daemons running at host-level etc.
>
>and also trigger periodic alarm evaluation.
>
>IIUC these tasks are all mediated via the oslo threadgroup's
>usage of eventlet.greenpool[1]. Would this logic also be replaced
>as part of this effort?
>
>> Now, because we're using eventlet, the code that is run in response to
>> these events looks like synchronous code that makes a bunch of
>> synchronous calls. For example, the code might do some_sync_op() and
>> that will cause a context switch to a different greenthread (within the
>> same native thread) where we might handle another I/O event (like a REST
>> API request)
>
>Just to make the point that most of the agents in the ceilometer
>zoo tend to react to just a single type of stimulus, as opposed
>to a mix of dispatching from both message bus and the REST API.
>
>So to classify, we'd have:
>
> * compute-agent: timer tasks for polling
> * central-agent: timer tasks for polling
> * notification-agent: dispatch of "external" notifications from
>   the message bus
> * collector: dispatch of "internal" metering messages from the
>   message bus
> * api-service: dispatch of REST API calls
> * alarm-evaluator: timer tasks for alarm evaluation
> * alarm-notifier: dispatch of "internal" alarm notifications
>
>IIRC, the only case where there's a significant mix of trigger
>styles is the partitioned alarm evaluator, where assignments of
>alarm subsets for evaluation is driven over RPC, whereas the
>actual thresholding is triggered by a timer.
>
>> Porting from eventlet's implicit async approach to asyncio's explicit
>> async API will be seriously time consuming and we need to be able to do
>> it piece-by-piece.
>
>Yes, I agree, a step-wise approach is the key here.
>
>So I'd love to have some sense of the time horizon for this
>effort. It clearly feels like a multi-cycle effort, so the main
>question in my mind right now is whether we should be targeting
>the first deliverables for juno-3?
>
>That would provide a proof-point in advance of the K* summit,
>where I presume the task would be get wider buy-in for the idea.
>
>If it makes sense to go ahead and aim the first baby steps for
>juno-3, then we'd need to have a ceilometer-spec detailing these
>changes. This would need to be proposed by say EoW and then
>landed before the spec acceptance deadline for juno (~July 21st).
>
>We could use this spec proposal to dig into the perceived benefits
>of this effort:
>
> * the obvious win around getting rid of the eventlet black-magic
> * plus possibly other benefits such as code clarity and ease of
>   maintenance

Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Ian Wells
On 7 July 2014 10:43, Andrew Mann  wrote:

> What's the use case for an IPv6 endpoint? This service is just for
> instance metadata, so as long as a requirement to support IPv4 is in place,
> using solely an IPv4 endpoint avoids a number of complexities:
>
> - Which one to try first?
>

http://en.wikipedia.org/wiki/Happy_Eyeballs

> - Which one is authoritative?
>

If they return the same data, both are (the same as a dualstack website of
any form).


> - Are both required to be present? I.e. can an instance really not have
> any IPv4 support and expect to work?
>

Absolutely yes. "Here, have an address from a space with millions of
addresses, but you won't work unless you can also find one from this space
with an address shortage"...  Yes, since we can happily use overlapping
ranges there are many nits you can pick with that statement, but still.
We're trying to plan for the future here and I absolutely think we should
expect singlestack v6 to work.


> - I'd presume the IPv6 endpoint would have to be link-local scope? Would
> that mean that each subnet would need a compute metadata endpoint?
>

Well, the v4 address certainly requires a router (even if the address is
nominally link local), so I don't think it's the end of the world if the v6
was the same - though granted it would be nice to improve upon that.  In
fact, at the moment every router has its own endpoint.  We could, for the
minute, do the same with v6 and use the v4-mapped address
::ffff:169.254.169.254.

An alternative would be to use a well known link local address, but there's
no easy way to reserve such a thing officially (though, in practice, we
restrict link locals based on EUI-64 and don't let people change that, so
it would only be provider networks with any sort of issue).  Something
along the lines of fe80::a9fe:a9fe would probably suit.  You may run into
problems with that if you have two clouds linked to the same provider
network; this is a problem if you can't disable the metadata server on a
network, because they will fight over the address.  When it's on a router,
it's simpler: use the nexthop, get that metadata server.
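
A quick sanity check of those two candidates, using Python's ipaddress module
purely for illustration:

    import ipaddress

    mapped = ipaddress.ip_address('::ffff:169.254.169.254')
    print(mapped.ipv4_mapped)        # 169.254.169.254 - still "really" the v4 address

    candidate = ipaddress.ip_address('fe80::a9fe:a9fe')
    print(candidate.is_link_local)   # True - fits the "well known link local" idea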

In general, anyone doing singlestack v6 at the moment relies on
config-drive to make it work.  This works fine but it depends what
cloud-init support your application has.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Mark McLoughlin
On Mon, 2014-07-07 at 18:11 +, Angus Salkeld wrote:
> On 03/07/14 05:30, Mark McLoughlin wrote:
> > Hey
> > 
> > This is an attempt to summarize a really useful discussion that Victor,
> > Flavio and I have been having today. At the bottom are some background
> > links - basically what I have open in my browser right now thinking
> > through all of this.
> > 
> > We're attempting to take baby-steps towards moving completely from
> > eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> > first victim.
> 
> Has this been widely agreed on? It seems to me like we are mixing two
> issues:
> 1) we need to move to py3
> 2) some people want to move from eventlet (I am not convinced that the
>volume of code changes warrants the end goal - and review load)
> 
> To achieve "1)" in a lower risk change, shouldn't we rather run eventlet
> on top of asyncio? - i.e. not require widespread code changes.
> 
> So we can maintain the main loop API but move to py3. I am not sure on
> the feasibility, but seems to me like a more contained change.

Right - it's important that we see these as orthogonal questions,
particularly now that it appears eventlet is likely to be available for
Python 3 soon.

For example, if it was generally agreed that we all want to end up on
Python 3 with asyncio in the long term, you could imagine deploying
(picking random examples) Glance with Python 3 and eventlet, but
Ceilometer with Python 2 and asyncio/trollius.

However, I don't have a good handle on how your suggestion of switching
to the asyncio event loop without widespread code changes would work?

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Designate] Mid-cycle meetup July 2014

2014-07-07 Thread Hayes, Graham
Hi,

Apologies for the short notice on this, but it took us a while to finalise the 
details!

We are planning on having our mid-cycle meetup for Juno in the HP Seattle office 
from the 28th to the 29th of July.

Details for the meet up are here: 
https://wiki.openstack.org/wiki/Designate/MidCycleJuly2014 and as things change 
I will be updating that page.

If you are interested in attending (in person or remotely), please email me 
(graham.ha...@hp.com) with the subject "Designate July 
2014 Mid Cycle".

There are limited spaces for both in-person and remote attendees, so this may 
fill up fast!

Thanks,

Graham

--
Graham Hayes
Software Engineer
DNS as a Service
HP Helion Cloud - Platform Services

GPG Key: 7D28E972

graham.ha...@hp.com
M +353 87 377 8315
P +353 1 524 2175




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Chris Behrens

On Jul 7, 2014, at 11:11 AM, Angus Salkeld  wrote:

> 
> On 03/07/14 05:30, Mark McLoughlin wrote:
>> Hey
>> 
>> This is an attempt to summarize a really useful discussion that Victor,
>> Flavio and I have been having today. At the bottom are some background
>> links - basically what I have open in my browser right now thinking
>> through all of this.
>> 
>> We're attempting to take baby-steps towards moving completely from
>> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>> first victim.
> 
> Has this been widely agreed on? It seems to me like we are mixing two
> issues:

Right. Does someone have a pointer to where this was decided?

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Mark McLoughlin
On Mon, 2014-07-07 at 15:53 +0100, Gordon Sim wrote:
> On 07/07/2014 03:12 PM, Victor Stinner wrote:
> > The first step is to patch endpoints to add @trollius.coroutine to the 
> > methods,
> > and add yield From(...) on asynchronous tasks.
> 
> What are the 'endpoints' here? Are these internal to the oslo.messaging 
> library, or external to it?

The callback functions we dispatch to are called 'endpoint methods' -
e.g. they are methods on the 'endpoints' objects passed to
get_rpc_server().

> > Later we may modify Oslo Messaging to be able to call an RPC method
> > asynchronously, a method which would return a Trollius coroutine or task
> > directly. The problem is that Oslo Messaging currently hides 
> > "implementation"
> > details like eventlet.
> 
> I guess my question is how effectively does it hide it? If the answer to 
> the above is that this change can be contained within the oslo.messaging 
> implementation itself, then that would suggest its hidden reasonably well.
> 
> If, as I first understood (perhaps wrongly) it required changes to every 
> use of the oslo.messaging API, then it wouldn't really be hidden.
> 
> > Returning a Trollius object means that Oslo Messaging
> > will use explicitly Trollius. I'm not sure that OpenStack is ready for that
> > today.
> 
> The oslo.messaging API could evolve/expand to include explicitly 
> asynchronous methods that did not directly expose Trollius.

I'd expect us to add e.g.

  @asyncio.coroutine
  def call_async(self, ctxt, method, **kwargs):
      ...

to RPCClient. Perhaps we'd need to add an AsyncRPCClient in a separate
module and only add the method there - I don't have a good sense of it
yet.
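
For a sense of what consuming it might look like (a sketch only -
call_async() is the proposed addition above, not an existing oslo.messaging
API, and the surrounding names are made up):

  import asyncio

  @asyncio.coroutine
  def resize_instance(client, ctxt, instance_id, flavor):
      # 'yield from' marks exactly where control returns to the event loop
      # while we wait for the RPC reply, instead of relying on eventlet's
      # implicit switching.
      result = yield from client.call_async(ctxt, 'resize_instance',
                                            instance_id=instance_id,
                                            flavor=flavor)
      return result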

However, the key thing is that I don't anticipate us needing to change
the current API in a backwards incompatible way.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Mark McLoughlin
On Sun, 2014-07-06 at 09:28 -0400, Eoghan Glynn wrote:
> 
> > This is an attempt to summarize a really useful discussion that Victor,
> > Flavio and I have been having today. At the bottom are some background
> > links - basically what I have open in my browser right now thinking
> > through all of this.
> 
> Thanks for the detailed summary, it puts a more flesh on the bones
> than a brief conversation on the fringes of the Paris mid-cycle.
> 
> Just a few clarifications and suggestions inline to add into the
> mix.
> 
> > We're attempting to take baby-steps towards moving completely from
> > eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> > first victim.
> 
> First beneficiary, I hope :)
>  
> > Ceilometer's code is run in response to various I/O events like REST API
> > requests, RPC calls, notifications received, etc. We eventually want the
> > asyncio event loop to be what schedules Ceilometer's code in response to
> > these events. Right now, it is eventlet doing that.
> 
> Yes.
> 
> And there is one other class of stimulus, also related to eventlet,
> that is very important for triggering the execution of ceilometer
> logic. That would be the timed tasks that drive polling of:
> 
>  * REST APIs provided by other openstack services 
>  * the local hypervisor running on each compute node
>  * the SNMP daemons running at host-level etc.
> 
> and also trigger periodic alarm evaluation.
> 
> IIUC these tasks are all mediated via the oslo threadgroup's
> usage of eventlet.greenpool[1]. Would this logic also be replaced
> as part of this effort?

As part of the broader "switch from eventlet to asyncio" effort, yes
absolutely.

At the core of any event loop is code to do select() (or equivalents)
waiting for file descriptors to become readable or writable, or timers
to expire. We want to switch from the eventlet event loop to the asyncio
event loop.

The ThreadGroup abstraction from oslo-incubator is an interface to the
eventlet event loop. When you do:

  self.tg.add_timer(interval, self._evaluate_assigned_alarms)

You're saying "run evaluate_assigned_alarms() every $interval seconds,
using select() to sleep between executions".

When you do:

  self.tg.add_thread(self.start_udp)

you're saying "run some code which will either run to completion or wait
for fd or timer events using select()".

The asyncio versions of those will be:

  event_loop.call_later(delay, callback)
  event_loop.call_soon(callback)

where the supplied callbacks will be asyncio 'coroutines' which rather
than doing:

  def foo(...):
      buf = read(fd)

and rely on eventlet's monkey patch to cause us to enter the event
loop's select() when the read() blocks, we instead do:

  @asyncio.coroutine
  def foo(...):
      buf = yield from read(fd)

which shows exactly where we might yield to the event loop.

The challenge is that porting code like the foo() function above is
pretty invasive and we can't simply port an entire service at once. So,
we need to be able to support a service using both eventlet-reliant code
and asyncio coroutines.

In your example of the openstack.common.threadgroup API - we would
initially need to add support for scheduling asyncio coroutine callback
arguments as eventlet greenthreads in add_timer() and add_thread(), and
later we would port threadgroup itself to rely completely on asyncio.
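
To make that concrete, here's a small self-contained sketch (Python 3.4-era
spelling, illustrative names only - this is not the oslo threadgroup code) of
what the pure-asyncio version of a periodic task could look like once the
transition is done:

  import asyncio

  @asyncio.coroutine
  def evaluate_assigned_alarms():
      print("evaluating alarms")       # stand-in for the real polling work
      yield from asyncio.sleep(0)      # any awaited I/O would go here

  def add_timer(loop, interval, coro_func):
      # Very roughly the asyncio analogue of tg.add_timer(interval, cb).
      def tick():
          loop.create_task(coro_func())     # schedule the coroutine
          loop.call_later(interval, tick)   # re-arm the timer
      loop.call_later(interval, tick)

  loop = asyncio.get_event_loop()
  add_timer(loop, 1.0, evaluate_assigned_alarms)
  loop.call_later(3.5, loop.stop)            # end the demo after a few ticks
  loop.run_forever()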

> > Now, because we're using eventlet, the code that is run in response to
> > these events looks like synchronous code that makes a bunch of
> > synchronous calls. For example, the code might do some_sync_op() and
> > that will cause a context switch to a different greenthread (within the
> > same native thread) where we might handle another I/O event (like a REST
> > API request)
> 
> Just to make the point that most of the agents in the ceilometer
> zoo tend to react to just a single type of stimulus, as opposed
> to a mix of dispatching from both message bus and the REST API.
> 
> So to classify, we'd have:
> 
>  * compute-agent: timer tasks for polling
>  * central-agent: timer tasks for polling
>  * notification-agent: dispatch of "external" notifications from
>the message bus
>  * collector: dispatch of "internal" metering messages from the
>message bus
>  * api-service: dispatch of REST API calls
>  * alarm-evaluator: timer tasks for alarm evaluation
>  * alarm-notifier: dispatch of "internal" alarm notifications
> 
> IIRC, the only case where there's a significant mix of trigger
> styles is the partitioned alarm evaluator, where assignments of
> alarm subsets for evaluation is driven over RPC, whereas the
> actual thresholding is triggered by a timer.

Cool, that's helpful. I think the key thing is deciding which stimulus
(and hence agent) we should start with.

> > Porting from eventlet's implicit async approach to asyncio's explicit
> > async API will be seriously time consuming and we need to be able to do
> > it piece-by-piece.
> 
> Yes, I agree, a step-wise approach is the key here.

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Angus Salkeld

On 03/07/14 05:30, Mark McLoughlin wrote:
> Hey
> 
> This is an attempt to summarize a really useful discussion that Victor,
> Flavio and I have been having today. At the bottom are some background
> links - basically what I have open in my browser right now thinking
> through all of this.
> 
> We're attempting to take baby-steps towards moving completely from
> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> first victim.

Has this been widely agreed on? It seems to me like we are mixing two
issues:
1) we need to move to py3
2) some people want to move from eventlet (I am not convinced that the
   volume of code changes warrants the end goal - and review load)

To achieve "1)" in a lower risk change, shouldn't we rather run eventlet
on top of asyncio? - i.e. not require widespread code changes.

So we can maintain the main loop API but move to py3. I am not sure on
the feasibility, but seems to me like a more contained change.

-Angus

> 
> Ceilometer's code is run in response to various I/O events like REST API
> requests, RPC calls, notifications received, etc. We eventually want the
> asyncio event loop to be what schedules Ceilometer's code in response to
> these events. Right now, it is eventlet doing that.
> 
> Now, because we're using eventlet, the code that is run in response to
> these events looks like synchronous code that makes a bunch of
> synchronous calls. For example, the code might do some_sync_op() and
> that will cause a context switch to a different greenthread (within the
> same native thread) where we might handle another I/O event (like a REST
> API request) while we're waiting for some_sync_op() to return:
> 
>   def foo(self):
>       result = some_sync_op()  # this may yield to another greenlet
>       return do_stuff(result)
> 
> Eventlet's infamous monkey patching is what make this magic happen.
> 
> When we switch to asyncio's event loop, all of this code needs to be
> ported to asyncio's explicitly asynchronous approach. We might do:
> 
>   @asyncio.coroutine
>   def foo(self):
>       result = yield from some_async_op(...)
>       return do_stuff(result)
> 
> or:
> 
>   @asyncio.coroutine
>   def foo(self):
>       fut = Future()
>       some_async_op(callback=fut.set_result)
>       ...
>       result = yield from fut
>       return do_stuff(result)
> 
> Porting from eventlet's implicit async approach to asyncio's explicit
> async API will be seriously time consuming and we need to be able to do
> it piece-by-piece.
> 
> The question then becomes what do we need to do in order to port a
> single oslo.messaging RPC endpoint method in Ceilometer to asyncio's
> explicit async approach?
> 
> The plan is:
> 
>   - we stick with eventlet; everything gets monkey patched as normal
> 
>   - we register the greenio event loop with asyncio - this means that 
> e.g. when you schedule an asyncio coroutine, greenio runs it in a 
> greenlet using eventlet's event loop
> 
>   - oslo.messaging will need a new variant of eventlet executor which 
> knows how to dispatch an asyncio coroutine. For example:
> 
> while True:
>     incoming = self.listener.poll()
>     method = dispatcher.get_endpoint_method(incoming)
>     if asyncio.iscoroutinefunc(method):
>         result = method()
>         self._greenpool.spawn_n(incoming.reply, result)
>     else:
>         self._greenpool.spawn_n(method)
> 
> it's important that even with a coroutine endpoint method, we send 
> the reply in a greenthread so that the dispatch greenthread doesn't
> get blocked if the incoming.reply() call causes a greenlet context
> switch
> 
>   - when all of ceilometer has been ported over to asyncio coroutines, 
> we can stop monkey patching, stop using greenio and switch to the 
> asyncio event loop
> 
>   - when we make this change, we'll want a completely native asyncio 
> oslo.messaging executor. Unless the oslo.messaging drivers support 
> asyncio themselves, that executor will probably need a separate
> native thread to poll for messages and send replies.
> 
> If you're confused, that's normal. We had to take several breaks to get
> even this far because our brains kept getting fried.
> 
> HTH,
> Mark.
> 
> Victor's excellent docs on asyncio and trollius:
> 
>   https://docs.python.org/3/library/asyncio.html
>   http://trollius.readthedocs.org/
> 
> Victor's proposed asyncio executor:
> 
>   https://review.openstack.org/70948
> 
> The case for adopting asyncio in OpenStack:
> 
>   https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio
> 
> A previous email I wrote about an asyncio executor:
> 
>  http://lists.openstack.org/pipermail/openstack-dev/2013-June/009934.html
> 
> The mock-up of an asyncio executor I wrote:
> 
>   
> https://github.com/markmc/oslo-incubator/blob/8509b8b/openstack/common/messaging/_executors/impl_tulip.py
>

Re: [openstack-dev] [OpenStack-Infra] [neutron] Specs repo not synced

2014-07-07 Thread Jeremy Stanley
On 2014-07-04 06:05:08 +0200 (+0200), Andreas Jaeger wrote:
> they should sync automatically, something is wrong on the infra site -
> let's tell them.

Yes, it seems someone uploaded a malformed change which Gerrit's
jgit backend was okay with but which GitHub is refusing, preventing
further replication of that repository from Gerrit to GitHub. The
situation is being tracked in https://launchpad.net/bugs/1337735 .
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Consumer Registration API

2014-07-07 Thread Adam Harwell
That sounds sensical to me. It actually still saves me work in the long-run, I 
think. :)

--Adam

https://keybase.io/rm_you


From: Douglas Mendizabal <douglas.mendiza...@rackspace.com>
Date: Wednesday, July 2, 2014 9:02 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Cc: John Wood <john.w...@rackspace.com>, Adam Harwell <adam.harw...@rackspace.com>
Subject: [barbican] Consumer Registration API

I was looking through some Keystone docs and noticed that for version 3.0 of 
their API [1] Keystone merged the Service and Admin API into a single core API. 
 I haven’t gone digging through mail archives, but I imagine they had a pretty 
good reason to do that.

Adam, I know you’ve already implemented quite a bit of this, and I hate to ask 
this, but how do you feel about adding this to the regular API instead of 
building out the Service API for Barbican?

[1] 
https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3.md#whats-new-in-version-30


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Andrew Mann
What's the use case for an IPv6 endpoint? This service is just for instance
metadata, so as long as a requirement to support IPv4 is in place, using
solely an IPv4 endpoint avoids a number of complexities:
- Which one to try first?
- Which one is authoritative?
- Are both required to be present? I.e. can an instance really not have any
IPv4 support and expect to work?
- I'd presume the IPv6 endpoint would have to be link-local scope? Would
that mean that each subnet would need a compute metadata endpoint?


On Mon, Jul 7, 2014 at 12:28 PM, Vishvananda Ishaya 
wrote:

> I haven’t heard of anyone addressing this, but it seems useful.
>
> Vish
>
> On Jul 7, 2014, at 9:15 AM, Nir Yechiel  wrote:
>
> > AFAIK, the cloud-init metadata service can currently be accessed only by
> sending a request to http://169.254.169.254, and no IPv6 equivalent is
> currently implemented. Is anyone working on this, or has anyone tried to
> address this before?
> >
> > Thanks,
> > Nir
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Joshua Harlow
So I've been thinking how to respond to this email, and here goes (shields
up!),

First things first; thanks mark and victor for the detailed plan and
making it visible to all. It's very nicely put together and the amount of
thought put into it is great to see. I always welcome an effort to move
toward a new structured & explicit programming model (which asyncio
clearly helps make possible and strongly encourages/requires).

So now to some questions that I've been thinking about how to
address/raise/ask (if any of these appear as FUD, they were not meant to
be):

* Why focus on integrating a replacement low-level execution model instead
of integrating a higher-level workflow library or service (taskflow,
mistral, or another)?

Since pretty much all of openstack is focused around workflows that get
triggered by some API activated by some user/entity having a new execution
model (asyncio) IMHO doesn't seem to be shifting the needle in the
direction that improves the scalability, robustness and crash-tolerance of
those workflows (and the associated projects those workflows are currently
defined & reside in). I *mostly* understand why we want to move to asyncio
(py3, getting rid of eventlet, better performance? new awesomeness...) but
it doesn't feel that important to actually accomplish seeing the big holes
that openstack has right now with scalability, robustness... Let's imagine
a different view on this; if all openstack projects declaratively define
the workflows their APIs trigger (nova is working on task APIs, cinder is
getting there too...), and in the future when the projects are *only*
responsible for composing those workflows and handling the API inputs &
responses then the need for asyncio or other technology can move out from
the individual projects and into something else (possibly something that
is being built & used as we speak). With this kind of approach the
execution model can be an internal implementation detail of the workflow
'engine/processor' (it will also be responsible for fault-tolerant, robust
and scalable execution). If this seems reasonable, then why not focus on
integrating said thing into openstack and move the projects to a model
that is independent of eventlet, asyncio (or the next greatest thing)
instead? This seems to push the needle in the right direction and IMHO
(and hopefully others opinions) has a much bigger potential to improve the
various projects than just switching to a new underlying execution model.

* Was the heat (asyncio-like) execution model[1] examined and learned from
before considering moving to asyncio?

I will try not to put words into the heat developers mouths (I can't do it
justice anyway, hopefully they can chime in here) but I believe that heat
has a system that is very similar to asyncio and coroutines right now and
they are actively moving to a different model due to problems in part due
to using that coroutine model in heat. So if they are moving somewhat away
from that model (to a more declaratively workflow model that can be
interrupted and converged upon [2]) why would it be beneficial for other
projects to move toward the model they are moving away from (instead of
repeating the issues the heat team had with coroutines, ex, visibility
into stack/coroutine state, scale limitations, interruptibility...)?

  
  * A side-question, how do asyncio and/or trollius support debugging, do
they support tracing individual co-routines? What about introspecting the
state a coroutine has associated with it? Eventlet at least has
http://eventlet.net/doc/modules/debug.html (which is better than nothing);
does an equivalent exist?

* What's the current thinking on avoiding the chaos (code-change-wise and
brain-power-wise) that will come from a change to asyncio?

This is the part that I really wonder about. Since asyncio isn't just a
drop-in replacement for eventlet (which hid the async part under its
*black magic*), I very much wonder how the community will respond to this
kind of mindset change (along with its new *black magic*). Will the
TC/foundation offer training, tutorials... on the change that this brings?
Should the community even care? If we say just focus on workflows & let
the workflow 'engine/processor' do the dirty work; then I believe the
community really doesn't need to care (and rightfully so) about how their
workflows get executed (either by taskflow, mistral, pigeons...). I
believe this seems like a fair assumption to make; it could even be
reinforced (I am not an expert here) with the defcore[4] work that seems
to be standardizing the integration tests that verify those workflows (and
associated APIs) act as expected in the various commercial implementations.

* Is the larger python community ready for this?

Seeing other responses for supporting libraries that aren't asyncio
compatible it doesn't inspire confidence that this path is ready to be
headed down. Partially this is due to the fact that its a completely new
programming model and a lot of 

Re: [openstack-dev] [nova][libvirt] why use domain destroy instead of shutdown?

2014-07-07 Thread melanie witt
On Jul 4, 2014, at 3:11, "Day, Phil"  wrote:
> I have a BP (https://review.openstack.org/#/c/89650) and the first couple of 
> bits of implementation (https://review.openstack.org/#/c/68942/  
> https://review.openstack.org/#/c/99916/) out for review on this very topic ;-)

Great, I'll review and add my comments then. I'm interested in this because I 
have observed a problematic number of guest file system corruptions when 
migrating RHEL guests because of the hard power off. 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][specs] listing the entire API in a new spec

2014-07-07 Thread Dolph Mathews
On Fri, Jul 4, 2014 at 12:31 AM, Steve Martinelli 
wrote:

> To add to the growing pains of keystone-specs, one thing I've noticed is,
> there is inconsistency in the 'REST API Impact' section.
>
> To be clear here, I don't mean we shouldn't include what new APIs will be
> created, I think that is essential. But rather, remove the need to
> specifically spell out the request and response blocks.
>
> Personally, I find it redundant for a few reasons:


Agree, we need to eliminate the redundancy...


>
>
> 1) We already have identity-api, which will need to be updated once the
> spec is completed anyway.


So my thinking is to merge the content of openstack/identity-api into
openstack/keystone-specs. We use identity-api just like we use
keystone-specs anyway, but only for a subset of our work.


>
> 2) It's easy to get bogged down in the spec review as it is, I don't want
> to have to point out mistakes in the request/response blocks too (as I'll
> need to do that when reviewing the identity-api patch anyway).


I personally see value in having them proposed as one patchset - it's all
design work, so I think it should be approved as a cohesive piece of design.


>
> 3) Come time to propose the identity-api patch, there might be differences
> in what was proposed in the spec.


There *shouldn't* be though... unless you're just talking about typos/etc.
It's possible to design an unimplementable or unusable API though, and that
can be discovered (at latest) by attempting an implementation... at that
point, I think it's fair to go back and revise the spec/API with the
solution.


>
>
> Personally I'd be OK with just stating the HTTP method and the endpoint.
> Thoughts?


Not all API-impacting changes introduce new endpoint/method combinations,
they may just add a new attribute to an existing resource - and this is
still a bit redundant with the identity-api repo.


>
> Many apologies in advance for my pedantic-ness!
>

Laziness*

(lazy engineers are just more efficient)


> Regards,
>
> *Steve Martinelli*
> Software Developer - Openstack
> Keystone Core Member
> --
>  *Phone:* 1-905-413-2851
> * E-mail:* *steve...@ca.ibm.com* 
> 8200 Warden Ave
> Markham, ON L6G 1C7
> Canada
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Vishvananda Ishaya
I haven’t heard of anyone addressing this, but it seems useful.

Vish

On Jul 7, 2014, at 9:15 AM, Nir Yechiel  wrote:

> AFAIK, the cloud-init metadata service can currently be accessed only by 
> sending a request to http://169.254.169.254, and no IPv6 equivalent is 
> currently implemented. Is anyone working on this, or has anyone tried to 
> address this before?
> 
> Thanks,
> Nir
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Trove Blueprint Meeting on 7 July canceled

2014-07-07 Thread Nikhil Manchanda
Hey folks:

There's nothing to discuss on the BP Agenda for this week and most folks
are busy working on existing BPs and bugs, so I'd like to cancel the
Trove blueprint meeting for this week.

See you guys at the regular Trove meeting on Wednesday.

Thanks,
Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Size of Log files

2014-07-07 Thread Dolph Mathews
On Mon, Jul 7, 2014 at 8:34 AM, Brant Knudson  wrote:

>
> Henry -
>
>
> On Mon, Jul 7, 2014 at 7:17 AM, Henry Nash 
> wrote:
>
>> Hi
>>
>> Our debug log file size is getting pretty huge... a typical py26 jenkins
>> run produces a whisker under 50Mb of log - which is problematic for at
>> least the reason that our current jenkins setup consider the test run a
>> failure if the log file is > 50 Mb.  (see
>> http://logs.openstack.org/14/74214/40/check/gate-keystone-python26/1714702/subunit_log.txt.gz
>> as an example for a recent patch I am working on).  Obviously we could just
>> raise the limit, but we should probably also look at how effective our
>> logging is.  Reviewing of the log file listed above shows:
>>
>> 1) Some odd corruption.  I think this is related to the subunit
>> concatenation of output files, but haven't been able to find the exact
>> cause (looking at a local subunit file shows some weird characters, but not as
>> bad as when run as part of jenkins).  It may be that this corruption is dumping
>> more data than we need into the log file.
>>
>>
Bug report!


>  2) There are some spectacularly uninteresting log entries, e.g. 25 lines
>> of :
>>
>> Initialized with method overriding = True, and path info altering = True
>>
>> as part of each unit test call that uses routes! (This is generated as
>> part of the routes.middleware init)
>>
>> 3) Some seemingly over zealous logging, e.g. the following happens
>> multiple times per call:
>>
>> Parsed 2014-07-06T14:47:46.850145Z into {'tz_sign': None,
>> 'second_fraction': '850145', 'hour': '14', 'daydash': '06', 'tz_hour':
>> None, 'month': None, 'timezone': 'Z', 'second': '46', 'tz_minute': None,
>> 'year': '2014', 'separator': 'T', 'monthdash': '07', 'day': None, 'minute':
>> '47'} with default timezone 
>>
>> Got '2014' for 'year' with default None
>>
>> Got '07' for 'monthdash' with default 1
>>
>> Got 7 for 'month' with default 7
>>
>> Got '06' for 'daydash' with default 1
>>
>> Got 6 for 'day' with default 6
>>
>> Got '14' for 'hour' with default None
>>
>> Got '47' for 'minute' with default None
>>
>>
> The default log levels for the server are set in oslo-incubator's log
> module[1]. This is where it sets iso8601=WARN which should get rid of #3.
>
> In addition to these defaults, when the server starts it calls
> config.set_default_for_default_log_levels()[2] which sets the routes logger
> to INFO, which should take care of #2. The unit tests could do something
> similar.
>
> Maybe the tests can setup logging the same way.
>
> [1]
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/openstack/common/log.py?id=26364496ca292db25c2e923321d2366e9c4bedc3#n158
> [2]
> http://git.openstack.org/cgit/openstack/keystone/tree/bin/keystone-all#n116
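
For illustration, a minimal test-setup hook along those lines, using only the
stdlib logging module - the logger names are taken from the discussion above,
and the right place to call this in keystone's tests is left open:

  import logging

  def quiet_noisy_loggers():
      logging.getLogger('iso8601').setLevel(logging.WARN)            # item 3
      logging.getLogger('routes.middleware').setLevel(logging.INFO)  # item 2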
>
>
>>  4) LDAP is VERY verbose, e.g. 30-50 lines of debug per call to the
>> driver.
>>
>> I'm happy to work to trim back some of worst excessesbut open to
>> ideas as to whether we need a more formal approach to this...perhaps a good
>> topic for our hackathon this week?
>>
>>
This would be a great topic, but given that this is a community-wide issue,
we already have some community-wide direction:

https://wiki.openstack.org/wiki/Security/Guidelines/logging_guidelines#Log_Level_Usage_Recommendations
https://wiki.openstack.org/wiki/LoggingStandards

We certainly have room to better adhere to these expectations (I also think
these two pages should be consolidated).


>  Henry
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-07 Thread Victor Lowther
As one of the original authors of dracut, I would love to see it being used
to build initramfs images for TripleO. dracut is flexible, works across a
wide variety of distros, and removes the need to have special-purpose
toolchains and packages for use by the initramfs.


On Thu, Jul 3, 2014 at 10:12 PM, Ben Nemec  wrote:

> I've recently been looking into using dracut to build the
> deploy-ramdisks that we use for TripleO.  There are a few reasons for
> this: 1) dracut is a fairly standard way to generate a ramdisk, so users
> are more likely to know how to debug problems with it.  2) If we build
> with dracut, we get a lot of the udev/net/etc stuff that we're currently
> doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
> doesn't include busybox, so we can't currently build ramdisks on that
> distribution using the existing ramdisk element.
>
> For the RHEL issue, this could just be an alternate way to build
> ramdisks, but given some of the other benefits I mentioned above I
> wonder if it would make sense to look at completely replacing the
> existing element.  From my investigation thus far, I think dracut can
> accommodate all of the functionality in the existing ramdisk element,
> and it looks to be available on all of our supported distros.
>
> So that's my pitch in favor of using dracut for ramdisks.  Any thoughts?
>  Thanks.
>
> https://dracut.wiki.kernel.org/index.php/Main_Page
>
> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Suspend via virDomainSave() rather than virDomainManagedSave()

2014-07-07 Thread Vishvananda Ishaya
On Jul 6, 2014, at 10:22 PM, Rafi Khardalian  wrote:

> Hi All --
> 
> It seems as though it would be beneficial to use virDomainSave rather than 
> virDomainManagedSave for suspending instances.  The primary benefit of doing 
> so would be to locate the save files within the instance's dedicated 
> directory.  As it stands suspend operations are utilizing ManagedSave, which 
> places all save files in a single directory (/var/lib/libvirt/qemu/save by 
> default on Ubuntu).  This is the only instance-specific state data which 
> lives both outside the instance directory and the database.  Also, 
> ManagedSave does not consider Libvirt's "save_image_format" directive and 
> stores all saves as raw, rather than offering the various compression options 
> available when DomainSave is used.
> 
> ManagedSave is certainly easier but offers less control than what I think is 
> desired in this case.  Is there anything I'm missing?  If not, would folks be 
> open to this change?

+1

Vish

> 
> Thanks,
> Rafi
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Suspend via virDomainSave() rather than virDomainManagedSave()

2014-07-07 Thread Daniel P. Berrange
On Sun, Jul 06, 2014 at 10:22:44PM -0700, Rafi Khardalian wrote:
> Hi All --
> 
> It seems as though it would be beneficial to use virDomainSave rather than
> virDomainManagedSave for suspending instances.  The primary benefit of
> doing so would be to locate the save files within the instance's dedicated
> directory.  As it stands suspend operations are utilizing ManagedSave,
> which places all save files in a single directory
> (/var/lib/libvirt/qemu/save by default on Ubuntu).  This is the only
> instance-specific state data which lives both outside the instance
> directory and the database.

Yes, that is a bit of an oddity from OpenStack's POV. 

>  Also, ManagedSave does not consider Libvirt's
> "save_image_format" directive and stores all saves as raw, rather than
> offering the various compression options available when DomainSave is used.

That's not correct. Both APIs use the 'save_image_format' config
parameter in the same way, at least with current libvirt versions.

> ManagedSave is certainly easier but offers less control than what I think
> is desired in this case.  Is there anything I'm missing?  If not, would
> folks be open to this change?

The main difference between Save & ManagedSave, is that with ManagedSave,
any attempt to start the guest will automatically restore from the save
image. So if we changed to use Save, there would need to be a bunch of
work to make sure all relevant code paths use 'virDomainRestore' instead
of virDomainCreate, when there is a save image in the instances directory.
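
For anyone skimming, a minimal sketch of the two call patterns with the
libvirt-python bindings (the paths and instance-directory layout here are
assumptions for illustration, not Nova's actual code):

  import os
  import libvirt

  def suspend_explicit(dom, instance_dir):
      # Explicit save: we pick the file location (e.g. the instance dir)...
      dom.save(os.path.join(instance_dir, 'suspend.save'))

  def resume_explicit(conn, name, instance_dir):
      save_file = os.path.join(instance_dir, 'suspend.save')
      if os.path.exists(save_file):
          # ...but every start path must then remember to restore from it.
          conn.restore(save_file)
      else:
          conn.lookupByName(name).create()

  def suspend_managed(dom):
      # Managed save: libvirt chooses the location, and a later
      # dom.create() transparently resumes from the saved image.
      dom.managedSave(0)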

I don't have a strong opinion on which is "best" to use, really. AFAICT, with
suitable coding, either can be made to satisfy Nova's functional needs.

So to me it probably comes down to a question as to how important it is
to have the save images in the instances directory. Your rationale above
feels mostly to be about "cleanliness" of having everything in one place.
Could there be any functional downsides or upsides to having them in the
instances directory? E.g. if the instances directory is on NFS, does that
give us a compelling reason to choose one approach vs the other?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >