[openstack-dev] [nova] A problem about pci-passthrough

2014-03-03 Thread Liuji (Jeremy)
Hi, all

I find a problem about pci-passthrough.

Test scenario:
1) There are two compute nodes in the environment, named A and B. A has two NICs with vendor_id='8086' and product_id='105e'; B has two NICs with vendor_id='8086' and product_id='10c9'.
2) I configured pci_alias={"vendor_id":"8086", "product_id":"10c9", "name":"a1"} in nova.conf on the controller node, and of course set the pci_passthrough_whitelist on these two compute nodes separately.
3) Finally, I created a flavor named MyTest with extra_specs = {u'pci_passthrough:alias': u'a1:1'}.
4) When I create a new instance with the MyTest flavor, it either starts successfully or goes to an error state, randomly.

The problem is in the _schedule function of nova/scheduler/filter_scheduler.py:
    chosen_host = random.choice(
        weighed_hosts[0:scheduler_host_subset_size])
    selected_hosts.append(chosen_host)

    # Now consume the resources so the filter/weights
    # will change for the next instance.
    chosen_host.obj.consume_from_instance(instance_properties)

Since scheduler_host_subset_size is configured to 2 and the weighed_hosts are A and B, the chosen_host is selected randomly between them.
When chosen_host is B the instance starts, but when chosen_host is A the instance goes to an error state: consume_from_instance raises an exception, because A has no device matching the requested alias.
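For clarity, here is a small, self-contained toy model of the behaviour described above (hypothetical names, not the real scheduler code): both hosts survive the weighing stage, but only host B can actually satisfy the PCI request, so picking A from the random subset fails later when the resources are consumed.

    import random

    # Toy reproduction of the symptom: weighed hosts A and B, but only B has
    # the 8086:10c9 device that alias "a1" maps to.
    pci_pools = {"A": {"8086:105e": 2}, "B": {"8086:10c9": 2}}
    requested = ("8086:10c9", 1)

    def consume_from_instance(host, request):
        device, count = request
        if pci_pools[host].get(device, 0) < count:
            raise RuntimeError("PCI request %s cannot be satisfied on host %s"
                               % (request, host))
        pci_pools[host][device] -= count

    scheduler_host_subset_size = 2
    weighed_hosts = ["A", "B"]
    chosen_host = random.choice(weighed_hosts[0:scheduler_host_subset_size])
    consume_from_instance(chosen_host, requested)  # fails roughly half the time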

I think this is a bug. Or is there a problem with my test procedure, or is some additional configuration needed?

Thanks,
Jeremy Liu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's the name of IRC channel for novaclient?

2014-03-03 Thread wu jiang
Hi all,

I would like to know the name of the IRC channel for novaclient; I want to ask something about one BP.

I don't know whether such a channel exists or not, and I couldn't find it in the wiki [1].

If you know, please tell me. Thanks very much.


Best Wishes,
wingwj

---
[1]. https://wiki.openstack.org/wiki/IRC
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Questions about guest NUMA and memory binding policies

2014-03-03 Thread Liuji (Jeremy)
Hi, all

I searched the current blueprints and old mails in the mailing list, but found nothing about guest NUMA or setting memory binding policies.
I only found a blueprint about vCPU topology and a blueprint about CPU binding:

https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
https://blueprints.launchpad.net/nova/+spec/numa-aware-cpu-binding

Is there any plan for guest NUMA and memory binding policy support?

Thanks,
Jeremy Liu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's the name of IRC channel for novaclient?

2014-03-03 Thread Gary Kotton
Hi,
This is the same channel as nova – that is #openstack-nova
Thanks
Gary

From: wu jiang win...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Monday, March 3, 2014 10:11 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] What's the name of IRC channel for novaclient?

Hi all,

I want to know what's the name of IRC channel for novaclient? I want to ask 
something about one BP.

And I don't know whether it exists or not, I don't find it in wiki[1].

If you know the information, please tell me. Thanks very much.


Best Wishes,
wingwj

---
[1]. 
https://wiki.openstack.org/wiki/IRC

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Page layout questions

2014-03-03 Thread Ekaterina Fedorova
Hi everyone!


During investigation of this bug [1] I found some weird things in the Horizon
page layout, and it looks like there are errors deep inside.

First, the 'div.sidebar' container has the property 'float: left' without any
width set, while all guidelines [2] say that the width should be set.

Was this done on purpose? Should the width of .sidebar be static or dynamic
(or limited with a max-width), depending on the number of registered
dashboards?

Second, 'div.main_content' has the property 'padding-right: 250px', obviously
so that it does not overlap with 'div.sidebar'. I guess this is wrong, since
'div.main_content' should flow around 'div.sidebar'. Am I right?


[1] https://bugs.launchpad.net/horizon/+bug/1286093
[2] http://css.maxdesign.com.au/floatutorial/introduction.htm


Regards, Kate.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread Zhangleiqiang
Hi, stackers:

Libvirt/qemu have supported online extend for multiple disk formats, including qcow2, sparse, etc., but Cinder currently only supports extending volumes offline.

Offline extend forces the instance to be shut off or the volume to be detached. I think it would be useful to introduce an online-extend feature to Cinder, especially for the file-system based drivers, e.g. NFS, GlusterFS, etc.
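For reference, a minimal sketch of the primitive this would build on (assuming the libvirt-python binding and a hypothetical domain/disk name; this is an illustration, not a proposed Cinder or Nova patch):

    import libvirt

    # Grow a disk that is attached to a running guest. blockResize() takes the
    # new size in KiB unless the BYTES flag is passed.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')   # hypothetical domain name
    new_size_kib = 20 * 1024 * 1024                # 20 GiB
    dom.blockResize('vda', new_size_kib)

The call asks qemu to grow the image backing the attached disk while the guest keeps running, which is roughly the behaviour the proposed feature would expose through Cinder and Nova.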

Are there any other suggestions?

Thanks.


--
zhangleiqiang

Best Regards


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Nova failed to spawn when download disk image from Glance timeout

2014-03-03 Thread Nora Zhou
Hi,


I recently deployed a bare-metal node instance using a Heat template. However, Nova
failed to spawn the instance due to a timeout error. When I looked into the code I found
that the timeout is related to Nova downloading the disk image from Glance. The
nova-schedule.log shows the following:

2014-02-28 02:49:48.046 2136 ERROR nova.compute.manager [req-09e61b23-436f-4425-8db0-10dd1aea2e39 85bbc1abb4254761a5452654a6934b75 692e595702654930936a65d1a658cff4] [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] Instance failed to spawn
2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1417, in _spawn
    network_info=network_info,
  File "/usr/lib/python2.7/dist-packages/nova/virt/baremetal/pxe.py", line 444, in cache_images
    self._cache_tftp_images(context, instance, tftp_image_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/baremetal/pxe.py", line 335, in _cache_tftp_images
    project_id=instance['project_id'],
  File "/usr/lib/python2.7/dist-packages/nova/virt/baremetal/utils.py", line 33, in cache_image
    user_id, project_id)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 645, in fetch_image
    max_size=max_size)
  File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 196, in fetch_to_raw
    max_size=max_size)
  File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 190, in fetch
    image_service.download(context, image_id, dst_path=path)
  File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 360, in download
    for chunk in image_chunks:
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 478, in __iter__
    chunk = self.next()
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 494, in next
    chunk = self._resp.read(CHUNKSIZE)
  File "/usr/lib/python2.7/httplib.py", line 561, in read
    s = self.fp.read(amt)
  File "/usr/lib/python2.7/socket.py", line 380, in read
    data = self._sock.recv(left)
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 262, in recv
    timeout_exc=socket.timeout("timed out"))

Re: [openstack-dev] [nova] bp proposal: New per-aggregate filters targeted for icehouse-3

2014-03-03 Thread sahid
Greetings,

I wanted to ask if some cores could take a look at these reviews. The code has
been up for review for two months and hasn't received many reviews. All of these
blueprints are approved for icehouse-3.

https://review.openstack.org/#/c/65452/
https://review.openstack.org/#/c/65108/
https://review.openstack.org/#/c/65474/

Thanks a lot,
s.


- Original Message -
From: sahid sahid.ferdja...@cloudwatt.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, January 28, 2014 5:14:49 PM
Subject: [openstack-dev] [nova] bp proposal: New per-aggregate filters targeted 
for icehouse-3

Hi there,

The blueprint approval deadline is coming up really quickly and I understand that there
is a lot of work to do, but I would like to get your attention on 3 new filters
targeted for icehouse-3, with code already in review:

 - 
https://blueprints.launchpad.net/nova/+spec/per-aggregate-disk-allocation-ratio
 - 
https://blueprints.launchpad.net/nova/+spec/per-aggregate-max-instances-per-host
 - https://blueprints.launchpad.net/nova/+spec/per-aggregate-max-io-ops-per-host

The main aim of these blueprints is to make these configurations settable per host
aggregate instead of only globally. These features are interesting when building a
large cloud. A rough sketch of the common pattern they follow is below.
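As an illustration only (hypothetical helper, not the code under review), the filters share this general shape: read a value from the aggregates a host belongs to, and fall back to the global nova.conf option when no aggregate sets it:

    # Hedged sketch of the per-aggregate override pattern; attribute and key
    # names here are assumptions for illustration.
    def aggregate_value(host_state, key, default):
        values = set()
        for aggregate in getattr(host_state, 'aggregates', []):
            if key in aggregate.metadata:
                values.add(aggregate.metadata[key])
        if not values:
            return default
        # Be conservative if aggregates disagree.
        return min(float(value) for value in values)

    # e.g. ratio = aggregate_value(host_state, 'disk_allocation_ratio', 1.0)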

Can you take a small amount of time to let me know how to move forward with 
them?

Thanks a lot,
s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's the name of IRC channel for novaclient?

2014-03-03 Thread wu jiang
Hi Gary,

Thanks for your information. I'll consult the BP in there. :)


wingwj



On Mon, Mar 3, 2014 at 4:34 PM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 This is the same channel as nova - that is #openstack-nova
 Thanks
 Gary

 From: wu jiang win...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, March 3, 2014 10:11 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] What's the name of IRC channel for novaclient?

 Hi all,

 I want to know what's the name of IRC channel for novaclient? I want to
 ask something about one BP.

 And I don't know whether it exists or not, I don't find it in wiki[1].

 If you know the information, please tell me. Thanks very much.


 Best Wishes,
 wingwj

 ---
 [1]. https://wiki.openstack.org/wiki/IRC



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-03-03 Thread Thierry Carrez
James Slagle wrote:
 I'd like to ask that the following repositories for TripleO be included
 in next week's cutting of icehouse-3:
 
 http://git.openstack.org/openstack/tripleo-incubator
 http://git.openstack.org/openstack/tripleo-image-elements
 http://git.openstack.org/openstack/tripleo-heat-templates
 http://git.openstack.org/openstack/diskimage-builder
 http://git.openstack.org/openstack/os-collect-config
 http://git.openstack.org/openstack/os-refresh-config
 http://git.openstack.org/openstack/os-apply-config
 
 Are you willing to run through the steps on the How_To_Release wiki for
 these repos, or should I do it next week? Just let me know how or what
 to coordinate. Thanks.

I looked into this in more detail and there are a number of issues, as the TripleO
projects were not originally configured to be released.

First, some basic jobs are missing, like a tarball job for
tripleo-incubator.

Then the release scripts are made for integrated projects, which follow
a number of rules that TripleO doesn't follow:

- One Launchpad project per code repository, under the same name (here
you have tripleo-* under tripleo + diskimage-builder separately)
- The person doing the release should be a driver (or release
manager) for the project (here, Robert is the sole driver for
diskimage-builder)
- Projects should have an icehouse series and an icehouse-3 milestone

Finally the person doing the release needs to have push annotated tags
/ create reference permissions over refs/tags/* in Gerrit. This seems
to be missing for a number of projects.

In all cases I'd rather limit myself to incubated/integrated projects,
rather than extend to other projects, especially on a busy week like
feature freeze week. So I'd advise that for icehouse-3 you follow the
following simplified procedure:

- Add missing tarball-creation jobs
- Add missing permissions for yourself in Gerrit
- Skip milestone-proposed branch creation
- Push tag on master when ready (this will result in tarballs getting
built at tarballs.openstack.org)

Optionally:
- Create icehouse series / icehouse-3 milestone for projects in LP
- Manually create release and upload resulting tarballs to Launchpad
milestone page, under the projects that make the most sense (tripleo-*
under tripleo, etc)

I'm still a bit confused about the goals here. My original understanding
was that TripleO was explicitly NOT following the release cycle. How
much of the integrated projects' release process do you want to reuse?
We do a feature freeze on icehouse-3, then bugfix on master until -rc1,
then we cut an icehouse release branch (milestone-proposed), unfreeze
master and let it continue as Juno. Is that what you want to do too? Do
you want releases? Or do you actually just want stable branches?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-03 Thread Thierry Carrez
Sean Dague wrote:
 On 03/01/2014 06:30 PM, John Griffith wrote:
 Something that came up recently in the Cinder project is that one of the
 backend device vendors wasn't happy with a feature that somebody was
 working on and contributed a patch for.  Instead of providing a
 meaningful review and suggesting alternatives to the patch they set up
 meetings with other vendors leaving the active members of the community
 out and picked things apart in their own format out of the public view.
  Nobody from the core Cinder team was involved in these discussions or
 meetings (at least that I've been made aware of).

It's not only sad, it's also extremely ineffective... So it's dumb.

 I don't want to go into detail about who, what, where etc at this point.
  I instead, I want to point out that in my opinion this is no way to
 operate in an Open Source community.  Collaboration is one thing, but
 ambushing other peoples work is entirely unacceptable in my opinion.
  OpenStack provides a plethora of ways to participate and voice your
 opinion, whether it be this mailing list, the IRC channels which are
 monitored daily and also host a published weekly meeting for most
 projects.  Of course when in doubt you're welcome to send me an email at
 any time with questions or concerns that you have about a patch.  In any
 case however the proper way to address concerns about a submitted patch
 is to provide a review for that patch.
 
 Honestly, while I realize you don't want to name names, I actually want
 to know about bad actors in our community. Because I think that if bad
 actors aren't exposed, then they tend to keep up the bad behavior.
 
 Social pressure is important here.

One side of me would prefer it if everyone were friends and we never
blamed anyone for defective behavior in our community. But, as I wrote
recently [1], it's only a matter of time until we face problems where the
only solution is to readjust the institutional and reputational
pressures. It's the price to pay so that cooperation stays the norm.

It will certainly hurt the first one we nail on the wall. So here is one
reputational pressure: you don't want to be that company.

[1] http://fnords.wordpress.com/2014/02/24/the-dilemma-of-open-innovation/

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Questions about syncing non-imported files

2014-03-03 Thread ChangBo Guo
1) I found that the modules tracked in openstack-common.conf are not consistent
with the actual modules in the 'openstack/common' directory in some projects,
like Nova. I drafted a script to enforce that check in
https://review.openstack.org/#/c/76901/ (a rough sketch of what such a check
looks like is at the end of this mail). It may need more work to improve it;
please help review :).

2) Some projects, like Nova and Cinder, include a README in the
'openstack/common' directory which is out of date, but other projects don't
include it. Should we keep the file in 'openstack/common', move it somewhere
else, or just remove it?

3) What kind of module can be recorded in openstack-common.conf? Only modules
in the openstack/common directory? This is an example:
https://github.com/openstack/nova/blob/master/openstack-common.conf#L17

4) We have some useful check scripts in tools/; is there any plan or rule to
sync them to downstream projects? I would like to volunteer for this.
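For point 1), a rough sketch of the kind of consistency check I mean (an illustration, not the patch under review; it assumes the usual 'module=foo' lines in openstack-common.conf):

    import os
    import re
    import sys

    def tracked_modules(conf_path):
        # Module names listed as 'module=foo' lines in openstack-common.conf.
        modules = set()
        with open(conf_path) as conf:
            for line in conf:
                match = re.match(r'\s*module\s*=\s*(\S+)', line)
                if match:
                    modules.add(match.group(1).split('.')[0])
        return modules

    def present_modules(common_dir):
        # Top-level modules/packages actually synced under openstack/common.
        names = set()
        for entry in os.listdir(common_dir):
            if entry.startswith('_') or entry.endswith(('.pyc', '.pyo')):
                continue
            names.add(entry[:-3] if entry.endswith('.py') else entry)
        return names

    if __name__ == '__main__':
        conf_path, common_dir = sys.argv[1], sys.argv[2]
        tracked, present = tracked_modules(conf_path), present_modules(common_dir)
        if tracked - present:
            print('listed but not synced: %s' % sorted(tracked - present))
        if present - tracked:
            print('synced but not listed: %s' % sorted(present - tracked))
        sys.exit(1 if tracked != present else 0)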

-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Need unique ID for every Network Service

2014-03-03 Thread Srikanth Kumar Lingala
Yes, I will send a mail to Eugene Nikanorov requesting to add this to the
agenda of the coming weekly discussion.
The detailed requirement is as follows:
In the current implementation, only one LBaaS configuration is possible per
tenant. It would be better to allow multiple LBaaS configurations for each tenant.
We are planning to configure haproxy as a VM in a Network Service Chain. In a
chain, there may be multiple Network Services of the same type (e.g. haproxy).
For that, each Network Service should have a unique ID (UUID) within a tenant.

Regards,
Srikanth.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Saturday, March 01, 2014 1:22 AM
To: Veera Reddy
Cc: Lingala Srikanth Kumar-B37208; openstack-dev@lists.openstack.org; openstack
Subject: Re: [Openstack] Need unique ID for every Network Service

Hi y'all!

The ongoing debate in the LBaaS group is whether the concept of a 
'Loadbalancer' needs to exist  as an entity. If it is decided that we need it, 
I'm sure it'll have a unique ID. (And please feel free to join the discussion 
on this as well, eh!)

Stephen

On Thu, Feb 27, 2014 at 10:27 PM, Veera Reddy veerare...@gmail.com wrote:
Hi,

Good idea to have unique id for each entry of network functions.
So that we can configure multiple network function with different configuration.


Regards,
Veera.

On Fri, Feb 28, 2014 at 11:23 AM, Srikanth Kumar Lingala srikanth.ling...@freescale.com wrote:
Hi-
In the existing Neutron, we have FWaaS, LBaaS, VPNaaS …etc.
In FWaaS, each Firewall has its own UUID.
It is good to have a unique ID [UUID] for LBaaS also.

Please share your comments on the above.

Regards,
Srikanth.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Need unique ID for every Network Service

2014-03-03 Thread Srikanth Kumar Lingala
Hi Nikanorov,
Please add the below requirement to the agenda of coming weekly discussion on 
Thursday.

Regards,
Srikanth.

From: Srikanth Kumar Lingala [mailto:srikanth.ling...@freescale.com]
Sent: Monday, March 03, 2014 5:18 PM
To: Stephen Balukoff; Veera Reddy
Cc: openstack-dev@lists.openstack.org; openstack
Subject: Re: [openstack-dev] [Openstack] Need unique ID for every Network 
Service

Yes..I will send a mail to Eugene Nikanorov, requesting to add this to the 
agenda in the coming weekly discussion.
Detailed requirement is as follows:
In the current implementation, only one LBaaS configuration is possible per 
tenant. It is better to have multiple LBaaS configurations for each tenant.
We are planning to configure haproxy as VM in a Network Service Chain. In a 
chain, there may be possibility of multiple Network Services of the same type 
(For Eg: Haproxy). For that, each Network Service should have a Unique ID 
(UUID) for a tenant.

Regards,
Srikanth.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Saturday, March 01, 2014 1:22 AM
To: Veera Reddy
Cc: Lingala Srikanth Kumar-B37208; 
openstack-dev@lists.openstack.org; 
openstack
Subject: Re: [Openstack] Need unique ID for every Network Service

Hi y'all!

The ongoing debate in the LBaaS group is whether the concept of a 
'Loadbalancer' needs to exist  as an entity. If it is decided that we need it, 
I'm sure it'll have a unique ID. (And please feel free to join the discussion 
on this as well, eh!)

Stephen

On Thu, Feb 27, 2014 at 10:27 PM, Veera Reddy veerare...@gmail.com wrote:
Hi,

Good idea to have unique id for each entry of network functions.
So that we can configure multiple network function with different configuration.


Regards,
Veera.

On Fri, Feb 28, 2014 at 11:23 AM, Srikanth Kumar Lingala srikanth.ling...@freescale.com wrote:
Hi-
In the existing Neutron, we have FWaaS, LBaaS, VPNaaS …etc.
In FWaaS, each Firewall has its own UUID.
It is good to have a unique ID [UUID] for LBaaS also.

Please share your comments on the above.

Regards,
Srikanth.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Dina Belova
Hello Joe, Thierry, Sylvain.

Joe, I pretty much agree with how Sylvain described the Climate idea. I
hope it is more understandable now.

Thierry, thanks for answering. I'm sorry I did not send this email earlier :)

Thanks
Dina


On Mon, Mar 3, 2014 at 4:42 PM, Sylvain Bauza sylvain.ba...@bull.netwrote:

 Hi Joe,

 Thanks for your reply, I'll try to further explain.


 Le 03/03/2014 05:33, Joe Gordon a écrit :

  On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com
 wrote:

 Hello, folks!

 I'd like to request Climate project review for incubation. Here is
 official
 incubation application:

 https://wiki.openstack.org/wiki/Climate/Incubation

 I'm unclear on what Climate is trying to solve. I read the 'Detailed
 Description' from the link above, and it states Climate is trying to
 solve two uses cases (and the more generalized cases of those).

 1) Compute host reservation (when user with admin privileges can
 reserve hardware resources that are dedicated to the sole use of a
 tenant)
 2) Virtual machine (instance) reservation (when user may ask
 reservation service to provide him working VM not necessary now, but
 also in the future)

 Climate is born from the idea of dedicating compute resources to a single
 tenant or user for a certain amount of time, which was not yet implemented
 in Nova: how as an user, can I ask Nova for one compute host with certain
 specs to be exclusively allocated to my needs, starting in 2 days and being
 freed in 5 days ?

 Albeit the exclusive resource lock can be managed on the Nova side, there
 is currently no possibilities to ensure resource planner.

 Of course, and that's why we think Climate can also stand by its own
 Program, resource reservation can be seen on a more general way : what
 about reserving an Heat stack with its volume and network nested resources ?


  You want to support being able to reserve an instance in the future.
 As a cloud operator how do I take advantage of that information? As a
 cloud consumer, what is the benefit? Today OpenStack supports both
 uses cases, except it can't request an Instance for the future.


 Again, that's not only reserving an instance, but rather a complex mix of
 resources. At the moment, we do provide way to reserve virtual instances by
 shelving/unshelving them at the lease start, but we also give possibility
 to provide dedicated compute hosts. Considering it, the logic of resource
 allocation and scheduling (take the word as resource planner, in order not
 to confuse with Nova's scheduler concerns) and capacity planning is too big
 to fail under the Compute's umbrella, as it has been agreed within the
 Summit talks and periodical threads.

 From the user standpoint, there are multiple ways to integrate with
 Climate in order to get Capacity Planning capabilities. As you perhaps
 noticed, the workflow for reserving resources is different from one plugin
 to another. Either we say the user has to explicitly request for dedicated
 resources (using Climate CLI, see dedicate compute hosts allocation), or we
 implicitly integrate resource allocation from the Nova API (see virtual
 instance API hook).

 We truly accept our current implementation as a first prototype, where
 scheduling decisions can be improved (possibly thanks to some tight
 integration with a future external Scheduler aaS, hello Gantt), where also
 resource isolation and preemption must also be integrated with subprojects
 (we're currently seeing how to provision Cinder volumes and Neutron routers
 and nets), but anyway we still think there is a (IMHO big) room for
 resource and capacity management on its own project.

 Hoping it's clearer now,
 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Sylvain Bauza

Forgot to put openstack-tc@ in the loop... Sorry for resending this email.

-Sylvain

Le 03/03/2014 13:42, Sylvain Bauza a écrit :

Hi Joe,

Thanks for your reply, I'll try to further explain.


Le 03/03/2014 05:33, Joe Gordon a écrit :
On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com 
wrote:

Hello, folks!

I'd like to request Climate project review for incubation. Here is 
official

incubation application:

https://wiki.openstack.org/wiki/Climate/Incubation

I'm unclear on what Climate is trying to solve. I read the 'Detailed
Description' from the link above, and it states Climate is trying to
solve two uses cases (and the more generalized cases of those).

1) Compute host reservation (when user with admin privileges can
reserve hardware resources that are dedicated to the sole use of a
tenant)
2) Virtual machine (instance) reservation (when user may ask
reservation service to provide him working VM not necessary now, but
also in the future)
Climate is born from the idea of dedicating compute resources to a 
single tenant or user for a certain amount of time, which was not yet 
implemented in Nova: how as an user, can I ask Nova for one compute 
host with certain specs to be exclusively allocated to my needs, 
starting in 2 days and being freed in 5 days ?


Albeit the exclusive resource lock can be managed on the Nova side, 
there is currently no possibilities to ensure resource planner.


Of course, and that’s why we think Climate can also stand by its own 
Program, resource reservation can be seen on a more general way : what 
about reserving an Heat stack with its volume and network nested 
resources ?



You want to support being able to reserve an instance in the future.
As a cloud operator how do I take advantage of that information? As a
cloud consumer, what is the benefit? Today OpenStack supports both
uses cases, except it can't request an Instance for the future.


Again, that’s not only reserving an instance, but rather a complex mix 
of resources. At the moment, we do provide way to reserve virtual 
instances by shelving/unshelving them at the lease start, but we also 
give possibility to provide dedicated compute hosts. Considering it, 
the logic of resource allocation and scheduling (take the word as 
resource planner, in order not to confuse with Nova’s scheduler 
concerns) and capacity planning is too big to fail under the Compute’s 
umbrella, as it has been agreed within the Summit talks and periodical 
threads.


From the user standpoint, there are multiple ways to integrate with 
Climate in order to get Capacity Planning capabilities. As you perhaps 
noticed, the workflow for reserving resources is different from one 
plugin to another. Either we say the user has to explicitly request 
for dedicated resources (using Climate CLI, see dedicate compute hosts 
allocation), or we implicitly integrate resource allocation from the 
Nova API (see virtual instance API hook).


We truly accept our current implementation as a first prototype, where 
scheduling decisions can be improved (possibly thanks to some tight 
integration with a future external Scheduler aaS, hello Gantt), where 
also resource isolation and preemption must also be integrated with 
subprojects (we’re currently seeing how to provision Cinder volumes 
and Neutron routers and nets), but anyway we still think there is a 
(IMHO big) room for resource and capacity management on its own project.


Hoping it's clearer now,
-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6]

2014-03-03 Thread Randy Tuttle
Hi Yuhan

I am a bit familiar with this change, as we tried it in our PoC [1] for IPv6
dual-stack. As best I understand it, it achieves a similar function by
allowing multiple external bridges (and, therefore, external interfaces).
The feature I am providing here achieves approximately the same thing
(explicitly, a dual-stack arrangement) on the *same* interface, thereby
requiring only a *single* external bridge (and *single* external interface).

I think it just gives more flexibility. We can surely discuss.

Thanks for your comments!! Good discussion.
Randy

[1] http://www.nephos6.com/pdf/OpenStack-Havana-on-IPv6.pdf


On Mon, Mar 3, 2014 at 2:33 AM, Xuhan Peng pengxu...@gmail.com wrote:

 Randy,

 I haven't checked the code detail yet, but I have a general question about
 this blueprint. Considering multiple external networks on L3 agent is
 supported [1]. Do you think it's still necessary to use separate subnets on
 one external network for IPv4 and IPv6 instead of using two external
 networks?

 [1] https://review.openstack.org/#/c/59359/

 Thanks!
 Xuhan



 On Mon, Mar 3, 2014 at 10:47 AM, Randy Tuttle randy.m.tut...@gmail.comwrote:

 Hi all.

 Just submitted the code[1] for supporting dual-stack (IPv6 and IPv4) on
 an external gateway port of a tenant's router (l3_agent). It implements [2].

 Please, if you would, have a look and provide any feedback. I would be
 grateful.

 Cheers,
 Randy

 [1]. https://review.openstack.org/#/c/77471/
 [2].
 https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-03 Thread Xuhan Peng
I created a new blueprint [1], triggered by the requirement to allow an IPv6
Router Advertisement security group rule on compute nodes in my ongoing code
review [2].

Currently, only the security group rule direction, protocol, ethertype and port
range are supported by the neutron security group rule data structure. To allow
Router Advertisements coming from the network node or a provider network to reach
VMs on compute nodes, we need to specify the ICMP type, so that only RAs from known
hosts (the network node's dnsmasq-bound IP or a known provider gateway) are allowed.

To implement this and keep the implementation extensible, maybe we can add
an additional table named SecurityGroupRuleData, with Key, Value and ID columns.
For the ICMP-type RA filter, we can add key='icmp-type', value='134', and the
associated security group rule to the table. When other ICMP type filters are needed,
similar records can be stored. This table could also be used for other
firewall rule key/values.
An API change is also needed.
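To make the proposal concrete, here is a rough sketch of what such a table could look like (my reading of the idea, with assumed table and column names; not committed code):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class SecurityGroupRuleData(Base):
        # Arbitrary key/value pairs attached to a security group rule, e.g.
        # key='icmp-type', value='134' for the RA filter described above.
        __tablename__ = 'securitygrouprule_data'
        id = sa.Column(sa.String(36), primary_key=True)
        security_group_rule_id = sa.Column(
            sa.String(36), sa.ForeignKey('securitygrouprules.id'),
            nullable=False)
        key = sa.Column(sa.String(255), nullable=False)
        value = sa.Column(sa.String(255))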

Please let me know your comments about this blueprint.

[1]
https://blueprints.launchpad.net/neutron/+spec/security-group-icmp-type-filter
[2] https://review.openstack.org/#/c/72252/

Thank you!
Xuhan Peng
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Need unique ID for every Network Service

2014-03-03 Thread WICKES, ROGER
Maybe I am misunderstanding the debate, but imho Every OpenStack Service (XaaS) 
needs to be listed in the Service Catalog as being available (and stable and 
tested), and every instance of that service, when started, needs a service ID, 
and every X created by that service needs a UUID aka object id. This is 
regardless of how many of them are per tenant or host or whatever. This 
discussion may be semantics, but just to be clear, LBaaS is the service that is 
called to create an LB. 

On the surface, it makes sense that you would only have one Service running per 
tenant; every object or instantiation created by that service (a Load Balancer, 
in this case) must have a UUID. I can't imagine why you would want multiple 
LBaaS services running at the same time, but again my imagination is limited. I 
am sure someone else has more imagination, such as a tenant having two vApps 
located on hosts in two different data centers, and they want an LBaaS in each 
data center since their inventory system or whatever is restricted to a single 
data center. If there were like two or three LBaaS' running, how would Neutron 
or Heat etc. know which one to call (criteria) when the network changes? It 
would be like having two butlers. 

A UUID on each Load Balancer is needed for alarming, callbacks, service 
assurance, service delivery, service availability monitoring and reporting, 
billing, compliance audits, and simply being able to modify the service. If 
there is an n-ary tuple relationship between LB and anything, you might be 
inclined to restrict only one LB per vApp. However, for ultra-high volume and 
high-availability apps we may want cross-redundant LB's with a third LB in 
front of the first two; that way if one gets overloaded or crashes, we can 
route to the other. A user might want to even mix and match hard and soft LB's 
in a hybrid environment. So, even in that use case, restricting the number of 
LB's or their tupleness is restrictive. 

I also want to say to those who are struggling with reasonable n-ary 
relationship modeling: This is just a problem with global app development, 
where there are so many use cases out there. It's tough to never say never, as 
in, you would never want more than one LBaaS per tenant. 

[Roger] --
From: Srikanth Kumar Lingala [mailto:srikanth.ling...@freescale.com]
Sent: Monday, March 03, 2014 5:18 PM
To: Stephen Balukoff; Veera Reddy
Cc: openstack-dev@lists.openstack.org; openstack
Subject: Re: [openstack-dev] [Openstack] Need unique ID for every Network 
Service

Yes..I will send a mail to Eugene Nikanorov, requesting to add this to the 
agenda in the coming weekly discussion.
Detailed requirement is as follows:
In the current implementation, only one LBaaS configuration is possible per 
tenant. It is better to have multiple LBaaS configurations for each tenant.
We are planning to configure haproxy as VM in a Network Service Chain. In a 
chain, there may be possibility of multiple Network Services of the same type 
(For Eg: Haproxy). For that, each Network Service should have a Unique ID 
(UUID) for a tenant.

Regards,
Srikanth.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Saturday, March 01, 2014 1:22 AM
To: Veera Reddy
Cc: Lingala Srikanth Kumar-B37208; 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org; 
openstack
Subject: Re: [Openstack] Need unique ID for every Network Service

Hi y'all!

The ongoing debate in the LBaaS group is whether the concept of a 
'Loadbalancer' needs to exist  as an entity. If it is decided that we need it, 
I'm sure it'll have a unique ID. (And please feel free to join the discussion 
on this as well, eh!)

Stephen

On Thu, Feb 27, 2014 at 10:27 PM, Veera Reddy veerare...@gmail.com wrote:
Hi,

Good idea to have unique id for each entry of network functions.
So that we can configure multiple network function with different configuration.


Regards,
Veera.

On Fri, Feb 28, 2014 at 11:23 AM, Srikanth Kumar Lingala srikanth.ling...@freescale.com wrote:
Hi-
In the existing Neutron, we have FWaaS, LBaaS, VPNaaS …etc.
In FWaaS, each Firewall has its own UUID.
It is good to have a unique ID [UUID] for LBaaS also.

Please share your comments on the above.

Regards,
Srikanth.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Blueprint for password decryption status

2014-03-03 Thread Alessandro Pilotti
Hi guys,

While checking the status of the patch at [1] by arezmerita, I noticed that it
hasn't received any reviews since the last upload on January 31st (after a
comprehensive round of reviews that started on December 10th).

During a chat on #openstack-horizon, jpich noticed that the BP [2] was not
targeted, so it didn't get on the radar for any milestone. The milestone has
now been set to Icehouse.

I know that it’s really late for I3, but since the patch has already been 
reviewed and all the concerns / suggestions have been addressed, I wonder if 
this could still be considered for merging.


The blueprint addresses a fundamental issue for Windows guests VMs: handling 
passwords in a secure way.

Password encryption is a feature that has been present in Nova since Grizzly (see
nova get-password [3]), but most users are still using the totally unsafe
clear-text admin_pass option, for the simple reason that the dashboard does not
provide a way to manage password decryption.
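For context, this is roughly what the decryption amounts to (a hedged sketch of the mechanism behind nova get-password, assuming the usual scheme where the generated password is stored RSA-encrypted with the instance keypair's public key; this is an illustration, not Ala's Horizon code):

    import base64
    import subprocess

    def decrypt_password(encrypted_b64, private_key_path):
        # Equivalent of: base64 -d | openssl rsautl -decrypt -inkey <key>
        proc = subprocess.Popen(
            ['openssl', 'rsautl', '-decrypt', '-inkey', private_key_path],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        out, _ = proc.communicate(base64.b64decode(encrypted_b64))
        return out

Doing this in the dashboard without ever persisting the private key is exactly the kind of handling the blueprint is about.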

Ala's patch provides IMO a proper solution for this issue, so it'd be great if
we could have it in Icehouse, without having to wait for Juno.


Thanks,

Alessandro

P.S.: Note: I’m not the owner of the patch or the blueprint.

[1] https://review.openstack.org/#/c/61032/
[2] 
https://blueprints.launchpad.net/horizon/+spec/decrypt-and-display-vm-generated-password
[3] http://docs.openstack.org/user-guide/content/novaclient_commands.html





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-03-03 Thread Steve Gordon
- Original Message -
 Hi Steve,
 
 On Sat, 1 Mar 2014 16:14:19 -0500 (EST)
 Steve Gordon sgor...@redhat.com wrote:

[SNIP]

  My feeling both with my product hat and my upstream documentation
  contributor hat on knowing some of the issues we've had there in the
  past is that one release cycle of deprecation for v2 would not be
  enough notice for this kind of change, particularly if it's also a
  cycle where v3 is still being pretty actively worked on and a moving
  target for the rest of the ecosystem (which it looks like will be the
  case for Juno, if we continue in the direction of doing v3).
 
 Thank you to you and everyone else for their feedback around this. With
 the exception of XML deprecation I don't think anyone was considering a
 single cycle for deprecation for V2 would be reasonable. But numbers
 around how long would be reasonable would be helpful to guide as to
 what path we take.

Right, I just wanted to note that explicitly. It was pointed out elsewhere in 
the thread that decisions on when to mark v2 deprecated, and later when to 
actually remove it, should probably be based on readiness criteria rather than 
time-based which makes sense. I still think a time-based *minimum* period 
between deprecation and removal makes sense for the reasons I stated above - 
beyond that a criteria-based evaluation would still apply (I guess effectively 
I'm saying here that one of the criteria for v2 removal should be X cycles of 
deprecation for people to update consuming apps etc.).

Additional criteria I think are important are support for v3 from other 
OpenStack projects that interact with Nova via the API (including the clients), 
and support in the most common SDKs (this is I recognize a sticky one because 
then you are blocking on changes that aren't necessarily to OpenStack 
projects). These would seem obvious but don't seem to have been nailed in some 
previous API bumps within OpenStack.

 I would be interested in your opinion on the impact of a V2 version
 release which had backwards incompatibility in only one area - and
 that is input validation. So only apps/SDKs which are currently misusing
 the API (I think the most common problem would be sending extraneous
 data which is currently ignored) would be adversely affected. Other
 cases where where the API was used correctly would be unaffected.

 In this kind of scenario would we need to maintain the older V2 version
 where there is poor input validation as long? Or would the V2 version
 with strong input validation be sufficient?

This is a tricky one because people who have applications or SDKs that are 
misusing the API are unlikely to realize they are misusing it until you 
actually make the switch to stricter validation by default and they start 
getting errors back. We also don't as far as I know have data at a deep enough 
level on exactly how people are using the APIs to base a decision on - though 
perhaps some of the cloud providers running OpenStack might be able to help 
here. I think at best we'd be able to evaluate the common/well known SDKs to 
ensure they aren't guilty (and if they are, to fix it) to gain some level of 
confidence but suspect that breaking the API contract in this fashion would 
still warrant a similar period of deprecation from the point of view of 
distributors and cloud providers.
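As a hedged illustration of the failure mode (not Nova's actual request schemas), this is roughly what the switch to strong input validation means for a client that today sends an extra, silently-ignored field:

    import jsonschema

    # Assumed toy schema; the real ones live in Nova's validation framework.
    server_schema = {
        'type': 'object',
        'properties': {'name': {'type': 'string'},
                       'flavorRef': {'type': 'string'}},
        'required': ['name', 'flavorRef'],
        'additionalProperties': False,   # the "strong validation" behaviour
    }

    request = {'name': 'vm1', 'flavorRef': '2', 'extra_field': 'ignored today'}
    try:
        jsonschema.validate(request, server_schema)
    except jsonschema.ValidationError as exc:
        print('rejected under strict validation: %s' % exc.message)

Clients like this work fine against the lenient V2 API and only start seeing errors once additionalProperties is enforced, which is why it is hard to know in advance who would break.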

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 4:42 AM, Sylvain Bauza sylvain.ba...@bull.net wrote:
 Hi Joe,

 Thanks for your reply, I'll try to further explain.


 Le 03/03/2014 05:33, Joe Gordon a écrit :

 On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com wrote:

 Hello, folks!

 I'd like to request Climate project review for incubation. Here is
 official
 incubation application:

 https://wiki.openstack.org/wiki/Climate/Incubation

 I'm unclear on what Climate is trying to solve. I read the 'Detailed
 Description' from the link above, and it states Climate is trying to
 solve two uses cases (and the more generalized cases of those).

 1) Compute host reservation (when user with admin privileges can
 reserve hardware resources that are dedicated to the sole use of a
 tenant)
 2) Virtual machine (instance) reservation (when user may ask
 reservation service to provide him working VM not necessary now, but
 also in the future)

 Climate is born from the idea of dedicating compute resources to a single
 tenant or user for a certain amount of time, which was not yet implemented
 in Nova: how as an user, can I ask Nova for one compute host with certain
 specs to be exclusively allocated to my needs, starting in 2 days and being
 freed in 5 days ?

 Albeit the exclusive resource lock can be managed on the Nova side, there is
 currently no possibilities to ensure resource planner.

 Of course, and that's why we think Climate can also stand by its own
 Program, resource reservation can be seen on a more general way : what about
 reserving an Heat stack with its volume and network nested resources ?


 You want to support being able to reserve an instance in the future.
 As a cloud operator how do I take advantage of that information? As a
 cloud consumer, what is the benefit? Today OpenStack supports both
 uses cases, except it can't request an Instance for the future.


 Again, that's not only reserving an instance, but rather a complex mix of
 resources. At the moment, we do provide way to reserve virtual instances by
 shelving/unshelving them at the lease start, but we also give possibility to
 provide dedicated compute hosts. Considering it, the logic of resource
 allocation and scheduling (take the word as resource planner, in order not
 to confuse with Nova's scheduler concerns) and capacity planning is too big
 to fail under the Compute's umbrella, as it has been agreed within the
 Summit talks and periodical threads.

Capacity planning not falling under Compute's umbrella is news to me;
are you referring to Gantt and scheduling in general? Perhaps I don't
fully understand what 'capacity planning' actually covers.


 From the user standpoint, there are multiple ways to integrate with Climate
 in order to get Capacity Planning capabilities. As you perhaps noticed, the
 workflow for reserving resources is different from one plugin to another.
 Either we say the user has to explicitly request for dedicated resources
 (using Climate CLI, see dedicate compute hosts allocation), or we implicitly
 integrate resource allocation from the Nova API (see virtual instance API
 hook).

I don't see how the way Climate reserves resources is relevant to the user.


 We truly accept our current implementation as a first prototype, where
 scheduling decisions can be improved (possibly thanks to some tight
 integration with a future external Scheduler aaS, hello Gantt), where also
 resource isolation and preemption must also be integrated with subprojects
 (we're currently seeing how to provision Cinder volumes and Neutron routers
 and nets), but anyway we still think there is a (IMHO big) room for resource
 and capacity management on its own project.

 Hoping it's clearer now,

Unfortunately that doesn't clarify things for me.

From the user's point of view, what is the benefit of making a
reservation for the future, versus what Nova supports today, asking for
an instance in the present?

Same thing from the operator's perspective,  what is the benefit of
taking reservations for the future?

This whole model is unclear to me because as far as I can tell no
other clouds out there support this model, so I have nothing to
compare it to.

 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 1.13.0 released

2014-03-03 Thread John Dickinson
I'm pleased to announce that OpenStack Swift 1.13 has been released.
This release has some important new features (highlighted below), and
it also serves as a good checkpoint before Swift's final release in
the Icehouse cycle.

Launchpad page for this release: https://launchpad.net/swift/icehouse/1.13.0

Highlights from the Swift 1.13 changelog
(https://github.com/openstack/swift/blob/master/CHANGELOG):

* Account-level ACLs and ACL format v2

  Accounts now have a new privileged header to represent ACLs or
  any other form of account-level access control. The value of
  the header is a JSON dictionary string to be interpreted by the
  auth system. A reference implementation is given in TempAuth.
  Please see the full docs at
  http://swift.openstack.org/overview_auth.html

* Moved all DLO functionality into middleware

  The proxy will automatically insert the dlo middleware at an
  appropriate place in the pipeline the same way it does with the
  gatekeeper middleware. Clusters will still support DLOs after upgrade
  even with an old config file that doesn't mention dlo at all.

* Remove python-swiftclient dependency

Please read the full changelog for a full report of this release. As
always, this release is considered production-ready, and deployers can
upgrade to it with no client downtime in their cluster.

I'd like to thank the following new contributors to Swift, each of
whom contributed to the project for the first time during this
release:

 - Luis de Bethencourt (l...@debethencourt.com)
 - Florent Flament (florent.flament-...@cloudwatt.com)
 - David Moreau Simard (dmsim...@iweb.com)
 - Shane Wang (shane.w...@intel.com)

Thank you to everyone who has contributed.

--John





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-03-03 Thread Andrew Laski

On 03/01/14 at 07:24am, Jay Lau wrote:

Hey,

Sorry to bring this up again. There are also some discussions here:
http://markmail.org/message/5zotly4qktaf34ei

You can also search [Runtime Policy] in your email list.

Not sure if we can put this into Gantt and enable Gantt to provide both initial
placement and run-time policies like HA, load balancing, etc.


I don't have an opinion at the moment as to whether or not this sort of 
functionality belongs in Gantt, but there's still a long way to go just 
to get the scheduling functionality we want out of Gantt and I would 
like to see the focus stay on that.






Thanks,

Jay



2014-02-21 21:31 GMT+08:00 Russell Bryant rbry...@redhat.com:


On 02/20/2014 06:04 PM, Sean Dague wrote:
 On 02/20/2014 05:32 PM, Russell Bryant wrote:
 On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
 Hi,

 Would like to know if there's any interest on having
 'automatic evacuation' feature when a compute node goes down. I
 found 3 bps related to this topic: [1] Adding a periodic task
 and using ServiceGroup API for compute-node status [2] Using
 ceilometer to trigger the evacuate api. [3] Include some kind
 of H/A plugin  by using a 'resource optimization service'

 Most of those BP's have comments like 'this logic should not
 reside in nova', so that's why i am asking what should be the
 best approach to have something like that.

 Should this be ignored, and just rely on external monitoring
 tools to trigger the evacuation? There are complex scenarios
 that require lot of logic that won't fit into nova nor any
 other OS component. (For instance: sometimes it will be faster
 to reboot the node or compute-nova than starting the
 evacuation, but if it fail X times then trigger an evacuation,
 etc )

 Any thought/comment// about this?

 Regards Leandro

 [1]

https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken


[2]

https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically


[3]

https://blueprints.launchpad.net/nova/+spec/resource-optimization-service



My opinion is that I would like to see this logic done outside of Nova.

 Right now Nova is the only service that really understands the
 compute topology of hosts, though it's understanding of liveness is
 really not sufficient to handle this kind of HA thing anyway.

 I think that's the real problem to solve. How to provide
 notifications to somewhere outside of Nova on host death. And the
 question is, should Nova be involved in just that part, keeping
 track of node liveness and signaling up for someone else to deal
 with it? Honestly that part I'm more on the fence about. Because
 putting another service in place to just handle that monitoring
 seems overkill.

 I 100% agree that all the policy, reacting, logic for this should
 be outside of Nova. Be it Heat or somewhere else.

I think we agree.  I'm very interested in continuing to enhance Nova
to make sure that the thing outside of Nova has all of the APIs it
needs to get the job done.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Thanks,

Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Anne Gentle
On Mon, Mar 3, 2014 at 8:20 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Mar 3, 2014 at 4:42 AM, Sylvain Bauza sylvain.ba...@bull.net
 wrote:
  Hi Joe,
 
  Thanks for your reply, I'll try to further explain.
 
 
  Le 03/03/2014 05:33, Joe Gordon a écrit :
 
  On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com
 wrote:
 
  Hello, folks!
 
  I'd like to request Climate project review for incubation. Here is
  official
  incubation application:
 
  https://wiki.openstack.org/wiki/Climate/Incubation
 
  I'm unclear on what Climate is trying to solve. I read the 'Detailed
  Description' from the link above, and it states Climate is trying to
  solve two uses cases (and the more generalized cases of those).
 
  1) Compute host reservation (when user with admin privileges can
  reserve hardware resources that are dedicated to the sole use of a
  tenant)
  2) Virtual machine (instance) reservation (when user may ask
  reservation service to provide him working VM not necessary now, but
  also in the future)
 
  Climate is born from the idea of dedicating compute resources to a single
  tenant or user for a certain amount of time, which was not yet
 implemented
  in Nova: how as an user, can I ask Nova for one compute host with certain
  specs to be exclusively allocated to my needs, starting in 2 days and
 being
  freed in 5 days ?
 
  Albeit the exclusive resource lock can be managed on the Nova side,
 there is
  currently no possibilities to ensure resource planner.
 
  Of course, and that's why we think Climate can also stand by its own
  Program, resource reservation can be seen on a more general way : what
 about
  reserving an Heat stack with its volume and network nested resources ?
 
 
  You want to support being able to reserve an instance in the future.
  As a cloud operator how do I take advantage of that information? As a
  cloud consumer, what is the benefit? Today OpenStack supports both
  uses cases, except it can't request an Instance for the future.
 
 
  Again, that's not only reserving an instance, but rather a complex mix of
  resources. At the moment, we do provide way to reserve virtual instances
 by
  shelving/unshelving them at the lease start, but we also give
 possibility to
  provide dedicated compute hosts. Considering it, the logic of resource
  allocation and scheduling (take the word as resource planner, in order
 not
  to confuse with Nova's scheduler concerns) and capacity planning is too
 big
  to fail under the Compute's umbrella, as it has been agreed within the
  Summit talks and periodical threads.

 Capacity planning not falling under Compute's umbrella is news to me,
 are you referring to Gantt and scheduling in general? Perhaps I don't
 fully understand the full extent of what 'capacity planning' actually
 is.

 
  From the user standpoint, there are multiple ways to integrate with
 Climate
  in order to get Capacity Planning capabilities. As you perhaps noticed,
 the
  workflow for reserving resources is different from one plugin to another.
  Either we say the user has to explicitly request for dedicated resources
  (using Climate CLI, see dedicate compute hosts allocation), or we
 implicitly
  integrate resource allocation from the Nova API (see virtual instance API
  hook).

 I don't see how Climate reserves resources is relevant to the user.

 
  We truly accept our current implementation as a first prototype, where
  scheduling decisions can be improved (possibly thanks to some tight
  integration with a future external Scheduler aaS, hello Gantt), where
 also
  resource isolation and preemption must also be integrated with
 subprojects
  (we're currently seeing how to provision Cinder volumes and Neutron
 routers
  and nets), but anyway we still think there is a (IMHO big) room for
 resource
  and capacity management on its own project.
 
  Hoping it's clearer now,

 Unfortunately that doesn't clarify things for me.

 From the user's point of view what is the benefit from making a
 reservation in the future? Versus what Nova supports today, asking for
 an instance in the present.

 Same thing from the operator's perspective,  what is the benefit of
 taking reservations for the future?

 This whole model is unclear to me because as far as I can tell no
 other clouds out there support this model, so I have nothing to
 compare it to.


Hi Joe,
I think it's meant to save consumers money by pricing instances based on
today's prices.

https://aws.amazon.com/ec2/purchasing-options/reserved-instances/

Anne


   -Sylvain
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing 

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-03 Thread Jay Pipes
On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote:
 Having done some work with MySQL (specifically around similar data
 sets) and discussing the changes with some former coworkers (MySQL
 experts) I am inclined to believe the move from varchar  to binary
 absolutely would increase performance like this.
 
 
 However, I would like to get some real benchmarks around it and if it
 really makes a difference we should get a smart UUID type into the
 common SQL libs (can pgsql see a similar benefit? DB2?) I think this
 conversation should be split off from the keystone one at hand - I
 don't want valuable information / discussions to get lost.

No disagreement on either point. However, this should be done after the
standardization to a UUID user_id in Keystone, as a separate performance
improvement patch. Agree?
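
For reference, the kind of "smart UUID type" Morgan mentions is usually just a
thin TypeDecorator over a 16-byte binary column; a rough sketch (illustrative
only - not an existing oslo or Keystone type) could look like:

    import uuid

    from sqlalchemy import types


    class BinaryUUID(types.TypeDecorator):
        """Store UUIDs as 16 raw bytes instead of a 32+ character string."""

        impl = types.BINARY(16)

        def process_bind_param(self, value, dialect):
            # Accept either a uuid.UUID or its string/hex form.
            if value is None:
                return None
            if not isinstance(value, uuid.UUID):
                value = uuid.UUID(value)
            return value.bytes

        def process_result_value(self, value, dialect):
            # Hand back the canonical string form used elsewhere in the code.
            if value is None:
                return None
            return str(uuid.UUID(bytes=value))

Benchmarking a column like that against the current VARCHAR(64) on MySQL,
PostgreSQL and DB2 would answer the "does it really make a difference"
question.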

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Toward SQLAlchemy 0.9.x compatibility everywhere for Icehouse

2014-03-03 Thread Thomas Goirand
On 03/03/2014 01:14 PM, Thomas Goirand wrote:
 On 03/03/2014 11:24 AM, Thomas Goirand wrote:
 It looks like my patch fixes the first unit test failure. Though we
 still need a fix for the 2nd problem:
 AttributeError: 'module' object has no attribute 'AbstractType'
 
 Replying to myself...
 
 It looks like AbstractType is not needed except for backwards
 compatibility in SQLA 0.7  0.8, and it's gone away in 0.9. See:
 
 http://docs.sqlalchemy.org/en/rel_0_7/core/types.html
 http://docs.sqlalchemy.org/en/rel_0_8/core/types.html
 http://docs.sqlalchemy.org/en/rel_0_9/core/types.html
 
 (reference to AbstractType is gone from the 0.9 doc)
 
 Therefore, I'm tempted to just remove lines 336 and 337, though I am
 unsure of what was intended in this piece of code.
 
 Your thoughts?
 
 Thomas

Seems Sean already fixed that one, and it was lost in the git review
process (with patches going back and forth). I added it again as a
separate patch, and the unit tests are now OK. It just passed the
gating tests! :)
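
For anyone carrying similar code against both branches, the usual trick is to
stop referencing AbstractType directly and fall back to TypeEngine, which
exists in 0.7, 0.8 and 0.9 alike. A minimal sketch, assuming the attribute is
only needed for isinstance-style checks:

    import sqlalchemy.types as sa_types

    # AbstractType exists in SQLAlchemy 0.7/0.8 but was removed in 0.9,
    # where TypeEngine is the common base class for type objects.
    _TYPE_BASE = getattr(sa_types, 'AbstractType', sa_types.TypeEngine)


    def is_sqla_type(obj):
        """True if obj is a SQLAlchemy type object on SQLA 0.7, 0.8 or 0.9."""
        return isinstance(obj, _TYPE_BASE)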

Cheers, and thanks to Sean and everyone else for the help, hoping to get
this series approved soon,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Sean Dague
On 03/02/2014 02:32 PM, Dina Belova wrote:
 Hello, folks!
 
 I'd like to request Climate project review for incubation. Here is
 official incubation application:
 
 https://wiki.openstack.org/wiki/Climate/Incubation
 
 Additionally due to the project scope and the roadmap, we don't see any
 currently existing OpenStack program that fits Climate. So we've
 prepared new program request too:
 
 https://wiki.openstack.org/wiki/Climate/Program
 
 TL;DR
 
 Our team is working on providing OpenStack with resource reservation
 opportunity in time-based manner, including close integration with all
 other OS projects.
 
 As Climate initiative is targeting to provide not only compute resources
 revervation, but also volumes, network resources, storage nodes
 reservation opportunity, we consider it is not about being a part of
 some existing OpenStack program.
 
 This initiative needs to become a part of completely new Resource
 Reservation Program, that aims to implement time-based cloud resource
 management.

At a high level this feels like this should be part of scheduling.
Scheduling might include resources you want right now, but it could
include resources you want in the future. It also makes sense for
scheduling to include deadlines, so that resources are reclaimed when
they expire. Especially given quota implications of asking for resources
now vs. resources that a user has reserved in the future. What happens
when a user has used all their quota in the present, and a future
reservation comes up for access?

Scheduling today is under compute. The proposal to pull Gantt out still
leaves it in the compute program. So I would naturally assume this
should live under compute. I could understand that if this  Gantt
emerged together and wanted to create a Resource Allocation program,
that might make sense. But I think that's way down the road.

However, that's just a quick look at this. I think it will be an
interesting discussion in how this effort moves forward.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-03-03 Thread laserjetyang
there are a lot of rules for HA or LB, so I think it might be a better idea
to scope the framework and leave the policy as plugins.


On Mon, Mar 3, 2014 at 10:30 PM, Andrew Laski andrew.la...@rackspace.comwrote:

 On 03/01/14 at 07:24am, Jay Lau wrote:

 Hey,

 Sorry to bring this up again. There are also some discussions here:
 http://markmail.org/message/5zotly4qktaf34ei

 You can also search [Runtime Policy] in your email list.

 Not sure if we can put this into Gantt and enable Gantt to provide both initial
 placement and run-time policies like HA, load balancing, etc.


 I don't have an opinion at the moment as to whether or not this sort of
 functionality belongs in Gantt, but there's still a long way to go just to
 get the scheduling functionality we want out of Gantt and I would like to
 see the focus stay on that.





 Thanks,

 Jay



 2014-02-21 21:31 GMT+08:00 Russell Bryant rbry...@redhat.com:

  On 02/20/2014 06:04 PM, Sean Dague wrote:
  On 02/20/2014 05:32 PM, Russell Bryant wrote:
  On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
  Hi,
 
  Would like to know if there's any interest on having
  'automatic evacuation' feature when a compute node goes down. I
  found 3 bps related to this topic: [1] Adding a periodic task
  and using ServiceGroup API for compute-node status [2] Using
  ceilometer to trigger the evacuate api. [3] Include some kind
  of H/A plugin  by using a 'resource optimization service'
 
  Most of those BP's have comments like 'this logic should not
  reside in nova', so that's why i am asking what should be the
  best approach to have something like that.
 
  Should this be ignored, and just rely on external monitoring
  tools to trigger the evacuation? There are complex scenarios
  that require lot of logic that won't fit into nova nor any
  other OS component. (For instance: sometimes it will be faster
  to reboot the node or compute-nova than starting the
  evacuation, but if it fail X times then trigger an evacuation,
  etc )
 
  Any thought/comment// about this?
 
  Regards Leandro
 
  [1]
 
 https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
 
 
 [2]
 
 https://blueprints.launchpad.net/nova/+spec/evacuate-
 instance-automatically
 
 
 [3]
 
 https://blueprints.launchpad.net/nova/+spec/resource-
 optimization-service
 
 
 
 My opinion is that I would like to see this logic done outside of Nova.
 
  Right now Nova is the only service that really understands the
  compute topology of hosts, though its understanding of liveness is
  really not sufficient to handle this kind of HA thing anyway.
 
  I think that's the real problem to solve. How to provide
  notifications to somewhere outside of Nova on host death. And the
  question is, should Nova be involved in just that part, keeping
  track of node liveness and signaling up for someone else to deal
  with it? Honestly that part I'm more on the fence about. Because
  putting another service in place to just handle that monitoring
  seems overkill.
 
  I 100% agree that all the policy, reacting, logic for this should
  be outside of Nova. Be it Heat or somewhere else.

 I think we agree.  I'm very interested in continuing to enhance Nova
 to make sure that the thing outside of Nova has all of the APIs it
 needs to get the job done.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Toward SQLAlchemy 0.9.x compatibility everywhere for Icehouse

2014-03-03 Thread David Ripton

On 03/02/2014 03:12 AM, Thomas Goirand wrote:

On 03/02/2014 12:03 PM, Thomas Goirand wrote:



I hope to get support from core reviewers here, so that we can fix-up
the SQLA 0.9.x compat ASAP, preferably before b3. Of course, I'll make
sure all we do works with both 0.8 and 0.9 version of SQLA. Is there
anyone still running with the old 0.7? If yes, then we can try to
continue validating OpenStack against it as well.


I think we should continue supporting SQLAlchemy 0.7 for a bit longer, 
as that's what some distributions still package.  (Of course it's 
possible to work around that by pulling from pip instead, but not 
everyone will.)



FYI, the below patches need review:

SQLAlchemy-migrate:
https://review.openstack.org/#/c/77387/
https://review.openstack.org/#/c/77388/
https://review.openstack.org/#/c/77396/
https://review.openstack.org/#/c/77397/


I'll review all these today.

BTW, I just pushed sqlalchemy-migrate 0.8.4 with DB2 support.  I agree 
that we should push a 0.9 as soon as all the SQLAlchemy-0.9 
compatibility stuff is in and working.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra][oslo][db] Sync notifier module from oslo-incubator met django DatabaseError

2014-03-03 Thread Ben Nemec
 

On 2014-02-28 20:26, jackychen wrote: 

 hi, 
 I have committed a patch to sync the notifier module under horizon with
 oslo-incubator. I hit the gate-horizon-python error; all the errors point
 at DatabaseError.
 Code Review Link: https://review.openstack.org/#/c/76439/ [1]
 
 The specific error: django.db.utils.DatabaseError: DatabaseWrapper objects 
 created in a thread can only be used in that same thread. The object with 
 alias 'default' was created in thread id 140664492599040 and this is thread 
 id 56616752.
 
 So I have googled it and found that there are two ways to fix this:
 
 1. https://code.djangoproject.com/ticket/17998 [2]
 import eventlet
 eventlet.monkey_patch()
 
 2. https://bitbucket.org/akoha/django-digest/issue/10/conflict-with-global-databasewrapper
 replace
 cursor = self.db.connection.cursor()
 with
 cursor = db.connections[DEFAULT_DB_ALIAS].cursor()
 everywhere it appears in storage.py, and add to the imports:
 from django import db
 from django.db.utils import DEFAULT_DB_ALIAS
 
 Anyway, neither of these two solutions can be handled in my code commit,
 so do you have any suggestions on how to make this work?
 Thanks for all your help.

I believe that in other projects we monkeypatch eventlet, but I don't
know for sure if that's the appropriate fix here. Adding oslo and db
tags so those people will see this too.
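
If the monkey-patching route from the django ticket is what we go with, the
important detail is that it has to run before django.db creates any
DatabaseWrapper objects - e.g. at the very top of the test settings module or
manage.py (placement here is just a suggestion, not the agreed fix):

    # Must execute before django.db (and anything that imports it) is loaded,
    # so connections created in greenthreads look like same-thread objects.
    import eventlet
    eventlet.monkey_patch()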

-Ben

 

Links:
--
[1] https://review.openstack.org/#/c/76439/
[2] https://code.djangoproject.com/ticket/17998
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-03-03 Thread Chris Friesen

On 03/03/2014 08:14 AM, Steve Gordon wrote:


I would be interested in your opinion on the impact of a V2
version release which had backwards incompatibility in only one
area - and that is input validation. So only apps/SDKs which are
currently misusing the API (I think the most common problem would
be sending extraneous data which is currently ignored) would be
adversely affected. Other cases where where the API was used
correctly would be unaffected.

In this kind of scenario would we need to maintain the older V2
version where there is poor input validation as long? Or would the
V2 version with strong input validation be sufficient?


This is a tricky one because people who have applications or SDKs
that are misusing the API are unlikely to realize they are misusing
it until you actually make the switch to stricter validation by
default and they start getting errors back. We also don't as far as I
know have data at a deep enough level on exactly how people are using
the APIs to base a decision on - though perhaps some of the cloud
providers running OpenStack might be able to help here. I think at
best we'd be able to evaluate the common/well known SDKs to ensure
they aren't guilty (and if they are, to fix it) to gain some level of
confidence but suspect that breaking the API contract in this fashion
would still warrant a similar period of deprecation from the point of
view of distributors and cloud providers.



Would there be any point in doing it incrementally?  Maybe initially the 
command would still be accepted but it would print an error log 
indicating that the command contained extraneous data.


That way someone could monitor the logs and see how much extraneous data 
is being injected, but also contact the owner of the software and notify 
them of the problem.
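
A warn-only phase could be as small as running the strict schema check purely
for logging while keeping today's permissive behaviour; a rough sketch using
jsonschema (hypothetical helper, not the actual Nova validation code):

    import logging

    import jsonschema

    LOG = logging.getLogger(__name__)


    def check_body(body, schema):
        """Log, but do not reject, bodies that would fail strict validation."""
        try:
            jsonschema.validate(body, schema)
        except jsonschema.ValidationError as exc:
            # Phase 1: accept the request but leave a trail so operators can
            # identify and notify the misbehaving clients/SDKs.
            LOG.warning("Request body would fail strict validation: %s", exc)
            # Phase 2, a release or two later, would raise here instead.

That would give real data on how much extraneous input is out there before
anyone flips the switch to rejecting it.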


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-03-03 Thread Jay Lau
Yes, it would be great if we can have a simple framework for future run
time policy plugins. ;-)

2014-03-03 23:12 GMT+08:00 laserjetyang laserjety...@gmail.com:

 there are a lot of rules for HA or LB, so I think it might be a better
 idea to scope the framework and leave the policy as plugins.


 On Mon, Mar 3, 2014 at 10:30 PM, Andrew Laski 
 andrew.la...@rackspace.comwrote:

 On 03/01/14 at 07:24am, Jay Lau wrote:

 Hey,

 Sorry to bring this up again. There are also some discussions here:
 http://markmail.org/message/5zotly4qktaf34ei

 You can also search [Runtime Policy] in your email list.

 Not sure if we can put this into Gantt and enable Gantt to provide both
  initial
  placement and run-time policies like HA, load balancing, etc.


 I don't have an opinion at the moment as to whether or not this sort of
 functionality belongs in Gantt, but there's still a long way to go just to
 get the scheduling functionality we want out of Gantt and I would like to
 see the focus stay on that.





 Thanks,

 Jay



 2014-02-21 21:31 GMT+08:00 Russell Bryant rbry...@redhat.com:

  On 02/20/2014 06:04 PM, Sean Dague wrote:
  On 02/20/2014 05:32 PM, Russell Bryant wrote:
  On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
  Hi,
 
  Would like to know if there's any interest on having
  'automatic evacuation' feature when a compute node goes down. I
  found 3 bps related to this topic: [1] Adding a periodic task
  and using ServiceGroup API for compute-node status [2] Using
  ceilometer to trigger the evacuate api. [3] Include some kind
  of H/A plugin  by using a 'resource optimization service'
 
  Most of those BP's have comments like 'this logic should not
  reside in nova', so that's why i am asking what should be the
  best approach to have something like that.
 
  Should this be ignored, and just rely on external monitoring
  tools to trigger the evacuation? There are complex scenarios
  that require lot of logic that won't fit into nova nor any
  other OS component. (For instance: sometimes it will be faster
  to reboot the node or compute-nova than starting the
  evacuation, but if it fail X times then trigger an evacuation,
  etc )
 
  Any thought/comment// about this?
 
  Regards Leandro
 
  [1]
 
 https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
 
 
 [2]
 
 https://blueprints.launchpad.net/nova/+spec/evacuate-
 instance-automatically
 
 
 [3]
 
 https://blueprints.launchpad.net/nova/+spec/resource-
 optimization-service
 
 
 
 My opinion is that I would like to see this logic done outside of Nova.
 
  Right now Nova is the only service that really understands the
   compute topology of hosts, though its understanding of liveness is
  really not sufficient to handle this kind of HA thing anyway.
 
  I think that's the real problem to solve. How to provide
  notifications to somewhere outside of Nova on host death. And the
  question is, should Nova be involved in just that part, keeping
  track of node liveness and signaling up for someone else to deal
  with it? Honestly that part I'm more on the fence about. Because
  putting another service in place to just handle that monitoring
  seems overkill.
 
  I 100% agree that all the policy, reacting, logic for this should
  be outside of Nova. Be it Heat or somewhere else.

 I think we agree.  I'm very interested in continuing to enhance Nova
 to make sure that the thing outside of Nova has all of the APIs it
 needs to get the job done.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 'if foo' vs 'if foo is not None'

2014-03-03 Thread Andrew Laski

On 03/03/14 at 03:00pm, Matthew Booth wrote:

PEP 8, under 'Programming Recommendations' recommends against implicit
comparison to None. This isn't just stylistic, either: we were actually
bitten by it in the VMware driver
(https://bugs.launchpad.net/nova/+bug/1262288). The bug was hard to
spot, and had consequences to resource usage.

However, implicit comparison to None seems to be the default in Nova. Do
I give up mentioning this in reviews, or is this something we care about?


From what I've seen I don't think it's implicit comparison to None so 
much as it's considering None and things like [] to be equal.  Because 
there is a risk of bugs, and I've seen others beyond the one linked 
above, I think it's worth pointing out in a review.  But the context has 
to be taken into account as sometimes it's legitimate to consider None 
and [] to be equal.
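
For anyone who hasn't hit this class of bug, the distinction is easy to show
with a toy example (not the actual VMware driver code, just the general shape
of the failure):

    def _expensive_recalculation():
        # Stand-in for whatever costly fallback the real code performs.
        return 0


    def used_memory_mb(instances=None):
        """None means 'not provided'; an empty list means 'no instances'."""
        if not instances:            # buggy: [] takes the fallback path too
            return _expensive_recalculation()
        return sum(inst['memory_mb'] for inst in instances)


    def used_memory_mb_explicit(instances=None):
        if instances is None:        # only the genuinely-missing case falls back
            return _expensive_recalculation()
        return sum(inst['memory_mb'] for inst in instances)

With an empty list the first version silently does the wrong (and expensive)
thing, even though the caller provided a perfectly valid value.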




Matt
--
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Dina Belova
 Capacity planning not falling under Compute's umbrella is news to me,

 are you referring to Gantt and scheduling in general? Perhaps I don't

 fully understand the full extent of what 'capacity planning' actually

 is.

By 'capacity' we mean not only 'compute capacity' but any kind of resource
capacity - we may plan not only compute resource usage, but also storage, etc.
Gantt is about 'where to schedule' potentially any kind of resource; Climate is
about 'when'. The term 'scheduling' in the Climate context has no relation to
the usual OpenStack scheduling process - here we mean something much closer to
a 'timetable'. Capacity planning here is the ability to predict future resource
usage (for cloud providers) and to provide these resources to users.

 I don't see how Climate reserves resources is relevant to the user.

 From the user's point of view what is the benefit from making a

 reservation in the future? Versus what Nova supports today, asking for

 an instance in the present.

 Same thing from the operator's perspective,  what is the benefit of

 taking reservations for the future?

Joe, I'll describe the two original use cases that were the reason for Climate
to be created. I hope it gives an idea of what we were thinking about during
its implementation.



Host reservation use case



This use case was born to fit the XLcloud (http://www.xlcloud.org) project.
That project needs the whole compute capacity of certain hosts, with no other
VMs running on them - and some kind of host usage timetable. The latter is
needed to implement energy efficiency for the OpenStack cloud these High
Performance Cloud Computing jobs will run on: it allows unused hosts to be
kept shut down and woken up only when they'll really be needed.

Steps for this use case may be described in the following way:

1) an admin user marks some hosts from the common pool as 'reservable' - these
hosts will then be given to a user who wants to use their whole capacity, and
will be 'isolated' for the sole use of Climate until they are freed

2) if a user wants to use some of these hosts, he/she asks Climate to provide
some number of them with the wanted specs - amount of RAM, CPU, etc.

3) Climate remembers that these hosts will be used at some point in time, for
some amount of time

4) when the lease (the contract between the user and Climate) starts, the user
may use these hosts to run VMs on them

In the future we'll implement energy efficiency, which will completely fit this
first use case.
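
To make step 2 a bit more concrete, the lease request Climate deals with for
this case is roughly shaped like the following (field names are illustrative
and approximate, not the exact API contract):

    # Rough sketch of a host-reservation lease request.
    lease_request = {
        "name": "hpc-batch-run",
        "start_date": "2014-03-10 12:00",   # when the hosts must be available
        "end_date": "2014-03-15 12:00",     # when they are reclaimed/freed
        "reservations": [{
            "resource_type": "physical:host",
            "min": 2,                        # at least two whole hosts
            "max": 2,
            "hypervisor_properties": '[">=", "$memory_mb", 65536]',
        }],
        "events": [],
    }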

===

VM reservation use case

===

This use case was created as a result of automated virtual resource
allocation/reclaiming idea for dev clouds.

If a company has a dev cloud for its developers, it's important that there are
no created-and-forgotten VMs left running - for this, the process of booting
VMs and shutting them down should be automated and handled by some
independent service.

This is where the idea of virtual reservations was born.

A usual workflow for this use case looks like this:

1) a user knows that in the future he'll need a lab with some VMs in it, and he
asks Climate to reserve them (currently we've implemented single-VM reservation)

2) Climate remembers that this VM will be used at some point, for some amount
of time (currently we've implemented plugins that keep these VMs in the shelved
state while they are not used)

3) when the lease starts, Climate wakes this VM up and it's completely ready to
be used

As we've discussed with Cafe project folks (
https://wiki.openstack.org/wiki/Cafe) - this ability will be helpful for
academic usage, where students also need to have automatically
allocated/reclaimed virtual resources.

=

As said, these two use cases were the original ones, and now we're thinking
about a more general view - any type of physical/virtual resource managed in
time. For example, we're planning to implement complex virtual resource
reservations (like Heat stacks), which will allow developers/students to have
small working OpenStack clouds by the time they need them.

Also, Climate will be helpful in a way close to what Anne mentioned - using
Climate plus a billing service will allow cloud resources to be sold to end
users in the following manner:

- if you're reserving resources far before you'll need them, it'll be
cheaper

- if you're reserving resources only a short time before you'll need them,
it'll be more expensive

That's not really the AWS use case, where reserved resources are not
guaranteed to the user, but we might add an unguaranteed type of reservation
that would be much cheaper, e.g.:

- if you're reserving cloud resources without any guarantees you'll have
them, they will be much cheaper than guaranteed ones.

Thanks
Dina


On Mon, Mar 3, 2014 at 6:27 PM, Anne Gentle a...@openstack.org wrote:



 On Mon, Mar 3, 2014 at 8:20 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Mar 3, 2014 at 4:42 AM, Sylvain Bauza sylvain.ba...@bull.net
 

[openstack-dev] [Third Party CI] Reminder: Workshop/QA meeting on #openstack-meeting today at 13:00EST/18:00 UTC

2014-03-03 Thread Jay Pipes
Hi all,

Just a friendly reminder we will have a workshop and QA meeting on the
Freenode #openstack-meeting channel today at 18:00 UTC (13:00 EST) for
folks interested in setting up or debugging third party CI platforms.

See you there!

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-03 Thread Mark Collier

+1

Sent with AquaMail for Android
http://www.aqua-mail.com


On March 3, 2014 10:12:43 AM Jay Pipes jaypi...@gmail.com wrote:


On Sat, 2014-03-01 at 16:30 -0700, John Griffith wrote:
 Hey,
 I just wanted to send out a quick note on a topic that came up
 recently.  Unfortunately the folks that I'd like to read this most;
 don't participate on the ML typically, but I'd at least like to raise
 some community awareness.
 We all know OpenStack is growing at a rapid pace and has a lot of
 promise, so much so that there's an enormous field of vendors and OS
 distributions that are focusing a lot of effort and marketing on the
 project.
 Something that came up recently in the Cinder project is that one of
 the backend device vendors wasn't happy with a feature that somebody
 was working on and contributed a patch for.  Instead of providing a
 meaningful review and suggesting alternatives to the patch they set up
 meetings with other vendors leaving the active members of the
 community out and picked things apart in their own format out of the
 public view.  Nobody from the core Cinder team was involved in these
 discussions or meetings (at least that I've been made aware of).
 I don't want to go into detail about who, what, where etc at this
  point.  Instead, I want to point out that in my opinion this is no
 way to operate in an Open Source community.  Collaboration is one
 thing, but ambushing other peoples work is entirely unacceptable in my
 opinion.  OpenStack provides a plethora of ways to participate and
 voice your opinion, whether it be this mailing list, the IRC channels
 which are monitored daily and also host a published weekly meeting for
 most projects.  Of course when in doubt you're welcome to send me an
 email at any time with questions or concerns that you have about a
 patch.  In any case however the proper way to address concerns about a
 submitted patch is to provide a review for that patch.
 Everybody has a voice and the ability to participate, and the most
 effective way to do that is by thorough, timely and constructive code
 reviews.
 I'd also like to point out that while a number of companies and
 vendors have fancy taglines like The Leaders of OpenStack, they're
 not.  OpenStack is a community effort, as of right now there is no
 company that leads or runs OpenStack.  If you have issues or concerns
 on the development side you need to take those up with the development
 community, not vendor xyz.

+100

And another +1 for use of the word plethora.

I will point out -- not knowing who these actors were -- that sometimes
it is tough for some folks to adapt to open community methodologies and
open discussions. Some people simply don't know any other way of
resolving differences other than to work in private or develop what they
consider to be consensus between favored parties in order to drive
change by bullying. We must, as a community, both make it clear
(through posts such as this) that this behavior is antithetical to how
the OpenStack community functions, and also provide these individuals
with as much assistance as possible in changing their long-practiced
habits. Some stick. Some carrot.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Climate Incubation Application

2014-03-03 Thread Dina Belova
Hello, Sean

I think your idea is really interesting. I mean, the thought 'Gantt - where to
schedule, Climate - when to schedule' is quite understandable and appealing.

These two 'directions' of the scheduling process really do look like they fit
into one Program - probably it should be named Resource Allocation and
Reclaiming or something like that. My only point is that even if Gantt +
Climate end up under this new Program umbrella, they can't be a single
project, because these functionalities may grow wider - and the projects may
become too big to manage as one. I mean that the future energy efficiency +
probable billing interest for Climate is not really 'horizontal' scheduling,
and that's why our scopes lie in different areas.

The high-level idea 'Gantt is for placing objects, Climate is for planning
them' looks good IMHO.

And, as said, the Compute program is not the right home for Climate, because
we're implementing volume reservation for the 0.2.0 release, and that will
push us further away from the Compute program.

Anyway, it's an interesting discussion and I'm looking forward to
continuing it.

Thanks!


On Mon, Mar 3, 2014 at 7:04 PM, Sean Dague s...@dague.net wrote:

 On 03/02/2014 02:32 PM, Dina Belova wrote:
  Hello, folks!
 
  I'd like to request Climate project review for incubation. Here is
  official incubation application:
 
  https://wiki.openstack.org/wiki/Climate/Incubation
 
  Additionally due to the project scope and the roadmap, we don't see any
  currently existing OpenStack program that fits Climate. So we've
  prepared new program request too:
 
  https://wiki.openstack.org/wiki/Climate/Program
 
  TL;DR
 
  Our team is working on providing OpenStack with resource reservation
  opportunity in time-based manner, including close integration with all
  other OS projects.
 
  As Climate initiative is targeting to provide not only compute resources
  revervation, but also volumes, network resources, storage nodes
  reservation opportunity, we consider it is not about being a part of
  some existing OpenStack program.
 
  This initiative needs to become a part of completely new Resource
  Reservation Program, that aims to implement time-based cloud resource
  management.

 At a high level this feels like this should be part of scheduling.
 Scheduling might include resources you want right now, but it could
 include resources you want in the future. It also makes sense for
 scheduling to include deadlines, so that resources are reclaimed when
 they expire. Especially given quota implications of asking for resources
 now vs. resources that a user has reserved in the future. What happens
 when a user has used all their quota in the present, and a future
 reservation comes up for access?

 Scheduling today is under compute. The proposal to pull Gantt out still
 leaves it in the compute program. So I would naturally assume this
 should live under compute. I could understand that if this  Gantt
 emerged together and wanted to create a Resource Allocation program,
 that might make sense. But I think that's way down the road.

 However, that's just a quick look at this. I think it will be an
 interesting discussion in how this effort moves forward.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-TC mailing list
 openstack...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-03 Thread Collins, Sean
On Mon, Mar 03, 2014 at 09:39:42PM +0800, Xuhan Peng wrote:
 Currently, only security group rule direction, protocol, ethertype and port
 range are supported by neutron security group rule data structure. To allow

If I am not mistaken, I believe that when you use the ICMP protocol
type, you can use the port range specs to limit the type.

https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L309

http://i.imgur.com/3n858Pf.png

I assume we just have to check and see if it applies to ICMPv6?
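
If that holds, then on the API side the type/code already travel in the
existing fields, e.g. a rule body along these lines (sketch only; the
security_group_id is a placeholder):

    # For protocol "icmp", Neutron reuses port_range_min/max as the ICMP
    # type and code (8/0 below is echo-request).
    rule = {
        "security_group_rule": {
            "security_group_id": "SECGROUP_UUID",
            "direction": "ingress",
            "ethertype": "IPv4",   # whether the same works for IPv6/ICMPv6
            "protocol": "icmp",    # is exactly the open question here
            "port_range_min": 8,   # ICMP type
            "port_range_max": 0,   # ICMP code
        }
    }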

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] New Subnet options editable?

2014-03-03 Thread Collins, Sean
On Mon, Mar 03, 2014 at 10:51:59AM +0800, Xuhan Peng wrote:
 Abishek,
 
 The two attributes are editable if you look at Sean's patch
 https://review.openstack.org/#/c/52983/27/neutron/api/v2/attributes.py. The
 allow_put is set to be True for these two attributes.
 
 Xuhan

+1 - the attributes can be updated after creation. We may need to
examine our patches to make sure that dnsmasq is restarted correctly
when these attributes are updated.
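
For anyone less familiar with Neutron's extension machinery, 'editable' here
boils down to attribute-map entries roughly like this (illustrative sketch,
not the literal content of the patch):

    # A subnet attribute that can be set on POST and changed later with PUT.
    EXTENDED_ATTRIBUTES_2_0 = {
        'subnets': {
            'ipv6_ra_mode': {
                'allow_post': True,
                'allow_put': True,   # this is what makes the field editable
                'default': None,
                'is_visible': True,
            },
        },
    }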

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [Third Party CI] Reminder: Workshop/QA meeting on #openstack-meeting today at 13:00EST/18:00 UTC

2014-03-03 Thread trinath.soman...@freescale.com
Hi Jay-

Thank you for the reminder. 

Waiting to meet you at the IRC webchat.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, March 03, 2014 9:47 PM
To: OpenStack Development Mailing List; openstack-infra
Subject: [OpenStack-Infra] [Third Party CI] Reminder: Workshop/QA meeting on 
#openstack-meeting today at 13:00EST/18:00 UTC

Hi all,

Just a friendly reminder we will have a workshop and QA meeting on the Freenode 
#openstack-meeting channel today at 18:00 UTC (13:00 EST) for folks interested 
in setting up or debugging third party CI platforms.

See you there!

Best,
-jay


___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 03/03/2014

2014-03-03 Thread Renat Akhmerov
Thanks for joining us today in #openstack-meeting for our weekly community meeting.

As always, meeting minutes and log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-03-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-03-16.00.log.html

I’ll see you next time!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-03 Thread Nikolay Makhotkin
Hi, team!

Please look at the commit .

Module 'mistral/model' is now responsible for the object model representation
which is used for accessing properties of actions, tasks, etc.

We have a name problem - looks like we should rename module 'mistral/model'
since we have DB models and they are absolutely different.


Thoughts?


Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Sylvain Bauza
Hi Sean,




2014-03-03 16:04 GMT+01:00 Sean Dague s...@dague.net:



 At a high level this feels like this should be part of scheduling.
 Scheduling might include resources you want right now, but it could
 include resources you want in the future. It also makes sense for
 scheduling to include deadlines, so that resources are reclaimed when
 they expire. Especially given quota implications of asking for resources
 now vs. resources that a user has reserved in the future. What happens
 when a user has used all their quota in the present, and a future
 reservation comes up for access?

 Scheduling today is under compute. The proposal to pull Gantt out still
 leaves it in the compute program. So I would naturally assume this
 should live under compute. I could understand that if this  Gantt
 emerged together and wanted to create a Resource Allocation program,
 that might make sense. But I think that's way down the road.

 However, that's just a quick look at this. I think it will be an
 interesting discussion in how this effort moves forward.

 -Sean



I'm also particularly convinced that Gantt and Climate can work together for
scheduling purposes, and that point has also been raised by other people
during Gantt meetings, as you will find if you look at the logs.

Climate raised the need for an external scheduler for planning purposes
during the last design Summit, and I'm personally following the Gantt efforts
as I think it would be worth contributing to it for our purposes. That said,
I've also been surprised that people within Gantt meetings raised the point of
using Climate for scheduling decisions (see also the threads about it with
Solver Scheduler, for instance).

So, seeing some areas of mutual work between Gantt and Climate, I can't say
more. That said, although I think there could be some tight interactions
between those two projects, they both stand on their own and address different
needs: Gantt for 'what' and Climate for 'when', as Dina said.

Gantt is still in the early phase of incubation, still needing some Nova
changes before standing by itself, so I don't think it can apply for a Program
yet. That said, this is maybe a good opportunity to discuss that Program, in
particular its mission statement, but again, it sounds like a good fit.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday March 4th at 19:00 UTC

2014-03-03 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday March 4th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 'if foo' vs 'if foo is not None'

2014-03-03 Thread John Dennis
On 03/03/2014 10:00 AM, Matthew Booth wrote:
 PEP 8, under 'Programming Recommendations' recommends against implicit
 comparison to None. This isn't just stylistic, either: we were actually
 bitten by it in the VMware driver
 (https://bugs.launchpad.net/nova/+bug/1262288). The bug was hard to
 spot, and had consequences to resource usage.
 
 However, implicit comparison to None seems to be the default in Nova. Do
 I give up mentioning this in reviews, or is this something we care about?

IMHO testing for None is correct provided you've been disciplined in the
rest of your coding. None has a very specific meaning; it is not just a
truth value [1]. My interpretation is that None represents the
Undefined Value [2], which is semantically different from values which
evaluate to False (i.e. False, 0, an empty sequence).

[1] http://docs.python.org/2/library/stdtypes.html#truth-value-testing

[2] Technically None is not an undefined value, however common
programming paradigms ascribe the undefined semantic to None.
-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-03 Thread Luke Gorrie
On 3 March 2014 11:27, Thierry Carrez thie...@openstack.org wrote:

 It will certainly hurt the first one we nail on the wall. So here is one
 reputational pressure: you don't want to be that company.

 [1] http://fnords.wordpress.com/2014/02/24/the-dilemma-of-open-innovation/


-1.

That's a really harsh threat being made against a really vaguely defined
group.

I don't want to have to read between the lines on threats posted to
openstack-dev to see if my reputation will be in tatters in the morning.

This spoiled my day today, and I have nothing to do with whatever incident
you guys with privileged information are talking about.

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread John Griffith
On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang zhangleiqi...@huawei.comwrote:

 Hi, stackers:

 Libvirt/qemu have supported online extend for multiple disk
 formats, including qcow2, sparse, etc., but Cinder only supports
 offline extend of volumes currently.

 Offline extend forces the instance to be shut off or the volume
 to be detached. I think it would be useful to introduce an
 online-extend feature to Cinder, especially for the file-system-based
 drivers, e.g. NFS, GlusterFS, etc.

 Are there any other suggestions?

 Thanks.


 --
 zhangleiqiang

 Best Regards


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi Zhangleiqiang,

So yes, there's a rough BP for this here: [1], and some of the folks from
the Trove team (pdmars on IRC) have actually started to dive into this.
 Last I checked with him there were some sticking points on the Nova side
but we should synch up with Paul, it's been a couple weeks since I've last
caught up with him.

Thanks,
John
[1]:
https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
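
For context, the libvirt primitive that makes the online path possible is
blockResize(); a rough standalone sketch with libvirt-python (domain and
device names are placeholders) looks like:

    import libvirt

    # Grow a disk that is attached to a running guest, without detaching it
    # or shutting the guest down.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("my-instance")

    new_size_gb = 20
    dom.blockResize("vdb",                       # target device of the volume
                    new_size_gb * 1024 * 1024 * 1024,
                    libvirt.VIR_DOMAIN_BLOCK_RESIZE_BYTES)
    conn.close()

Presumably the Cinder/Nova work is mostly about wiring a safe path from the
extend API down to that call (or its equivalent for file-backed drivers) while
keeping both services' view of the volume size consistent.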
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 6:27 AM, Anne Gentle a...@openstack.org wrote:


 On Mon, Mar 3, 2014 at 8:20 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Mar 3, 2014 at 4:42 AM, Sylvain Bauza sylvain.ba...@bull.net
 wrote:
  Hi Joe,
 
  Thanks for your reply, I'll try to further explain.
 
 
  Le 03/03/2014 05:33, Joe Gordon a écrit :
 
  On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com
  wrote:
 
  Hello, folks!
 
  I'd like to request Climate project review for incubation. Here is
  official
  incubation application:
 
  https://wiki.openstack.org/wiki/Climate/Incubation
 
  I'm unclear on what Climate is trying to solve. I read the 'Detailed
  Description' from the link above, and it states Climate is trying to
  solve two uses cases (and the more generalized cases of those).
 
  1) Compute host reservation (when user with admin privileges can
  reserve hardware resources that are dedicated to the sole use of a
  tenant)
  2) Virtual machine (instance) reservation (when user may ask
  reservation service to provide him working VM not necessary now, but
  also in the future)
 
  Climate is born from the idea of dedicating compute resources to a
  single
  tenant or user for a certain amount of time, which was not yet
  implemented
  in Nova: how as an user, can I ask Nova for one compute host with
  certain
  specs to be exclusively allocated to my needs, starting in 2 days and
  being
  freed in 5 days ?
 
  Albeit the exclusive resource lock can be managed on the Nova side,
  there is
  currently no possibilities to ensure resource planner.
 
  Of course, and that's why we think Climate can also stand by its own
  Program, resource reservation can be seen on a more general way : what
  about
  reserving an Heat stack with its volume and network nested resources ?
 
 
  You want to support being able to reserve an instance in the future.
  As a cloud operator how do I take advantage of that information? As a
  cloud consumer, what is the benefit? Today OpenStack supports both
  uses cases, except it can't request an Instance for the future.
 
 
  Again, that's not only reserving an instance, but rather a complex mix
  of
  resources. At the moment, we do provide way to reserve virtual instances
  by
  shelving/unshelving them at the lease start, but we also give
  possibility to
  provide dedicated compute hosts. Considering it, the logic of resource
  allocation and scheduling (take the word as resource planner, in order
  not
  to confuse with Nova's scheduler concerns) and capacity planning is too
  big
  to fail under the Compute's umbrella, as it has been agreed within the
  Summit talks and periodical threads.

 Capacity planning not falling under Compute's umbrella is news to me,
 are you referring to Gantt and scheduling in general? Perhaps I don't
 fully understand the full extent of what 'capacity planning' actually
 is.

 
  From the user standpoint, there are multiple ways to integrate with
  Climate
  in order to get Capacity Planning capabilities. As you perhaps noticed,
  the
  workflow for reserving resources is different from one plugin to
  another.
  Either we say the user has to explicitly request for dedicated resources
  (using Climate CLI, see dedicate compute hosts allocation), or we
  implicitly
  integrate resource allocation from the Nova API (see virtual instance
  API
  hook).

 I don't see how Climate reserves resources is relevant to the user.

 
  We truly accept our current implementation as a first prototype, where
  scheduling decisions can be improved (possibly thanks to some tight
  integration with a future external Scheduler aaS, hello Gantt), where
  also
  resource isolation and preemption must also be integrated with
  subprojects
  (we're currently seeing how to provision Cinder volumes and Neutron
  routers
  and nets), but anyway we still think there is a (IMHO big) room for
  resource
  and capacity management on its own project.
 
  Hoping it's clearer now,

 Unfortunately that doesn't clarify things for me.

 From the user's point of view what is the benefit from making a
 reservation in the future? Versus what Nova supports today, asking for
 an instance in the present.

 Same thing from the operator's perspective,  what is the benefit of
 taking reservations for the future?

 This whole model is unclear to me because as far as I can tell no
 other clouds out there support this model, so I have nothing to
 compare it to.


 Hi Joe,
 I think it's meant to save consumers money by pricing instances based on
 today's prices.

 https://aws.amazon.com/ec2/purchasing-options/reserved-instances/


The reserved concept in Amazon is very different from the one
proposed here. The Amazon concept doesn't support saying 'I will need
an instance in 3 days', which is what this is trying to support.
Furthermore, I am not sure how the Climate proposal would allow a
cloud provider to offer cheaper pricing.


 Anne


  -Sylvain
 
 
  

Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Dina Belova
Joe, as said, Amazon reservation is not like what is implemented in Climate -
we really had different original use cases that led to a similar result.
Amazon instance reservations do not guarantee that an instance will be
provided to the user, whereas in Climate we started by implementing
reservations with that guarantee (due to the original use cases). That's why
we're mostly speaking about time-based resource management now, not billing
purposes.

A lease now goes through the following steps: create lease - start lease -
end lease.
There are also user notifications, but they are tied to lease start/end
events, so they are not a separate stage right now.

However, if we implement one more step like 'allocate resources', that will
allow us to have reservations with no guarantees, and that will make Climate's
capabilities cover the Amazon use case.

Thanks


On Mon, Mar 3, 2014 at 9:04 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Mar 3, 2014 at 6:27 AM, Anne Gentle a...@openstack.org wrote:
 
 
  On Mon, Mar 3, 2014 at 8:20 AM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
  On Mon, Mar 3, 2014 at 4:42 AM, Sylvain Bauza sylvain.ba...@bull.net
  wrote:
   Hi Joe,
  
   Thanks for your reply, I'll try to further explain.
  
  
   Le 03/03/2014 05:33, Joe Gordon a écrit :
  
   On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com
   wrote:
  
   Hello, folks!
  
   I'd like to request Climate project review for incubation. Here is
   official
   incubation application:
  
   https://wiki.openstack.org/wiki/Climate/Incubation
  
   I'm unclear on what Climate is trying to solve. I read the 'Detailed
   Description' from the link above, and it states Climate is trying to
   solve two uses cases (and the more generalized cases of those).
  
   1) Compute host reservation (when user with admin privileges can
   reserve hardware resources that are dedicated to the sole use of a
   tenant)
   2) Virtual machine (instance) reservation (when user may ask
   reservation service to provide him working VM not necessary now, but
   also in the future)
  
   Climate is born from the idea of dedicating compute resources to a
   single
   tenant or user for a certain amount of time, which was not yet
   implemented
   in Nova: how as an user, can I ask Nova for one compute host with
   certain
   specs to be exclusively allocated to my needs, starting in 2 days and
   being
   freed in 5 days ?
  
   Albeit the exclusive resource lock can be managed on the Nova side,
   there is
   currently no possibilities to ensure resource planner.
  
   Of course, and that's why we think Climate can also stand by its own
   Program, resource reservation can be seen on a more general way : what
   about
   reserving an Heat stack with its volume and network nested resources ?
  
  
   You want to support being able to reserve an instance in the future.
   As a cloud operator how do I take advantage of that information? As a
   cloud consumer, what is the benefit? Today OpenStack supports both
   uses cases, except it can't request an Instance for the future.
  
  
   Again, that's not only reserving an instance, but rather a complex mix
   of
   resources. At the moment, we do provide way to reserve virtual
 instances
   by
   shelving/unshelving them at the lease start, but we also give
   possibility to
   provide dedicated compute hosts. Considering it, the logic of resource
   allocation and scheduling (take the word as resource planner, in order
   not
   to confuse with Nova's scheduler concerns) and capacity planning is
 too
   big
   to fail under the Compute's umbrella, as it has been agreed within the
   Summit talks and periodical threads.
 
  Capacity planning not falling under Compute's umbrella is news to me,
  are you referring to Gantt and scheduling in general? Perhaps I don't
  fully understand the full extent of what 'capacity planning' actually
  is.
 
  
   From the user standpoint, there are multiple ways to integrate with
   Climate
   in order to get Capacity Planning capabilities. As you perhaps
 noticed,
   the
   workflow for reserving resources is different from one plugin to
   another.
   Either we say the user has to explicitly request for dedicated
 resources
   (using Climate CLI, see dedicate compute hosts allocation), or we
   implicitly
   integrate resource allocation from the Nova API (see virtual instance
   API
   hook).
 
  I don't see how Climate reserves resources is relevant to the user.
 
  
   We truly accept our current implementation as a first prototype, where
   scheduling decisions can be improved (possibly thanks to some tight
   integration with a future external Scheduler aaS, hello Gantt), where
   also
   resource isolation and preemption must also be integrated with
   subprojects
   (we're currently seeing how to provision Cinder volumes and Neutron
   routers
   and nets), but anyway we still think there is a (IMHO big) room for
   resource
   and 

Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-03 Thread Thierry Carrez
Luke Gorrie wrote:
 That's a really harsh threat being made against a really vaguely defined
 group.
 
 I don't want to have to read between the lines on threats posted to
 openstack-dev to see if my reputation will be in tatters in the morning.
 
 This spoiled my day today, and I have nothing to do with whatever
 incident you guys with privileged information are talking about.

I'm sorry to hear that. I didn't mean that as a threat: my point was
that at some point, some company will be called out on their
non-cooperative behavior. It just takes an enraged person and an ML post
or a tweet. Since all of those incidents so far have avoided naming
anyone, it will feel very unfair to the first company that gets hit by
such a reputational backlash.

My advice was therefore that you should not wait for that to happen to
engage in cooperative behavior, because you don't want to be the first
company to get singled out.

This kind of reputational pressure is at work all the time in the Linux
Kernel project, where companies which contribute less or engage in bad
behavior are routinely mentioned (sometimes in very offensive ways). I
don't think we can prevent this happening in OpenStack unless
cooperation stays the norm.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Russell Bryant
There has been quite a bit of discussion about the future of the v3 API
recently.  There has been growing support for the idea that we should
change course and focus on evolving the existing v2 API instead of
putting out a new major revision.  This message is a more complete
presentation of that proposal that concludes that we can do what we
really need to do with only the v2 API.

Keeping only the v2 API requires some confidence that we can stick with
it for years to come.  We don't want to be revisiting this any time
soon.  This message addresses a bunch of different questions about how
things would work if we only had v2.

1) What about tasks?

In some cases, the proposed integration of tasks is backwards
compatible.  A task ID will be added to a header.  The biggest point of
debate was if and how we would change the response for creating a
server.  For tasks in v2, we would not change the response by default.
The task ID would just be in a header.  However, if and when the client
starts exposing version support information, we can provide an
alternative/preferred response based on tasks.

For example:

   Accept: application/json;type=task
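
As a purely illustrative sketch (the header name and the body shape below are
placeholders, not final), the default v2 response to a server create would keep
its current body and only gain a header such as:

    HTTP/1.1 202 Accepted
    X-Compute-Task-Id: 8e5a1b42-...

while a client opting in via the Accept header above could instead receive a
task-oriented body along the lines of:

    {
        "task": {
            "id": "8e5a1b42-...",
            "action": "create",
            "state": "pending",
            "server_ref": "http://.../servers/<uuid>"
        }
    }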

2) Versioning extensions

One of the points being addressed in the v3 API was the ability to
version extensions.  In v2, we have historically required new API
extensions, even for changes that are backwards compatible.  We propose
the following:

 - Add a version number to v2 API extensions
 - Allow backwards compatible changes to these API extensions,
accompanied by a version number increase
 - Add the option to advertise an extension as deprecated, which can be
used for all those extensions created only to advertise the availability
of new input parameters
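
As a rough sketch only (field names are illustrative, not a settled schema), an
entry in the extension list might then look like:

    {
        "alias": "os-example-ext",
        "name": "ExampleExt",
        "description": "...",
        "updated": "2014-03-03T00:00:00Z",
        "version": 2,
        "deprecated": false
    }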

3) Core versioning

Another pain point in API maintenance has been having to create API
extensions for every small addition to the core API.  We propose that a
version number be exposed for the core API that exposes the revision of
the core API in use.  With that in place, backwards compatible changes
such as adding a new property to a resource would be allowed when
accompanied by a version number increase.

With versioning of the core and API extensions, we will be able to cut
down significantly on the number of changes that require an API
extension without sacrificing the ability of a client to discover
whether the addition is present or not.
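
For illustration only (the exact field name is not part of this proposal), the
version document returned for the v2 endpoint could simply grow a revision
number that increases with each backwards compatible addition:

    {
        "version": {
            "id": "v2.0",
            "status": "CURRENT",
            "revision": 7,
            ...
        }
    }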

4) API Proxying

We don't see proxying APIs as a problem.  It is the cost we pay for
choosing to split apart projects after they are released.  We don't
think it's fair to break users just because we have chosen to split
apart the backend implementation.

Further, the APIs that are proxied are frozen while those in the other
projects are evolving.  We believe that as more features are available
only via the native APIs in Cinder, Glance, and Neutron, users will
naturally migrate over to the native APIs.

Over time, we can ensure clients are able to query the API without the
need to proxy by adding new formats or extensions that don't return data
that needed to be proxied.

5) Capitalization and Naming Consistency

Some of the changes in the v3 API included changes to capitalization and
naming for improved consistency.  If we stick with v2 only, we will not
be able to make any of these changes.  However, we believe that not
breaking any existing clients and not having to maintain a second API is
worth not making these changes, or supporting some indirection to
achieve this for newer clients if we decide it is important.

6) Response Code Consistency and Correctness

The v2 API has many places where the response code returned for a given
operation is not strictly correct. For example a 200 is returned when a
202 would be more appropriate. Correcting these issues should be
considered for improving the future use of the API, however there does
not seem to be any support for considering this a critical problem right
now. There are two approaches that can be taken to improve this in v2:

Just fix them. Right now, we return some codes that imply we have dealt
with a request, when all we have done is queue it for processing (and
vice versa). In the future, we may change the backend in such a way that
a return code needs to change to continue to be accurate anyway. If we
just assume that return codes may change to properly reflect the action
that was taken, then we can correct existing errors and move on.
Accept them as wrong but not critically so. With this approach, we can
strive for correctness in the future without changing behavior of our
existing APIs. Nobody seems to complain about them right now, so
changing them seems to be little gain. If the client begins exposing a
version header (which we need for other things) then we could
alternately start returning accurate codes for those clients.

The key point here is that we see a way forward with this in the v2 API
regardless of which path we choose.
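
For what it's worth, a client written along these lines (a minimal sketch, not
taken from any real SDK; the endpoint, token, and image are placeholders) is
already insulated from a 200 -> 202 correction because it checks the success
class rather than an exact code:

    import json
    import requests

    # Placeholder values; a real client would discover the endpoint via the
    # Keystone catalog and use a real token.
    url = "http://cloud.example.com:8774/v2/<tenant_id>/servers"
    headers = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}
    body = {"server": {"name": "test", "imageRef": "<image_id>",
                       "flavorRef": "1"}}

    resp = requests.post(url, data=json.dumps(body), headers=headers)
    # Checking the 2xx class instead of '== 200' keeps working even if the
    # exact success code is later corrected to 202.
    if 200 <= resp.status_code < 300:
        server = resp.json()["server"]
    else:
        resp.raise_for_status()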

7) Entrypoint based extensions

The v3 effort included improvements to 

Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 8:00 AM, Dina Belova dbel...@mirantis.com wrote:
 Capacity planning not falling under Compute's umbrella is news to me,

 are you referring to Gantt and scheduling in general? Perhaps I don't

 fully understand the full extent of what 'capacity planning' actually

 is.


 We intend that 'capacity' means not only 'compute capacity', but any kind of
 resources capacity - we may plan not only compute resources usage, but also
 storage, etc. Gantt is about 'where to schedule' potentially any kind of
 resources, Climate is about 'when'. Term 'scheduling' in Climate sphere has
 no reference to common OS scheduling process, here we mean something like
 'timetable', that's much closer. Capacity planning here is ability to
 predict the fact of resource usage in future (for cloud providers), and
 provide these resources to users.


 I don't see how Climate reserves resources is relevant to the user.


 From the user's point of view what is the benefit from making a

 reservation in the future? Versus what Nova supports today, asking for

 an instance in the present.


 Same thing from the operator's perspective,  what is the benefit of

 taking reservations for the future?


 Joe, I'll describe two original use cases, that were the reason for Climate
 to be created. I suppose it might give idea of what we were thinking about
 while its implementation.


 

 Host reservation use case

 


 This use case was born to fit XLcloud (http://www.xlcloud.org) project.
 Needs of this project require usage of whole hosts compute capacities
 without any other VMs running on them - and to create some kind of hosts
 usage timetable. The last thing is needed to implement energy efficiency
 for OS cloud, where these High Performance Cloud Computing jobs will be
 running on. That will allow to keep unused hosts in 'shut down' state and to
 wake them up where they'll be really needed.


 Steps for this use case may be described in the following way:


 1) admin user marks some hosts from common pool as 'reservable' - these
 hosts then will be given to user who wants to use their whole capacity and
 will be 'isolated' to the solely use of Climate until they are freed

 2) if user wants to use some of these hosts, he/she asks Climate to provide
 some amount of these hosts with any wanted specs - like amount of RAM, CPU,
 etc.

 3) Climate remembers that these hosts will be used in some time for some
 amount of time

 4) when lease (contract between user and Climate) will start, user may use
 these hosts to run VMs on them


 In future we'll implement energy efficiency, that will completely fit this
 first use case.

This sounds like something that belongs in Nova; Phil Day has an
elegant solution for this:
https://blueprints.launchpad.net/nova/+spec/whole-host-allocation



 ===

 VM reservation use case

 ===


 This use case was created as a result of automated virtual resource
 allocation/reclaiming idea for dev clouds.


 If company has dev cloud for its developers, it'll be great if there won't
 be created and forgotten VMs running on - for this, process of VMs
 booting/shutting them down should be automated and processed by some
 independent service.


Heat?

I spin up dev instances all the time and have never had this problem,
in part because if I forget, my quota will remind me.


 Here was born idea of virtual reservations.


 Some usual workflow for this use case looks like:


 1) user knows that in future he'll need lab with some VMs in it and he asks
 Climate to reserve them (now we've implemented one VM reservation)


Why does he need to reserve them in the future? When he wants an
instance can't he just get one? As Sean said, what happens if someone
has no free quota when the reservation kicks in?

 2) Climate remembers that this VM will be used in some time for some amount
 of time (now we've implemented plugins that keeps these VMs in shelved state
 while they are not used)

 3) when lease starts, Climate wakes this VM and it's completely ready to be
 used

How is this different from 'nova boot'? When nova boot finishes, the VM
is completely ready to be used.



 As we've discussed with Cafe project folks
 (https://wiki.openstack.org/wiki/Cafe) - this ability will be helpful for
 academic usage, where students also need to have automatically
 allocated/reclaimed virtual resources.

Cafe sounds a bit like more sophisticated quotas, quotas that work on
overall usage not just current usage.



 =


 As said, these two use cases were original ones, and now we're thinking
 about more general view - any type of physical/virtual resource to be
 managed in time. For example, we're planning to implement complex virtual
 resource reservations (like Heat stacks), that will allow
 developers/students to have working small OS clouds by the time they'll need
 them.

I can do that today, I spin up a devstack VM in the 

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-03 Thread Dolph Mathews
On Mon, Mar 3, 2014 at 8:48 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote:
  Having done some work with MySQL (specifically around similar data
  sets) and discussing the changes with some former coworkers (MySQL
  experts) I am inclined to believe the move from varchar  to binary
  absolutely would increase performance like this.
 
 
  However, I would like to get some real benchmarks around it and if it
  really makes a difference we should get a smart UUID type into the
  common SQLlibs (can pgsql see a similar benefit? Db2?) I think this
  conversation. Should be split off from the keystone one at hand - I
  don't want valuable information / discussions to get lost.

 No disagreement on either point. However, this should be done after the
 standardization to a UUID user_id in Keystone, as a separate performance
 improvement patch. Agree?

 Best,
 -jay



++ https://blueprints.launchpad.net/keystone/+spec/sql-id-binary-16
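
For anyone curious what that implies, a minimal SQLAlchemy sketch (an
assumption of mine, not the keystone implementation) of storing an ID as 16
raw bytes instead of a 32/64-char string column:

    import uuid

    from sqlalchemy import types


    class UUIDBinary(types.TypeDecorator):
        """Store a UUID hex string as BINARY(16), hand back 32-char hex."""
        impl = types.BINARY(16)

        def process_bind_param(self, value, dialect):
            # Accept a uuid.UUID or its hex/str form on the way in.
            return None if value is None else uuid.UUID(str(value)).bytes

        def process_result_value(self, value, dialect):
            return None if value is None else uuid.UUID(bytes=value).hex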



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 9:22 AM, Dina Belova dbel...@mirantis.com wrote:
 Joe, as said, Amazon reservation is not like implemented in Climate - and
 really we had different original use cases to have the same result. Amazon
 instances reservations do not guarantee that instance will be provided to
 user, as in Climate we started implemented reservations possibilities with
 this guarantee (due to original use cases). That's why we're mostly speaking
 about time-based resource management now, not billing purposes.


Reliable
Reserved Instances provide a capacity reservation so that you can have
confidence in your ability to launch the number of instances you have
reserved when you need them.
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/

That sounds like a guarantee to me.

How does Climate enforce a guarantee while freeing the physical
resource for something else to use? Otherwise you are consuming a
resource without charging anyone (assuming you don't charge anything
for a reservation for a lease that hasn't started).


 Lease creation request now contains the following steps: create lease -
 start lease - end lease
 Also there are user notifications, but they are connected with lease
 start/end events, so that's not some separated stuff now.

 Although, if we'll implement one more second step like 'allocate resources'
 - that will allow us to have reservations with no guarantees, and that will
 make Climate possibilities containing Amazon use case.

 Thanks


 On Mon, Mar 3, 2014 at 9:04 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Mar 3, 2014 at 6:27 AM, Anne Gentle a...@openstack.org wrote:
 
 
  On Mon, Mar 3, 2014 at 8:20 AM, Joe Gordon joe.gord...@gmail.com
  wrote:
 
  On Mon, Mar 3, 2014 at 4:42 AM, Sylvain Bauza sylvain.ba...@bull.net
  wrote:
   Hi Joe,
  
   Thanks for your reply, I'll try to further explain.
  
  
   Le 03/03/2014 05:33, Joe Gordon a écrit :
  
   On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com
   wrote:
  
   Hello, folks!
  
   I'd like to request Climate project review for incubation. Here is
   official
   incubation application:
  
   https://wiki.openstack.org/wiki/Climate/Incubation
  
   I'm unclear on what Climate is trying to solve. I read the 'Detailed
   Description' from the link above, and it states Climate is trying to
   solve two uses cases (and the more generalized cases of those).
  
   1) Compute host reservation (when user with admin privileges can
   reserve hardware resources that are dedicated to the sole use of a
   tenant)
   2) Virtual machine (instance) reservation (when user may ask
   reservation service to provide him working VM not necessary now, but
   also in the future)
  
   Climate is born from the idea of dedicating compute resources to a
   single
   tenant or user for a certain amount of time, which was not yet
   implemented
   in Nova: how as an user, can I ask Nova for one compute host with
   certain
   specs to be exclusively allocated to my needs, starting in 2 days and
   being
   freed in 5 days ?
  
   Albeit the exclusive resource lock can be managed on the Nova side,
   there is
   currently no possibilities to ensure resource planner.
  
   Of course, and that's why we think Climate can also stand by its own
   Program, resource reservation can be seen on a more general way :
   what
   about
   reserving an Heat stack with its volume and network nested resources
   ?
  
  
   You want to support being able to reserve an instance in the future.
   As a cloud operator how do I take advantage of that information? As
   a
   cloud consumer, what is the benefit? Today OpenStack supports both
   uses cases, except it can't request an Instance for the future.
  
  
   Again, that's not only reserving an instance, but rather a complex
   mix
   of
   resources. At the moment, we do provide way to reserve virtual
   instances
   by
   shelving/unshelving them at the lease start, but we also give
   possibility to
   provide dedicated compute hosts. Considering it, the logic of
   resource
   allocation and scheduling (take the word as resource planner, in
   order
   not
   to confuse with Nova's scheduler concerns) and capacity planning is
   too
   big
   to fail under the Compute's umbrella, as it has been agreed within
   the
   Summit talks and periodical threads.
 
  Capacity planning not falling under Compute's umbrella is news to me,
  are you referring to Gantt and scheduling in general? Perhaps I don't
  fully understand the full extent of what 'capacity planning' actually
  is.
 
  
   From the user standpoint, there are multiple ways to integrate with
   Climate
   in order to get Capacity Planning capabilities. As you perhaps
   noticed,
   the
   workflow for reserving resources is different from one plugin to
   another.
   Either we say the user has to explicitly request for dedicated
   resources
   (using Climate CLI, see dedicate compute hosts allocation), or we
   

Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Zane Bitter

On 03/03/14 12:32, Joe Gordon wrote:

- if you're reserving resources far before you'll need them, it'll be
cheaper

Why? How does this save a provider money?


If an operator has zero information about the level of future demand, 
they will have to spend a *lot* of money on excess capacity or risk 
running out. If an operator has perfect information about future demand, 
then they need spend no money on excess capacity. Everywhere in between, 
the amount of extra money they need to spend is a non-increasing 
function of the amount of information they have. QED.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-03 Thread John Griffith
On Mon, Mar 3, 2014 at 10:30 AM, Thierry Carrez thie...@openstack.orgwrote:

 Luke Gorrie wrote:
  That's a really harsh threat being made against a really vaguely defined
  group.
 
  I don't want to have to read between the lines on threats posted to
  openstack-dev to see if my reputation will be in tatters in the morning.
 
  This spoiled my day today, and I have nothing to do with whatever
  incident you guys with privileged information are talking about.

 I'm sorry to hear that. I didn't mean that as a threat: my point was
 that at some point, some company will be called out on their
 non-cooperative behavior. It just takes an enraged person and an ML post
 or a tweet. Since so far all those incidents successfully didn't name
 anyone, it will feel very unfair to the first company that gets hit by
 such reputational backlash.

 My advice was therefore that you should not wait for that to happen to
 engage in cooperative behavior, because you don't want to be the first
 company to get singled out.

 This kind of reputational pressure is at work all the time in the Linux
 Kernel project, where companies which contribute less or engage in bad
 behavior are routinely mentioned (sometimes in very offensive ways). I
 don't think we can prevent this happening in OpenStack unless
 cooperation stays the norm.

 Well put Thierry!

Just to add, part of the intent here is that it's good to do things
publicly so as to avoid any misconceptions.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Reviewers] Please add openstack/os-cloud-config to your tripleo-repositories-to-review

2014-03-03 Thread Jay Dobies
I updated https://wiki.openstack.org/wiki/TripleO and the All TripleO
Reviews link at the bottom to include it.


On 03/02/2014 12:07 AM, Robert Collins wrote:

This is a new repository to provide common code for tuskar and the
seed initialisation logic - the post heat completion initial
configuration of a cloud.

Cheers,
Rob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Sylvain Bauza
Hi Joe,


2014-03-03 18:32 GMT+01:00 Joe Gordon joe.gord...@gmail.com:



 This sounds like something that belongs in nova, Phil Day has an
 elegant solution for this:
 https://blueprints.launchpad.net/nova/+spec/whole-host-allocation


This blueprint has already been addressed by the Climate team, and we
discussed it with Phil directly.
The blueprint was recently abandoned by its author, and Phil is trying
to focus on dedicated instances instead.

As we identified that this blueprint was not yet supported, we implemented
its logic directly within Climate. That said, don't confuse 2 different
things:
 - the locking process for isolating one compute node to a single tenant:
that should be done in Nova
 - the timetable for scheduling hosts and electing which ones are
appropriate: that must be done in Climate (and in the future it should use
Gantt as an external scheduler for electing from a pool of available hosts
in that timeframe)

Don't take this as saying that the resource isolation must be done within
Climate: I'm definitely convinced that this logic should be done at the
resource project level (Nova, Cinder, Neutron) and that Climate should use
their respective CLIs for requesting isolation.
The overall layer defining what will be available when, and what the
dependencies between projects are, still relies on a shared service, which
is Climate.



 Heat?

 I spin up dev instances all the time and have never had this problem
 in part because if I forget my quota will remind me.


How do you ensure that you won't run out of resources when firing up an
instance in 3 days? How can you guarantee that in the next couple of
days, you would be able to create a volume with 50GB of space?

I'm not saying that the current Climate implementation does all the work.
I'm just saying it's Climate's duty to look at Quality of Service aspects
of resource allocation (or say SLA if you prefer).



 Why does he need to reserve them in the future? When he wants an
 instance can't he just get one? As Sean said, what happens if someone
 has no free quota when the reservation kicks in?


It's the role of the resource plugin to manage capacity and ensure
everything can be charged correctly.
Regarding the virtual instance plugin logic, that's something that can be
improved, but consider that the instance is already created (though not
spawned) when the lease is created, so the quota is already decreased by
one.

With the compute hosts plugin, we manage availability thanks to a resource
planner based on a fixed set of resources (compute hosts enrolled within
Climate), so we can almost guarantee this (minus the host outages we
could get, of course).




 How is this different from 'nova boot?' When nova boot finishes the VM
 is completely ready to be used



Nova boot creates the VM as soon as the command is issued, while the
proposal here is to defer the booting itself until the lease start (which
can happen far later than when the lease is created).



  - if you're reserving resources far before you'll need them, it'll be
  cheaper

 Why? How does this save a provider money?


From a cloud operator's point of view, don't you think it's far preferable
to get advance notice of future capacity needs?
Don't you feel it would be interesting for the operator to propose a
business model like this?





 Reserved Instances provide a capacity reservation so that you can
 have confidence in your ability to launch the number of instances you
 have reserved when you need them.
 https://aws.amazon.com/ec2/purchasing-options/reserved-instances/

 Amazon does guarantee the resource will be available.  Amazon can
 guarantee the resource because they can terminate spot instances at
 will, but OpenStack doesn't have this concept today.


That's why we think there is a need for guaranteeing resource allocation
within OpenStack.
Spot instances can be envisaged, thanks to Climate, as a formal contract for
reserving resources that can be freed if needed.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-03 Thread Kurt Griffiths
Hi folks, I’d like to propose adding Fei Long Wang (flwang) as a core reviewer 
on the Marconi team. He has been contributing regularly over the past couple of 
months, and has proven to be a careful reviewer with good judgment.

All Marconi ATC’s, please respond with a +1 or –1.

Cheers,
Kurt G. | @kgriffs | Marconi PTL
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 9:32 AM, Russell Bryant rbry...@redhat.com wrote:
 There has been quite a bit of discussion about the future of the v3 API
 recently.  There has been growing support for the idea that we should
 change course and focus on evolving the existing v2 API instead of
 putting out a new major revision.  This message is a more complete
 presentation of that proposal that concludes that we can do what we
 really need to do with only the v2 API.

 Keeping only the v2 API requires some confidence that we can stick with
 it for years to come.  We don't want to be revisiting this any time
 soon.  This message addresses a bunch of different questions about how
 things would work if we only had v2.

 1) What about tasks?

 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.

 For example:

Accept: application/json;type=task

The current https://wiki.openstack.org/wiki/APIChangeGuidelines says
it's OK to add a new response header, so do we even need this?



 2) Versioning extensions

 One of the points being addressed in the v3 API was the ability to
 version extensions.  In v2, we have historically required new API
 extensions, even for changes that are backwards compatible.  We propose
 the following:

  - Add a version number to v2 API extensions
  - Allow backwards compatible changes to these API extensions,
 accompanied by a version number increase
  - Add the option to advertise an extension as deprecated, which can be
 used for all those extensions created only to advertise the availability
 of new input parameters

Can you elaborate on this last point.


 3) Core versioning

 Another pain point in API maintenance has been having to create API
 extensions for every small addition to the core API.  We propose that a
 version number be exposed for the core API that exposes the revision of
 the core API in use.  With that in place, backwards compatible changes
 such as adding a new property to a resource would be allowed when
 accompanied by a version number increase.

 With versioning of the core and API extensions, we will be able to cut
 down significantly on the number of changes that require an API
 extension without sacrificing the ability of a client to discover
 whether the addition is present or not.

++, looks like we may need to update
https://wiki.openstack.org/wiki/APIChangeGuidelines and make this
clear to downstream users.


 4) API Proxying

 We don't see proxying APIs as a problem.  It is the cost we pay for
 choosing to split apart projects after they are released.  We don't
 think it's fair to break users just because we have chosen to split
 apart the backend implementation.

 Further, the APIs that are proxied are frozen while those in the other
 projects are evolving.  We believe that as more features are available
 only via the native APIs in Cinder, Glance, and Neutron, users will
 naturally migrate over to the native APIs.

 Over time, we can ensure clients are able to query the API without the
 need to proxy by adding new formats or extensions that don't return data
 that needed to be proxied.

Can you expand on what this last paragraph means?

While I agree with not breaking users, I assume this means we won't
accept any new proxy APIs?


 5) Capitalization and Naming Consistency

 Some of the changes in the v3 API included changes to capitalization and
 naming for improved consistency.  If we stick with v2 only, we will not
 be able to make any of these changes.  However, we believe that not
 breaking any existing clients and not having to maintain a second API is
 worth not making these changes, or supporting some indirection to
 achieve this for newer clients if we decide it is important.


++


 6) Response Code Consistency and Correctness

 The v2 API has many places where the response code returned for a given
 operation is not strictly correct. For example a 200 is returned when a
 202 would be more appropriate. Correcting these issues should be
 considered for improving the future use of the API, however there does
 not seem to be any support for considering this a critical problem right
 now. There are two approaches that can be taken to improve this in v2:

 Just fix them. Right now, we return some codes that imply we have dealt
 with a request, when all we have done is queue it for processing (and
 vice versa). In the future, we may change the backend in such a way that
 a return code needs to change to continue to be accurate anyway. If we
 just assume that return codes may change to properly reflect the action
 that was taken, then we can correct existing 

Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 10:01 AM, Zane Bitter zbit...@redhat.com wrote:
 On 03/03/14 12:32, Joe Gordon wrote:

 - if you're reserving resources far before you'll need them, it'll be
 cheaper

 Why? How does this save a provider money?


 If an operator has zero information about the level of future demand, they
 will have to spend a *lot* of money on excess capacity or risk running out.
 If an operator has perfect information about future demand, then they need
 spend no money on excess capacity. Everywhere in between, the amount of
 extra money they need to spend is a non-increasing function of the amount of
 information they have. QED.

Sure, if an operator has perfect information about future demand they
won't need any excess capacity. But assuming you know some future
demand, how do you figure out how much of the future demand you actually
know? I can see this as a potential money saver, but it's unclear by how
much. The Amazon model for this is a reservation of at minimum a year;
I am not sure how useful short-term reservations would be in
determining future demand.


 cheers,
 Zane.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Dina Belova
Oh, Sylvain, you were first :)

I have just a few small things to add here: Joe, resource usage planning is
a great feature that, I believe, is not supported in OS services now.
Resource planning will allow cloud providers to react to future peaks of
load, because they *will know* about them. As Zane said, otherwise you need
to spend a lot of money keeping extra capacity in the common pool, or run a
real risk of running out of resources.

Everything else was described by Sylvain, I guess :)

-- Dina


On Mon, Mar 3, 2014 at 10:13 PM, Sylvain Bauza sylvain.ba...@gmail.comwrote:

 Hi Joe,


 2014-03-03 18:32 GMT+01:00 Joe Gordon joe.gord...@gmail.com:



 This sounds like something that belongs in nova, Phil Day has an
 elegant solution for this:
 https://blueprints.launchpad.net/nova/+spec/whole-host-allocation


 This blueprint has already been addressed by Climate team, and we
 discussed about this with Phil directly.
 This blueprint has been recently abandoned by its author and Phil is
 trying to focus on dedicated instances instead.

 As we identified this blueprint as non-supported yet, we implemented its
 logic directly within Climate. That said, don't confuse 2 different things
 :
  - the locking process for isolating one compute node to a single tenant :
 that should be done in Nova
  - the timetable for scheduling hosts and electing which ones are
 appropriate : that must be done in Climate (and in the future, should use
 Gantt as external scheduler for electing from a pool of available hosts on
 that timeframe)

 Don't let me say that the resource isolation must be done within Climate :
 I'm definitely conviced that this logic should be done on the resource
 project level (Nova, Cinder, Neutron) and Climate should use their
 respective CLI for asking isolation.
 The overall layer for defining what will available when, and what are the
 dependencies in between projects, still relies on a shared service, which
 is Climate.



 Heat?

 I spin up dev instances all the time and have never had this problem
 in part because if I forget my quota will remind me.


 How do you ensure that you won't run out of resources when firing up an
 instance in 3 days ? How can you guaranttee that in the next couple of
 days, you would be able to create a volume with 50GB of space ?

 I'm not saying that the current Climate implementation does all the work.
 I'm just saying it's duty of Climate to look at Quality of Service aspects
 for resource allocation (or say SLA if you prefer)



 Why does he need to reserve them in the future? When he wants an
 instance can't he just get one? As Sean said, what happens if someone
 has no free quota when the reservation kicks in?


 That's the role of the resource plugin to manage capacity and ensure
 everything can be charged correctly.
 Regarding the virtual instances plugin logic, that's something that can be
 improved, but consider the thing that the instance is already created but
 not spawned when the lease is created, so that the quota is decreased from
 one.

 With the compute hosts plugin, we manage availability thanks to a resource
 planner, based on a fixed set of resources (enrolled compute hosts within
 Climate), so we can almost guaranttee this (minus the hosts outages we
 could get, of course)




 How is this different from 'nova boot?' When nova boot finishes the VM
 is completely ready to be used



 Nova boot directly creates the VM when the command is issued, while the
 proposal here is to defer the booting itself only at the lease start (which
 can happen far later than when the lease is created)



  - if you're reserving resources far before you'll need them, it'll be
  cheaper

 Why? How does this save a provider money?


 From a cloud operator point of view, don't you think it's way preferrable
 to get feedback for future capacity needs ?
 Don't you feel it would be interesting for him to propose a business model
 like this ?





 Reserved Instances provide a capacity reservation so that you can
 have confidence in your ability to launch the number of instances you
 have reserved when you need them.
 https://aws.amazon.com/ec2/purchasing-options/reserved-instances/

 Amazon does guarantee the resource will be available.  Amazon can
 guarantee the resource because they can terminate spot instances at
 will, but OpenStack doesn't have this concept today.


 That's why we think there is a need for guarantteing resource allocation
 within Openstack.
 Spot instances can be envisaged thanks to Climate as a formal contract for
 reserving resources that can be freed if needed.

 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-03 Thread Cindy Pallares
On Mon, Mar 3, 2014 at 12:29 PM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:
Hi folks, I’d like to propose adding Fei Long Wang (flwang) as a 
core reviewer on the Marconi team. He has been contributing regularly 
over the past couple of months, and has proven to be a careful 
reviewer with good judgment.


All Marconi ATC’s, please respond with a +1 or –1.

Cheers,
Kurt G. | @kgriffs | Marconi PTL



+1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Sean Dague
On 03/03/2014 01:35 PM, Joe Gordon wrote:
 On Mon, Mar 3, 2014 at 10:01 AM, Zane Bitter zbit...@redhat.com wrote:
 On 03/03/14 12:32, Joe Gordon wrote:

 - if you're reserving resources far before you'll need them, it'll be
 cheaper

 Why? How does this save a provider money?


 If an operator has zero information about the level of future demand, they
 will have to spend a *lot* of money on excess capacity or risk running out.
 If an operator has perfect information about future demand, then they need
 spend no money on excess capacity. Everywhere in between, the amount of
 extra money they need to spend is a non-increasing function of the amount of
 information they have. QED.
 
 Sure. if an operator has perfect information about future demand they
 won't need any excess capacity. But assuming you know some future
 demand, how do you figure out how much of the future demand you know?
 But sure I can see this as a potential money saver, but unclear by how
 much. The Amazon model for this is a reservation is at minimum a year,
 I am not sure how useful short term reservations would be in
 determining future demand.

There are other useful things with reservations though. In a private cloud
context the classic one is running the numbers for close of business. Or a
software team that's working towards a release might want to preallocate
resources for longer scale runs during a particular week.

Reservations can really be about global policy, giving some tenants more
priority in getting resources than others (because you pre-allocate them).

I also know that with a lot of the HPC teams using OpenStack, this is a
fundamental part of scheduling. Not just the when, but the how long.
Having systems automatically get reaped after a certain amount of time
is something they very much want.

So I think the general idea has merit. I just think we need to make
sure it integrates well with the rest of OpenStack, which I believe
means strong coupling to the scheduler.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 10:13 AM, Sylvain Bauza sylvain.ba...@gmail.com wrote:
 Hi Joe,


 2014-03-03 18:32 GMT+01:00 Joe Gordon joe.gord...@gmail.com:



 This sounds like something that belongs in nova, Phil Day has an
 elegant solution for this:
 https://blueprints.launchpad.net/nova/+spec/whole-host-allocation


 This blueprint has already been addressed by Climate team, and we discussed
 about this with Phil directly.
 This blueprint has been recently abandoned by its author and Phil is trying
 to focus on dedicated instances instead.

 As we identified this blueprint as non-supported yet, we implemented its
 logic directly within Climate. That said, don't confuse 2 different things :
  - the locking process for isolating one compute node to a single tenant :
 that should be done in Nova
  - the timetable for scheduling hosts and electing which ones are
 appropriate : that must be done in Climate (and in the future, should use
 Gantt as external scheduler for electing from a pool of available hosts on
 that timeframe)

 Don't let me say that the resource isolation must be done within Climate :
 I'm definitely conviced that this logic should be done on the resource
 project level (Nova, Cinder, Neutron) and Climate should use their
 respective CLI for asking isolation.
 The overall layer for defining what will available when, and what are the
 dependencies in between projects, still relies on a shared service, which is
 Climate.



 Heat?

 I spin up dev instances all the time and have never had this problem
 in part because if I forget my quota will remind me.


 How do you ensure that you won't run out of resources when firing up an
 instance in 3 days ? How can you guaranttee that in the next couple of days,
 you would be able to create a volume with 50GB of space ?

I have access to 2 public clouds that both would be very embarrassed
if they ran out of capacity, so they make sure that doesn't happen.


 I'm not saying that the current Climate implementation does all the work.
 I'm just saying it's duty of Climate to look at Quality of Service aspects
 for resource allocation (or say SLA if you prefer)



 Why does he need to reserve them in the future? When he wants an
 instance can't he just get one? As Sean said, what happens if someone
 has no free quota when the reservation kicks in?


 That's the role of the resource plugin to manage capacity and ensure
 everything can be charged correctly.
 Regarding the virtual instances plugin logic, that's something that can be
 improved, but consider the thing that the instance is already created but
 not spawned when the lease is created, so that the quota is decreased from
 one.

 With the compute hosts plugin, we manage availability thanks to a resource
 planner, based on a fixed set of resources (enrolled compute hosts within
 Climate), so we can almost guaranttee this (minus the hosts outages we could
 get, of course)

I don't follow; how does Climate make these guarantees?





 How is this different from 'nova boot?' When nova boot finishes the VM
 is completely ready to be used



 Nova boot directly creates the VM when the command is issued, while the
 proposal here is to defer the booting itself only at the lease start (which
 can happen far later than when the lease is created)

Why can't I just run 'nova boot' when I want the lease to start?




  - if you're reserving resources far before you'll need them, it'll be
  cheaper

 Why? How does this save a provider money?


 From a cloud operator point of view, don't you think it's way preferrable to
 get feedback for future capacity needs ?
 Don't you feel it would be interesting for him to propose a business model
 like this ?


Not really; the Amazon model, where a reservation is long term, makes
sense, but I don't see how short-term reservations would help cloud
operators.






 Reserved Instances provide a capacity reservation so that you can
 have confidence in your ability to launch the number of instances you
 have reserved when you need them.
 https://aws.amazon.com/ec2/purchasing-options/reserved-instances/

 Amazon does guarantee the resource will be available.  Amazon can
 guarantee the resource because they can terminate spot instances at
 will, but OpenStack doesn't have this concept today.


 That's why we think there is a need for guarantteing resource allocation
 within Openstack.
 Spot instances can be envisaged thanks to Climate as a formal contract for
 reserving resources that can be freed if needed.

I like this idea, but something like this should be in Nova. I'm not sure
how this concept of spot instances works for other resources such as object
or volume storage.


 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-03 Thread Malini Kamalambal
+1

From: Cindy Pallares cindy.pallar...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, March 3, 2014 1:41 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang)
as a core reviewer

On Mon, Mar 3, 2014 at 12:29 PM, Kurt Griffiths
kurt.griffi...@rackspace.com wrote:
Hi folks, I’d like to propose adding Fei Long Wang (flwang) as a core reviewer 
on the Marconi team. He has been contributing regularly over the past couple of 
months, and has proven to be a careful reviewer with good judgment.

All Marconi ATC’s, please respond with a +1 or –1.

Cheers,
Kurt G. | @kgriffs | Marconi PTL

+1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Russell Bryant
On 03/03/2014 01:27 PM, Joe Gordon wrote:
 On Mon, Mar 3, 2014 at 9:32 AM, Russell Bryant rbry...@redhat.com wrote:
 1) What about tasks?

 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.

 For example:

Accept: application/json;type=task
 
 The current https://wiki.openstack.org/wiki/APIChangeGuidelines says
 its OK add a new response header, so do we even need this?

This is for the case that we want to actually change the response body.

  - Add the option to advertise an extension as deprecated, which can be
 used for all those extensions created only to advertise the availability
 of new input parameters
 
 Can you elaborate on this last point.

We have previously required API extensions for adding some things like
input parameters or attributes on a resource.  The addition of
versioning for extensions allows us to do this without adding extensions.
The point is just that it would be nice if we could mark extensions in
this category as deprecated, and possibly remove them, since we can express
these things in terms of versions instead.


 3) Core versioning

 Another pain point in API maintenance has been having to create API
 extensions for every small addition to the core API.  We propose that a
 version number be exposed for the core API that exposes the revision of
 the core API in use.  With that in place, backwards compatible changes
 such as adding a new property to a resource would be allowed when
 accompanied by a version number increase.

 With versioning of the core and API extensions, we will be able to cut
 down significantly on the number of changes that require an API
 extension without sacrificing the ability of a client to discover
 whether the addition is present or not.
 
 ++, looks like we may need to update
 https://wiki.openstack.org/wiki/APIChangeGuidelines and make this
 clear to downstream users.

Right, just shooting for some agreement first.


 4) API Proxying

 We don't see proxying APIs as a problem.  It is the cost we pay for
 choosing to split apart projects after they are released.  We don't
 think it's fair to break users just because we have chosen to split
 apart the backend implementation.

 Further, the APIs that are proxied are frozen while those in the other
 projects are evolving.  We believe that as more features are available
 only via the native APIs in Cinder, Glance, and Neutron, users will
 naturally migrate over to the native APIs.

 Over time, we can ensure clients are able to query the API without the
 need to proxy by adding new formats or extensions that don't return data
 that needed to be proxied.
 
 Can you expand on what this last paragraph means?
 
 While I agree in not breaking users. I assume this means we won't
 accept any new proxy APIs?

If proxying is required to make an existing API continue to work, we
should accept it.

 6) Response Code Consistency and Correctness

 The v2 API has many places where the response code returned for a given
 operation is not strictly correct. For example a 200 is returned when a
 202 would be more appropriate. Correcting these issues should be
 considered for improving the future use of the API, however there does
 not seem to be any support for considering this a critical problem right
 now. There are two approaches that can be taken to improve this in v2:

 Just fix them. Right now, we return some codes that imply we have dealt
 with a request, when all we have done is queue it for processing (and
 vice versa). In the future, we may change the backend in such a way that
 a return code needs to change to continue to be accurate anyway. If we
 just assume that return codes may change to properly reflect the action
 that was taken, then we can correct existing errors and move on.
 
 Changing return codes always scares me, we risk breaking code that
 says if '==200'. Although having versioned backwards compatible APIs
 makes this a little better.

See below ...

 Accept them as wrong but not critically so. With this approach, we can
 strive for correctness in the future without changing behavior of our
 existing APIs. Nobody seems to complain about them right now, so
 changing them seems to be little gain. If the client begins exposing a
 version header (which we need for other things) then we could
 alternately start returning accurate codes for those clients.
 
 Wait what? client needs version headers? Can you expand

Again see below ...

 ++ to accepting them as wrong and moving on.

 The key point here is that we see a way forward with this in the v2 API
 regardless of which path we choose.

This last 

Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 10:39 AM, Dina Belova dbel...@mirantis.com wrote:
 Oh, Sylvain, you were first :)

 I have just small things to add here: Joe, resource usage planning is great
 feature, that, I believe, is not supported in OS services now. Resource
 planning will allow cloud providers to react on future picks of loads,
 because they *will know* about that. As Zane said, otherwise you need to
 spend much money keeping much extra capacity in common pool, or have real
 risks to run out of resources.

I think OpenStack supports resource planning today; otherwise how
could public clouds do it? You can look at usage trends, make an
educated guess at what future usage will be, and plan accordingly.

I like Amazon's concept of reserved instances, where reservations are
long term (1 to 3 years)
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/

 Everything else was described by Sylvain, I guess :)

 -- Dina


 On Mon, Mar 3, 2014 at 10:13 PM, Sylvain Bauza sylvain.ba...@gmail.com
 wrote:

 Hi Joe,


 2014-03-03 18:32 GMT+01:00 Joe Gordon joe.gord...@gmail.com:



 This sounds like something that belongs in nova, Phil Day has an
 elegant solution for this:
 https://blueprints.launchpad.net/nova/+spec/whole-host-allocation


 This blueprint has already been addressed by Climate team, and we
 discussed about this with Phil directly.
 This blueprint has been recently abandoned by its author and Phil is
 trying to focus on dedicated instances instead.

 As we identified this blueprint as non-supported yet, we implemented its
 logic directly within Climate. That said, don't confuse 2 different things :
  - the locking process for isolating one compute node to a single tenant :
 that should be done in Nova
  - the timetable for scheduling hosts and electing which ones are
 appropriate : that must be done in Climate (and in the future, should use
 Gantt as external scheduler for electing from a pool of available hosts on
 that timeframe)

 Don't let me say that the resource isolation must be done within Climate :
 I'm definitely conviced that this logic should be done on the resource
 project level (Nova, Cinder, Neutron) and Climate should use their
 respective CLI for asking isolation.
 The overall layer for defining what will available when, and what are the
 dependencies in between projects, still relies on a shared service, which is
 Climate.



 Heat?

 I spin up dev instances all the time and have never had this problem
 in part because if I forget my quota will remind me.


 How do you ensure that you won't run out of resources when firing up an
 instance in 3 days ? How can you guaranttee that in the next couple of days,
 you would be able to create a volume with 50GB of space ?

 I'm not saying that the current Climate implementation does all the work.
 I'm just saying it's duty of Climate to look at Quality of Service aspects
 for resource allocation (or say SLA if you prefer)



 Why does he need to reserve them in the future? When he wants an
 instance can't he just get one? As Sean said, what happens if someone
 has no free quota when the reservation kicks in?


 It's the role of the resource plugin to manage capacity and ensure
 everything can be charged correctly.
 Regarding the virtual instances plugin logic, that's something that can be
 improved, but consider that the instance is already created (though
 not spawned) when the lease is created, so the quota is already decreased
 by one.

 With the compute hosts plugin, we manage availability thanks to a resource
 planner, based on a fixed set of resources (enrolled compute hosts within
 Climate), so we can almost guarantee this (minus the host outages we could
 get, of course)




 How is this different from 'nova boot?' When nova boot finishes the VM
 is completely ready to be used



 Nova boot directly creates the VM when the command is issued, while the
 proposal here is to defer the booting itself only at the lease start (which
 can happen far later than when the lease is created)



  - if you're reserving resources far before you'll need them, it'll be
  cheaper

 Why? How does this save a provider money?


 From a cloud operator's point of view, don't you think it's way preferable
 to get feedback about future capacity needs ?
 Don't you feel it would be interesting for him to propose a business model
 like this ?





 Reserved Instances provide a capacity reservation so that you can
 have confidence in your ability to launch the number of instances you
 have reserved when you need them.
 https://aws.amazon.com/ec2/purchasing-options/reserved-instances/

 Amazon does guarantee the resource will be available.  Amazon can
 guarantee the resource because they can terminate spot instances at
 will, but OpenStack doesn't have this concept today.


 That's why we think there is a need for guaranteeing resource allocation
 within OpenStack.
 Spot instances can be envisaged thanks to Climate as a formal contract for
 reserving 

Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Andrew Laski

On 03/03/14 at 10:27am, Joe Gordon wrote:

On Mon, Mar 3, 2014 at 9:32 AM, Russell Bryant rbry...@redhat.com wrote:

There has been quite a bit of discussion about the future of the v3 API
recently.  There has been growing support for the idea that we should
change course and focus on evolving the existing v2 API instead of
putting out a new major revision.  This message is a more complete
presentation of that proposal that concludes that we can do what we
really need to do with only the v2 API.

Keeping only the v2 API requires some confidence that we can stick with
it for years to come.  We don't want to be revisiting this any time
soon.  This message addresses a bunch of different questions about how
things would work if we only had v2.

1) What about tasks?

In some cases, the proposed integration of tasks is backwards
compatible.  A task ID will be added to a header.  The biggest point of
debate was if and how we would change the response for creating a
server.  For tasks in v2, we would not change the response by default.
The task ID would just be in a header.  However, if and when the client
starts exposing version support information, we can provide an
alternative/preferred response based on tasks.

For example:

   Accept: application/json;type=task


The current https://wiki.openstack.org/wiki/APIChangeGuidelines says
its OK add a new response header, so do we even need this?


This would be needed in order to add more than the response header.  
Ideally the task(s) would be returned in the response body as well.  But 
we can get started by using the header until we have a way for clients 
to indicate a request version.
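
(For illustration, a request opting in to the task-based response might look
roughly like this -- the type=task media-type parameter is only the proposal
above, not something the current v2 API honors, and the other values are
placeholders:)

    POST /v2/{tenant_id}/servers HTTP/1.1
    Host: nova-api:8774
    X-Auth-Token: {token}
    Content-Type: application/json
    Accept: application/json;type=task

    {"server": {"name": "demo", "imageRef": "IMAGE_UUID", "flavorRef": "1"}}

A client that does not send the parameter would keep getting today's response
body, with the task ID only in the header.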







2) Versioning extensions

One of the points being addressed in the v3 API was the ability to
version extensions.  In v2, we have historically required new API
extensions, even for changes that are backwards compatible.  We propose
the following:

 - Add a version number to v2 API extensions
 - Allow backwards compatible changes to these API extensions,
accompanied by a version number increase
 - Add the option to advertise an extension as deprecated, which can be
used for all those extensions created only to advertise the availability
of new input parameters


Can you elaborate on this last point.


There are some extensions whose sole purpose is to enable accepting a 
new request parameter or adding a field to a response.  Some of them are 
just feature flags for other extensions (user_quotas), and some modify 
the response themselves (flavor_swap).  In both cases the functionality 
could be merged into another extension with a version bump and then 
they could be marked as deprecated.
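
(Purely as a sketch of what that could look like in the extension listing --
the "version" and "deprecated" field names here are assumptions, not an agreed
format, and the other values are placeholders for what GET
/v2/{tenant_id}/extensions returns today:)

    {
        "extension": {
            "name": "UserQuotas",
            "alias": "os-user-quotas",
            "description": "...",
            "updated": "...",
            "version": 2,
            "deprecated": true
        }
    }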






3) Core versioning

Another pain point in API maintenance has been having to create API
extensions for every small addition to the core API.  We propose that a
version number be exposed for the core API that exposes the revision of
the core API in use.  With that in place, backwards compatible changes
such as adding a new property to a resource would be allowed when
accompanied by a version number increase.

With versioning of the core and API extensions, we will be able to cut
down significantly on the number of changes that require an API
extension without sacrificing the ability of a client to discover
whether the addition is present or not.


++, looks like we may need to update
https://wiki.openstack.org/wiki/APIChangeGuidelines and make this
clear to downstream users.



4) API Proxying

We don't see proxying APIs as a problem.  It is the cost we pay for
choosing to split apart projects after they are released.  We don't
think it's fair to break users just because we have chosen to split
apart the backend implementation.

Further, the APIs that are proxied are frozen while those in the other
projects are evolving.  We believe that as more features are available
only via the native APIs in Cinder, Glance, and Neutron, users will
naturally migrate over to the native APIs.

Over time, we can ensure clients are able to query the API without the
need to proxy by adding new formats or extensions that don't return data
that needed to be proxied.


Can you expand on what this last paragraph means?

While I agree with not breaking users, I assume this means we won't
accept any new proxy APIs?



5) Capitalization and Naming Consistency

Some of the changes in the v3 API included changes to capitalization and
naming for improved consistency.  If we stick with v2 only, we will not
be able to make any of these changes.  However, we believe that not
breaking any existing clients and not having to maintain a second API is
worth not making these changes, or supporting some indirection to
achieve this for newer clients if we decide it is important.



++



6) Response Code Consistency and Correctness

The v2 API has many places where the response code returned for a given
operation is not strictly correct. For example a 200 is returned when a
202 would be more appropriate. Correcting these 

Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-03-03 Thread Ziad Sawalha
Guido kindly updated PEP-257 for us[1]. So now the hacking guide content 
accurately matches PEP-257 (no extra line required at the end of a multi-line 
docstring).

This alone should resolve the patch and comments that initiated this discussion.

With regards to automating the checks and gates, we’ll experiment with gating 
on this using the flake8 pep257 plugin[2] and submit the jobs that come out of 
that to stackforge. There are still some patches[3] to get in to pep257 and the 
flake8 plugin (for PEP-257 and for compatibility with the latest pep8 version).

Z

[1] https://codereview.appspot.com/69870043
[2] https://pypi.python.org/pypi/flake8-docstrings/0.1.4
[3] https://github.com/GreenSteam/pep257/pull/64
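
For anyone who wants to run the same checks locally before a gate job exists,
something along these lines should be close to what the job would run (package
names are from the links above; the exact select/ignore list is still to be
worked out):

    $ pip install flake8 flake8-docstrings==0.1.4
    $ flake8 --select=D <your_project>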


On Feb 24, 2014, at 6:56 PM, Ziad Sawalha 
ziad.sawa...@rackspace.commailto:ziad.sawa...@rackspace.com wrote:

Seeking some clarification on the OpenStack hacking guidelines for multi-string 
docstrings.

Q: In OpenStack projects, is a blank line before the triple closing quotes 
recommended (and therefore optional - this is what PEP-257 seems to suggest), 
required, or explicitly rejected (which could be one way to interpret the 
hacking guidelines since they omit the blank line).

This came up in a commit review, and here are some references on the topic:

Quoting PEP-257: “The BDFL [3] recommends inserting a blank line between the 
last paragraph in a multi-line docstring and its closing quotes, placing the 
closing quotes on a line by themselves. This way, Emacs' fill-paragraph command 
can be used on it.”

Sample from pep257 (with extra blank line):

def complex(real=0.0, imag=0.0):
    """Form a complex number.

    Keyword arguments:
    real -- the real part (default 0.0)
    imag -- the imaginary part (default 0.0)

    """
    if imag == 0.0 and real == 0.0: return complex_zero
    ...

The multi-line docstring example in 
http://docs.openstack.org/developer/hacking/ has no extra blank line before the 
ending triple-quotes:

"""A multi line docstring has a one-line summary, less than 80 characters.

Then a new paragraph after a newline that explains in more detail any
general information about the function, class or method. Example usages
are also great to have here if it is a complex class for function.

When writing the docstring for a class, an extra line should be placed
after the closing quotations. For more in-depth explanations for these
decisions see http://www.python.org/dev/peps/pep-0257/

If you are going to describe parameters and return values, use Sphinx, the
appropriate syntax is as follows.

:param foo: the foo parameter
:param bar: the bar parameter
:returns: return_type -- description of the return value
:returns: description of the return value
:raises: AttributeError, KeyError
"""


Regards,

Ziad

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-03 Thread Roman Podoliaka
Hi all,

This is just one more example of MySQL not having production-ready
defaults. The original idea was to force setting the SQL mode to
TRADITIONAL in code, in the projects using oslo.db code, once they are ready
(unit and functional tests pass). So the warning was actually for
developers rather than for users.

Syncing the latest oslo.db code will let users set any SQL mode
they like (the default is TRADITIONAL now, so the warning is gone).
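
For anyone who wants to set it explicitly in the meantime, this is roughly how
it looks in a service's config file, assuming the synced copy keeps the option
name the current oslo.db code exposes:

    [database]
    mysql_sql_mode = TRADITIONAL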

Thanks,
Roman

On Mar 2, 2014 8:36 PM, John Griffith john.griff...@solidfire.com wrote:




 On Sun, Mar 2, 2014 at 7:42 PM, Sean Dague s...@dague.net wrote:

 Coming in at slightly less than 1 million log lines in the last 7 days:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOX0=

 This application has not enabled MySQL traditional mode, which means
 silent data corruption may occur

 This is being generated by  *.openstack.common.db.sqlalchemy.session in
 at least nova, glance, neutron, heat, ironic, and savana


http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOSwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6Im1vZHVsZSJ9


 At any rate, it would be good if someone that understood the details
 here could weigh in about whether this is really a true WARNING that
 needs to be fixed or if it's not, and just needs to be silenced.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 I came across this earlier this week when I was looking at this in
Cinder, haven't completely gone into detail here, but maybe Florian or Doug
have some insight?

 https://bugs.launchpad.net/oslo/+bug/1271706

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Anne Gentle
On Mon, Mar 3, 2014 at 11:04 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Mar 3, 2014 at 6:27 AM, Anne Gentle a...@openstack.org wrote:
 
 
  On Mon, Mar 3, 2014 at 8:20 AM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
  On Mon, Mar 3, 2014 at 4:42 AM, Sylvain Bauza sylvain.ba...@bull.net
  wrote:
   Hi Joe,
  
   Thanks for your reply, I'll try to further explain.
  
  
   Le 03/03/2014 05:33, Joe Gordon a écrit :
  
   On Sun, Mar 2, 2014 at 11:32 AM, Dina Belova dbel...@mirantis.com
   wrote:
  
   Hello, folks!
  
   I'd like to request Climate project review for incubation. Here is
   official
   incubation application:
  
   https://wiki.openstack.org/wiki/Climate/Incubation
  
   I'm unclear on what Climate is trying to solve. I read the 'Detailed
   Description' from the link above, and it states Climate is trying to
   solve two uses cases (and the more generalized cases of those).
  
   1) Compute host reservation (when user with admin privileges can
   reserve hardware resources that are dedicated to the sole use of a
   tenant)
   2) Virtual machine (instance) reservation (when user may ask
   reservation service to provide him working VM not necessary now, but
   also in the future)
  
   Climate is born from the idea of dedicating compute resources to a
   single
   tenant or user for a certain amount of time, which was not yet
   implemented
   in Nova: how as an user, can I ask Nova for one compute host with
   certain
   specs to be exclusively allocated to my needs, starting in 2 days and
   being
   freed in 5 days ?
  
   Albeit the exclusive resource lock can be managed on the Nova side,
   there is
   currently no possibilities to ensure resource planner.
  
   Of course, and that's why we think Climate can also stand by its own
   Program, resource reservation can be seen on a more general way : what
   about
   reserving an Heat stack with its volume and network nested resources ?
  
  
   You want to support being able to reserve an instance in the future.
   As a cloud operator how do I take advantage of that information? As a
   cloud consumer, what is the benefit? Today OpenStack supports both
   uses cases, except it can't request an Instance for the future.
  
  
   Again, that's not only reserving an instance, but rather a complex mix
   of
   resources. At the moment, we do provide way to reserve virtual
 instances
   by
   shelving/unshelving them at the lease start, but we also give
   possibility to
   provide dedicated compute hosts. Considering it, the logic of resource
   allocation and scheduling (take the word as resource planner, in order
   not
   to confuse with Nova's scheduler concerns) and capacity planning is
 too
   big
   to fall under the Compute's umbrella, as it has been agreed within the
   Summit talks and periodical threads.
 
  Capacity planning not falling under Compute's umbrella is news to me,
  are you referring to Gantt and scheduling in general? Perhaps I don't
  fully understand the full extent of what 'capacity planning' actually
  is.
 
  
   From the user standpoint, there are multiple ways to integrate with
   Climate
   in order to get Capacity Planning capabilities. As you perhaps
 noticed,
   the
   workflow for reserving resources is different from one plugin to
   another.
   Either we say the user has to explicitly request for dedicated
 resources
   (using Climate CLI, see dedicate compute hosts allocation), or we
   implicitly
   integrate resource allocation from the Nova API (see virtual instance
   API
   hook).
 
  I don't see how Climate reserves resources is relevant to the user.
 
  
   We truly accept our current implementation as a first prototype, where
   scheduling decisions can be improved (possibly thanks to some tight
   integration with a future external Scheduler aaS, hello Gantt), where
   also
   resource isolation and preemption must also be integrated with
   subprojects
   (we're currently seeing how to provision Cinder volumes and Neutron
   routers
   and nets), but anyway we still think there is a (IMHO big) room for
   resource
   and capacity management on its own project.
  
   Hoping it's clearer now,
 
  Unfortunately that doesn't clarify things for me.
 
  From the user's point of view what is the benefit from making a
  reservation in the future? Versus what Nova supports today, asking for
  an instance in the present.
 
  Same thing from the operator's perspective,  what is the benefit of
  taking reservations for the future?
 
  This whole model is unclear to me because as far as I can tell no
  other clouds out there support this model, so I have nothing to
  compare it to.
 
 
  Hi Joe,
  I think it's meant to save consumers money by pricing instances based on
  today's prices.
 
  https://aws.amazon.com/ec2/purchasing-options/reserved-instances/


 The reserved concept in Amazon is very different from the one
 proposed here. The Amazon concept doesn't support saying I will need
 an 

Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-03-03 Thread Robert Collins
On 3 March 2014 23:12, Thierry Carrez thie...@openstack.org wrote:
 James Slagle wrote:
 I'd like to ask that the following repositories for TripleO be included
 in next week's cutting of icehouse-3:

 http://git.openstack.org/openstack/tripleo-incubator
 http://git.openstack.org/openstack/tripleo-image-elements
 http://git.openstack.org/openstack/tripleo-heat-templates
 http://git.openstack.org/openstack/diskimage-builder
 http://git.openstack.org/openstack/os-collect-config
 http://git.openstack.org/openstack/os-refresh-config
 http://git.openstack.org/openstack/os-apply-config

 Are you willing to run through the steps on the How_To_Release wiki for
 these repos, or should I do it next week? Just let me know how or what
 to coordinate. Thanks.

 I looked into more details and there are a number of issues as TripleO
 projects were not really originally configured to be released.

 First, some basic jobs are missing, like a tarball job for
 tripleo-incubator.

Do we need one? tripleo-incubator has no infrastructure to make
tarballs. So that has to be created de novo, and it's not really
structured to be sdistable - it's a proving ground. This needs more
examination. Slagle could however use a git branch effectively.

 Then the release scripts are made for integrated projects, which follow
 a number of rules that TripleO doesn't follow:

 - One Launchpad project per code repository, under the same name (here
 you have tripleo-* under tripleo + diskimage-builder separately)

Huh? diskimage-builder is a separate project, with a separate repo. No
conflation. Same for os-*-config, though I haven't made a LP project
for os-cloud-config yet (but its not a dependency yet either).

 - The person doing the release should be a driver (or release
 manager) for the project (here, Robert is the sole driver for
 diskimage-builder)

Will fix.

 - Projects should have an icehouse series and a icehouse-3 milestone

James should be able to do this once we have drivers fixed up.

 Finally the person doing the release needs to have push annotated tags
 / create reference permissions over refs/tags/* in Gerrit. This seems
 to be missing for a number of projects.

We have this for all the projects we release; probably not incubator
because *we don't release it*- and we had no intent of doing releases
for tripleo-incubator - just having a stable branch so that there is a
thing RH can build rpms from is the key goal.

 In all cases I'd rather limit myself to incubated/integrated projects,
 rather than extend to other projects, especially on a busy week like
 feature freeze week. So I'd advise that for icehouse-3 you follow the
 following simplified procedure:

 - Add missing tarball-creation jobs
 - Add missing permissions for yourself in Gerrit
 - Skip milestone-proposed branch creation
 - Push tag on master when ready (this will result in tarballs getting
 built at tarballs.openstack.org)

 Optionally:
 - Create icehouse series / icehouse-3 milestone for projects in LP
 - Manually create release and upload resulting tarballs to Launchpad
 milestone page, under the projects that make the most sense (tripleo-*
 under tripleo, etc)

 I'm still a bit confused with the goals here. My original understanding
 was that TripleO was explicitly NOT following the release cycle. How
 much of the integrated projects release process do you want to reuse ?
 We do a feature freeze on icehouse-3, then bugfix on master until -rc1,
 then we cut an icehouse release branch (milestone-proposed), unfreeze
 master and let it continue as Juno. Is that what you want to do too ? Do
 you want releases ? Or do you actually just want stable branches ?

This is the etherpad:
https://etherpad.openstack.org/p/icehouse-updates-stablebranches -
that captures our notes from the summit.

TripleO as a whole is not committing to stable maintenance or API
service integrated releases as yet: tuskar is our API service which
will follow that process next cycle, but right now it has its guts
open undergoing open heart surgery. Everything else we do semver on -
like the openstack clients (novaclient etc) - and our overall process
is aimed at moving things from incubator into stable trees as they
mature. We'll be stabilising the interfaces in tripleo-heat-templates
and tripleo-image-elements somehow in future too - but we don't have
good answers to some aspects there yet.

BUT

We want to support members of the TripleO community that are
interested in shipping stable editions of TripleO even while it still
building up to being a product, which James is leading the effort on -
so we need to find reasonable compromises on areas of friction in the
interim.

Cheers,
Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-03 Thread Vishvananda Ishaya

On Mar 3, 2014, at 6:48 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote:
 Having done some work with MySQL (specifically around similar data
 sets) and discussing the changes with some former coworkers (MySQL
 experts) I am inclined to believe the move from varchar  to binary
 absolutely would increase performance like this.
 
 
 However, I would like to get some real benchmarks around it and if it
 really makes a difference we should get a smart UUID type into the
 common SQLlibs (can pgsql see a similar benefit? Db2?) I think this
 conversation. Should be split off from the keystone one at hand - I
 don't want valuable information / discussions to get lost.
 
 No disagreement on either point. However, this should be done after the
 standardization to a UUID user_id in Keystone, as a separate performance
 improvement patch. Agree?
 
 Best,
 -jay

-1

The expectation in other projects has been that project_ids and user_ids are 
opaque strings. I need to see more clear benefit to enforcing stricter typing 
on these, because I think it might break a lot of things.

Vish



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 10:43 AM, Sean Dague s...@dague.net wrote:
 On 03/03/2014 01:35 PM, Joe Gordon wrote:
 On Mon, Mar 3, 2014 at 10:01 AM, Zane Bitter zbit...@redhat.com wrote:
 On 03/03/14 12:32, Joe Gordon wrote:

 - if you're reserving resources far before you'll need them, it'll be
 cheaper

 Why? How does this save a provider money?


 If an operator has zero information about the level of future demand, they
 will have to spend a *lot* of money on excess capacity or risk running out.
 If an operator has perfect information about future demand, then they need
 spend no money on excess capacity. Everywhere in between, the amount of
 extra money they need to spend is a non-increasing function of the amount of
 information they have. QED.

 Sure. If an operator has perfect information about future demand they
 won't need any excess capacity. But assuming you know some future
 demand, how do you figure out how much of the future demand you know?
 But sure I can see this as a potential money saver, but unclear by how
 much. The Amazon model for this is a reservation is at minimum a year,
 I am not sure how useful short term reservations would be in
 determining future demand.

 There are other useful things with reservations though. In a private
 context the classic one is running number for close of business. Or
 software team that's working towards a release might want to preallocate
 resources for longer scale runs during a particular week.

Why can't they pre-allocate now?


 Reservation can really be about global policy giving some tenants more
 priority in getting resources than others (because you pre-allocate them).

 I also know that with a lot of the HPC teams using OpenStack, this is a
 fundamental part of scheduling. Not just the when, but the how long.
 Having systems automatically get reaped after a certain amount of time
 is something they very much want.

Agreed, I think this should either be part of Nova or Heat directly.


 So I think the general idea have merrit. I just think we need to make
 sure it integrates well with the rest of OpenStack, which I believe
 means strong coupling to the scheduler.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-03 Thread Mark Washenberger
On Sat, Mar 1, 2014 at 12:51 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-02-28 at 15:25 -0800, Mark Washenberger wrote:
  I believe we have some agreement here. Other openstack services should
  be able to use a strongly typed identifier for users. I just think if
  we want to go that route, we probably need to create a new field to
  act as the proper user uuid, rather than repurposing the existing
  field. It sounds like many existing LDAP deployments would break if we
  repurpose the existing field.

 Hi Mark,

 Please see my earlier response on this thread. I am proposing putting
 external identifiers into a mapping table that would correlate a
 Keystone UUID user ID with external identifiers (of variable length).


The thing you seem to be missing is that the current user-id attribute is
an external identifier depending on the identity backend you're using
today. For example in the LDAP driver it is the CN by default (which is
ridiculous for a large number of reasons, but let's leave those aside.) So
if you want to create a new, strongly typed internal uuid identifier that
makes the db performance scream, more power to you. But it's going to have
to be a new field.




 Once authentication has occurred (with any auth backend including LDAP),
 Keystone would only communicate to the other OpenStack services the UUID
 user ID from Keystone. This would indeed require a migration to each
 non-Keystone service that stores the user IDs as-is from Keystone
 currently (such as Glance or Nova).

 Once the migrations are run, then only UUID values would be stored, and
 further migrations could be run that would streamline the columns that
 stores these user IDs to a more efficient CHAR(32) or BINARY(16)
 internal storage format.

 Hope that clears things up.

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-03-03 Thread David Peraza
Thanks Khanh,

I see the potential issue with using threads. Thanks for pointing that out. On using 
containers, that sounds like a cool configuration, but it should have a bigger 
footprint on the host resources than just a separate service instance like I'm 
running. I have to admit that 100 fake computes per physical host is good though. 
How big is your physical host? I'm running a 4 Gig 4 CPU VM. I suspect your 
physical system is much better equipped. 

Regards,
David Peraza | Openstack Solutions Architect
david_per...@persistentsys.com | Cell: (305)766-2520 
Persistent Systems Inc. | Partners in Innovation | www.persistentsys.com

-Original Message-
From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com] 
Sent: Tuesday, February 25, 2014 3:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for 
scheduler testing

  I could do that but I think I need to be able to scale more without 
  the need to use this much resources. I will like to simulate a cloud 
  of 100 maybe
  1000 compute nodes that do nothing (Fake driver) this should not 
  take this much memory. Anyone knows of a more efficient way to  
  simulate many computes? I was thinking changing the Fake driver to 
  report many compute services in different threads instead of having 
  to spawn a process per compute service. Any other ideas?

I'm not sure using threads is a good idea. We need a dedicated resource pool 
for each compute. If the threads share the same resource pool, then every new 
VM will change the available resources on all computes, which may lead to 
unexpected and unpredictable scheduling results. For instance, RamWeigher may return 
the same compute twice instead of spreading, because each time it finds 
that the computes have the same free_ram.

Using compute inside LXC, I created 100 computes per physical host. Here is 
what I did, it's very simple:
  - Create an LXC container with a logical volume
  - Install a fake nova-compute inside the LXC
  - Make a booting script that modifies its nova.conf to use its own IP address and 
starts nova-compute (see the sketch below)
  - Using the LXC above as the master, clone as many computes as you like!

(Note that while cloning the LXC, the nova.conf is copied with the former's IP 
address, that's why we need the booting script.)
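
A rough sketch of what such a booting script could look like (paths and option
names here are illustrative assumptions about a typical fake-compute
container, not the exact script):

    #!/bin/bash
    # Fix up the nova.conf inherited from the master clone, then start
    # the fake nova-compute service.
    MY_IP=$(ip -4 addr show eth0 | awk '/inet /{print $2}' | cut -d/ -f1)

    # Point my_ip and host at this container instead of the master's values.
    sed -i "s/^my_ip *=.*/my_ip = ${MY_IP}/" /etc/nova/nova.conf
    sed -i "s/^host *=.*/host = $(hostname)/" /etc/nova/nova.conf

    # compute_driver is assumed to already be set to the fake driver
    # (e.g. compute_driver = nova.virt.fake.FakeDriver) in nova.conf.
    nova-compute --config-file /etc/nova/nova.conf &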

Best regards,

Toan


 -Message d'origine-
 De : David Peraza [mailto:david_per...@persistentsys.com]
 Envoyé : lundi 24 février 2014 21:13
 À : OpenStack Development Mailing List (not for usage questions) Objet 
 : Re: [openstack-dev] [nova] Simulating many fake nova compute
nodes
 for scheduler testing

 Thanks John,

 I also think it is a good idea to test the algorithm at unit test 
 level,
but I will like
 to try out over amqp as well, that is, we process and threads talking 
 to
each
 other over rabbit or qpid. I'm trying to test out performance as well.

 Regards,
 David Peraza

 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Monday, February 24, 2014 11:51 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute
nodes
 for scheduler testing

 On 24 February 2014 16:24, David Peraza 
 david_per...@persistentsys.com
 wrote:
  Hello all,
 
  I have been trying some new ideas on scheduler and I think I'm 
  reaching a resource issue. I'm running 6 compute service right on my 
  4 CPU 4 Gig VM, and I started to get some memory allocation issues.
  Keystone and Nova are already complaining there is not enough memory.
  The obvious solution to add more candidates is to get another VM and
set
 another 6 Fake compute service.
  I could do that but I think I need to be able to scale more without 
  the need to use this much resources. I will like to simulate a cloud 
  of 100 maybe
  1000 compute nodes that do nothing (Fake driver) this should not 
  take this much memory. Anyone knows of a more efficient way to  
  simulate many computes? I was thinking changing the Fake driver to 
  report many compute services in different threads instead of having 
  to spawn a process per compute service. Any other ideas?

 It depends what you want to test, but I was able to look at tuning the
filters and
 weights using the test at the end of this file:

https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_cachin
g
 _scheduler.py

 Cheers,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 DISCLAIMER
 ==
 This e-mail may contain privileged and confidential information which 
 is
the
 property of Persistent Systems Ltd. It is intended only for the use of
the
 individual or entity to which it is addressed. If you are not the
intended recipient,
 you are not authorized to read, retain, copy, print, distribute or use
this
 

Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-03 Thread Sean Dague
So that definitely got lost in translation somewhere, and is about to
have us spam icehouse users with messages that make them think their
openstack cluster is going to burn to the ground. Are there proposed
reviews to set those defaults up in the projects already?

Remember - WARN is a level seen by administrators, and telling everyone
they have silent data corruption is not a good default.

-Sean

On 03/03/2014 02:05 PM, Roman Podoliaka wrote:
 Hi all,
 
 This is just one more example of MySQL not having production-ready
 defaults. The original idea was to force setting the SQL mode to
 TRADITIONAL in code, in the projects using oslo.db code, once they are ready
 (unit and functional tests pass). So the warning was actually for
 developers rather than for users.
 
 Syncing the latest oslo.db code will let users set any SQL mode
 they like (the default is TRADITIONAL now, so the warning is gone).
 
 Thanks,
 Roman
 
 On Mar 2, 2014 8:36 PM, John Griffith john.griff...@solidfire.com
 mailto:john.griff...@solidfire.com wrote:




 On Sun, Mar 2, 2014 at 7:42 PM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:

 Coming in at slightly less than 1 million log lines in the last 7 days:

 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOX0=

 This application has not enabled MySQL traditional mode, which means
 silent data corruption may occur

 This is being generated by  *.openstack.common.db.sqlalchemy.session in
 at least nova, glance, neutron, heat, ironic, and savana


 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOSwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6Im1vZHVsZSJ9


 At any rate, it would be good if someone that understood the details
 here could weigh in about whether this is really a true WARNING that
 needs to be fixed or if it's not, and just needs to be silenced.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net mailto:s...@dague.net / sean.da...@samsung.com
 mailto:sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 I came across this earlier this week when I was looking at this in
 Cinder, haven't completely gone into detail here, but maybe Florian or
 Doug have some insight?

 https://bugs.launchpad.net/oslo/+bug/1271706

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-03 Thread Jay Pipes
On Mon, 2014-03-03 at 11:09 -0800, Vishvananda Ishaya wrote:
 On Mar 3, 2014, at 6:48 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote:
  Having done some work with MySQL (specifically around similar data
  sets) and discussing the changes with some former coworkers (MySQL
  experts) I am inclined to believe the move from varchar  to binary
  absolutely would increase performance like this.
  
  
  However, I would like to get some real benchmarks around it and if it
  really makes a difference we should get a smart UUID type into the
  common SQLlibs (can pgsql see a similar benefit? Db2?) I think this
  conversation. Should be split off from the keystone one at hand - I
  don't want valuable information / discussions to get lost.
  
  No disagreement on either point. However, this should be done after the
  standardization to a UUID user_id in Keystone, as a separate performance
  improvement patch. Agree?
  
  Best,
  -jay
 
 -1
 
 The expectation in other projects has been that project_ids and user_ids are 
 opaque strings. I need to see more clear benefit to enforcing stricter typing 
 on these, because I think it might break a lot of things.

What does Nova lose here? The proposal is to have Keystone's user_id
values be UUIDs all the time. There would be a migration or helper
script against Nova's database that would change all non-UUID user_id
values to the Keystone UUID values.

If there's stuff in Nova that would break (which is doubtful,
considering like you say, these are supposed to be opaque values and
as such should not have any restrictions or validation on their value),
then that is code in Nova that should be fixed.

Honestly, we shouldn't accept poor or loose code just because stuff
might break.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-03-03 Thread David Peraza
Thanks John,

What I'm trying to do is to run an asynchronous task that pre-organizes the 
target hosts for an image. Then the scheduler only needs to read the top of the list 
or priority queue. We have a paper proposed for the summit that will explain 
the approach, hopefully it gets accepted so we can have a conversation on this 
at the summit. I suspect the DB overhead will go away if we try our approach. 
It's still theory though, which is why I want to get a significant test environment 
to appreciate the performance better.

Regards,
David Peraza

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Tuesday, February 25, 2014 5:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for 
scheduler testing

On 24 February 2014 20:13, David Peraza david_per...@persistentsys.com wrote:
 Thanks John,

 I also think it is a good idea to test the algorithm at unit test level, but 
 I will like to try out over amqp as well, that is, we process and threads 
 talking to each other over rabbit or qpid. I'm trying to test out performance 
 as well.


Nothing beats testing the thing for real, of course.

As a heads up, the overheads of DB calls turned out to dwarf any algorithmic 
improvements I managed. There will clearly be some RPC overhead, but it didn't 
stand out as much as the DB issue.

The move to conductor work should certainly stop the scheduler making those 
pesky DB calls to update the nova instance. And then, improvements like 
no-db-scheduler and improvements to scheduling algorithms should shine through 
much more.

Thanks,
John


 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Monday, February 24, 2014 11:51 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute 
 nodes for scheduler testing

 On 24 February 2014 16:24, David Peraza david_per...@persistentsys.com 
 wrote:
 Hello all,

 I have been trying some new ideas on scheduler and I think I'm 
 reaching a resource issue. I'm running 6 compute service right on my 
 4 CPU 4 Gig VM, and I started to get some memory allocation issues.
 Keystone and Nova are already complaining there is not enough memory.
 The obvious solution to add more candidates is to get another VM and set 
 another 6 Fake compute service.
 I could do that but I think I need to be able to scale more without 
 the need to use this much resources. I will like to simulate a cloud 
 of 100 maybe
 1000 compute nodes that do nothing (Fake driver) this should not take 
 this much memory. Anyone knows of a more efficient way to  simulate 
 many computes? I was thinking changing the Fake driver to report many 
 compute services in different threads instead of having to spawn a 
 process per compute service. Any other ideas?

 It depends what you want to test, but I was able to look at tuning the 
 filters and weights using the test at the end of this file:
 https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_ca
 ching_scheduler.py

 Cheers,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 DISCLAIMER
 ==
 This e-mail may contain privileged and confidential information which is the 
 property of Persistent Systems Ltd. It is intended only for the use of the 
 individual or entity to which it is addressed. If you are not the intended 
 recipient, you are not authorized to read, retain, copy, print, distribute or 
 use this message. If you have received this communication in error, please 
 notify the sender and delete all copies of this message. Persistent Systems 
 Ltd. does not accept any liability for virus infected mails.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Persistent Systems Ltd. It is intended only for the use of the 
individual or entity to which it is addressed. If you are not the intended 
recipient, you are not authorized to read, retain, copy, print, distribute or 
use this message. If you have received this communication in error, please 
notify the sender and delete all copies of this message. Persistent Systems 
Ltd. does not accept any liability for virus infected mails.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Joe Gordon
On Mon, Mar 3, 2014 at 10:49 AM, Russell Bryant rbry...@redhat.com wrote:
 On 03/03/2014 01:27 PM, Joe Gordon wrote:
 On Mon, Mar 3, 2014 at 9:32 AM, Russell Bryant rbry...@redhat.com wrote:
 1) What about tasks?

 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.

 For example:

Accept: application/json;type=task

 The current https://wiki.openstack.org/wiki/APIChangeGuidelines says
 its OK add a new response header, so do we even need this?

 This is for the case that we want to actually change the response body.

  - Add the option to advertise an extension as deprecated, which can be
 used for all those extensions created only to advertise the availability
 of new input parameters

 Can you elaborate on this last point.

 We have previously required API extensions for adding some things like
 input parameters or attributes on a resource.  The addition of
 versioning for extensions allow us to do this without adding extensions.
  The point is just that it would be nice if we can mark extensions in
 this category as deprecated and possibly removed since we can express
 these things in terms of versions instead.

That's what I thought just making sure.



 3) Core versioning

 Another pain point in API maintenance has been having to create API
 extensions for every small addition to the core API.  We propose that a
 version number be exposed for the core API that exposes the revision of
 the core API in use.  With that in place, backwards compatible changes
 such as adding a new property to a resource would be allowed when
 accompanied by a version number increase.

 With versioning of the core and API extensions, we will be able to cut
 down significantly on the number of changes that require an API
 extension without sacrificing the ability of a client to discover
 whether the addition is present or not.

 ++, looks like we may need to update
 https://wiki.openstack.org/wiki/APIChangeGuidelines and make this
 clear to downstream users.

 Right, just shooting for some agreement first.


 4) API Proxying

 We don't see proxying APIs as a problem.  It is the cost we pay for
 choosing to split apart projects after they are released.  We don't
 think it's fair to break users just because we have chosen to split
 apart the backend implementation.

 Further, the APIs that are proxied are frozen while those in the other
 projects are evolving.  We believe that as more features are available
 only via the native APIs in Cinder, Glance, and Neutron, users will
 naturally migrate over to the native APIs.

 Over time, we can ensure clients are able to query the API without the
 need to proxy by adding new formats or extensions that don't return data
 that needed to be proxied.

 Can you expand on what this last paragraph means?

 While I agree in not breaking users. I assume this means we won't
 accept any new proxy APIs?

 If proxying is required to make an existing API continue to work, we
 should accept it.

Agreed, but we should not accept a new API call that is just a proxy
(unless it's EC2 ...)


 6) Response Code Consistency and Correctness

 The v2 API has many places where the response code returned for a given
 operation is not strictly correct. For example a 200 is returned when a
 202 would be more appropriate. Correcting these issues should be
 considered for improving the future use of the API, however there does
 not seem to be any support for considering this a critical problem right
 now. There are two approaches that can be taken to improve this in v2:

 Just fix them. Right now, we return some codes that imply we have dealt
 with a request, when all we have done is queue it for processing (and
 vice versa). In the future, we may change the backend in such a way that
 a return code needs to change to continue to be accurate anyway. If we
 just assume that return codes may change to properly reflect the action
 that was taken, then we can correct existing errors and move on.

 Changing return codes always scares me, we risk breaking code that
 says if '==200.' Although having versioned backwards compatible APIs
 makes this a little better.
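
(As an aside, the kind of brittle client check being described, versus one
that survives a 200 -> 202 change, might look like this -- the names are
placeholders, not code from any real client:)

    # Brittle client code that a status-code correction would break:
    #     if response.status_code == 200:
    #         ...
    # A more tolerant check that survives a 200 -> 202 change:
    def is_success(response):
        """Treat any 2xx status as success instead of insisting on 200."""
        return 200 <= response.status_code < 300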

 See below ...

 Accept them as wrong but not critically so. With this approach, we can
 strive for correctness in the future without changing behavior of our
 existing APIs. Nobody seems to complain about them right now, so
 changing them seems to be little gain. If the client begins exposing a
 version header (which we need for other things) then we could
 alternately start returning accurate codes for those clients.

 Wait what? client needs version 

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-03 Thread Jay Pipes
On Mon, 2014-03-03 at 11:18 -0800, Mark Washenberger wrote:
 
 
 
 On Sat, Mar 1, 2014 at 12:51 PM, Jay Pipes jaypi...@gmail.com wrote:
 On Fri, 2014-02-28 at 15:25 -0800, Mark Washenberger wrote:
  I believe we have some agreement here. Other openstack
 services should
  be able to use a strongly typed identifier for users. I just
 think if
  we want to go that route, we probably need to create a new
 field to
  act as the proper user uuid, rather than repurposing the
 existing
  field. It sounds like many existing LDAP deployments would
 break if we
  repurpose the existing field.
 
 
 Hi Mark,
 
 Please see my earlier response on this thread. I am proposing
 putting
 external identifiers into a mapping table that would correlate
 a
 Keystone UUID user ID with external identifiers (of variable
 length).
 
 
 The thing you seem to be missing is that the current user-id attribute
 is an external identifier depending on the identity backend you're
 using today. For example in the LDAP driver it is the CN by default
 (which is ridiculous for a large number of reasons, but let's leave
 those aside.) So if you want to create a new, strongly typed internal
 uuid identifier that makes the db performance scream, more power to
 you. But it's going to have to be a new field.

No, it won't. The proposal is to create a new table that maps a UUID
value to the external identifier value.

The migration would essentially do this pseudocode:

recs = get_identity_records()
for rec in recs:
    if not is_like_uuid(rec.user_id):
        external_id = rec.user_id
        new_uuid = uuid.uuid4()
        insert_into_external_ids(new_uuid, external_id)
        rec.user_id = new_uuid
        save_identity_record(rec)

Cascading updates on the user_id field to its child relations will take
care of the changes in user_id column in foriegn keys.

A migration script in, say, Nova, would do something like:

UPDATE instances SET owner_id = ext.user_id
FROM keystone.external_id_mapping ext
WHERE instances.owner_id = ext.external_id;

Best,
-jay 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Meeting Tuesday 4 March - 19:00 CST

2014-03-03 Thread Brian Curtin
Reminder that tomorrow we're back on the meeting schedule after having
last week off. Extra special note that the meeting is moved up a day
to Tuesday instead of being on a Wednesday last time.

https://wiki.openstack.org/wiki/Meetings#python-openstacksdk_Meeting

Date/Time: Tuesday 4 March - 19:00 UTC / 1pm CST

IRC channel: #openstack-meeting-3

Meeting Agenda:
https://wiki.openstack.org/wiki/Meetings/PythonOpenStackSDK

About the project:
https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK

If you have questions, all of us lurk in #openstack-sdks on freenode!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Jay Pipes
-1 from me. Sounds like a way to avoid badly needed change and
innovation in the API. When, for example, would we be able to propose a
patch that removed API extensions entirely?

The inconsistent naming, capitalization, numerous worthless or pointless
API extensions, ability to do similar or identical things in different
ways, and the incorrect result/response codes make the Nova API look
immature and clownish. I'd love to see a competing proposal to this that
looks towards the future and a day when we can be proud of the Compute
API as a real differentiator vs. the EC2 API. As it stands, the v2 REST
API just makes OpenStack Compute look subpar at best, IMO.

Feel free to accuse me of wanting my cake and eating it, too. I guess
I'm just both hungry and impatient.

Best,
-jay

On Mon, 2014-03-03 at 12:32 -0500, Russell Bryant wrote:
 There has been quite a bit of discussion about the future of the v3 API
 recently.  There has been growing support for the idea that we should
 change course and focus on evolving the existing v2 API instead of
 putting out a new major revision.  This message is a more complete
 presentation of that proposal that concludes that we can do what we
 really need to do with only the v2 API.
 
 Keeping only the v2 API requires some confidence that we can stick with
 it for years to come.  We don't want to be revisiting this any time
 soon.  This message addresses a bunch of different questions about how
 things would work if we only had v2.
 
 1) What about tasks?
 
 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.
 
 For example:
 
Accept: application/json;type=task
 
 2) Versioning extensions
 
 One of the points being addressed in the v3 API was the ability to
 version extensions.  In v2, we have historically required new API
 extensions, even for changes that are backwards compatible.  We propose
 the following:
 
  - Add a version number to v2 API extensions
  - Allow backwards compatible changes to these API extensions,
 accompanied by a version number increase
  - Add the option to advertise an extension as deprecated, which can be
 used for all those extensions created only to advertise the availability
 of new input parameters
 
 3) Core versioning
 
 Another pain point in API maintenance has been having to create API
 extensions for every small addition to the core API.  We propose that a
 version number be exposed for the core API that exposes the revision of
 the core API in use.  With that in place, backwards compatible changes
 such as adding a new property to a resource would be allowed when
 accompanied by a version number increase.
 
 With versioning of the core and API extensions, we will be able to cut
 down significantly on the number of changes that require an API
 extension without sacrificing the ability of a client to discover
 whether the addition is present or not.
 
 4) API Proxying
 
 We don't see proxying APIs as a problem.  It is the cost we pay for
 choosing to split apart projects after they are released.  We don't
 think it's fair to break users just because we have chosen to split
 apart the backend implementation.
 
 Further, the APIs that are proxied are frozen while those in the other
 projects are evolving.  We believe that as more features are available
 only via the native APIs in Cinder, Glance, and Neutron, users will
 naturally migrate over to the native APIs.
 
 Over time, we can ensure clients are able to query the API without the
 need to proxy by adding new formats or extensions that don't return data
 that needed to be proxied.
 
 5) Capitalization and Naming Consistency
 
 Some of the changes in the v3 API included changes to capitalization and
 naming for improved consistency.  If we stick with v2 only, we will not
 be able to make any of these changes.  However, we believe that not
 breaking any existing clients and not having to maintain a second API is
 worth not making these changes, or supporting some indirection to
 achieve this for newer clients if we decide it is important.
 
 6) Response Code Consistency and Correctness
 
 The v2 API has many places where the response code returned for a given
 operation is not strictly correct. For example a 200 is returned when a
 202 would be more appropriate. Correcting these issues should be
 considered for improving the future use of the API, however there does
 not seem to be any support for considering this a critical problem right
 now. There are two approaches that can be taken to improve this in v2:
 
 Just fix them. Right now, we return some codes that imply we have dealt

Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-03 Thread Jay Pipes
On Mon, 2014-03-03 at 14:25 -0500, Sean Dague wrote:
 So that definitely got lost in translation somewhere, and is about to
 have us spam icehouse users with messages that make them think their
 openstack cluster is going to burn to the ground. Are there proposed
 reviews to set those defaults up in projects already?
 
 Remember - WARN is a level seen by administrators, and telling everyone
 they have silent data corruption is not a good default.

++. All that needs to happen is checking to see if the MySQL sql_mode is
already set at a traditional or stricter level before issuing the
warning message.
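
A minimal sketch of that check (purely illustrative -- not the actual
oslo-incubator code; the function name and log text below are mine) could
look something like this:

    import logging

    LOG = logging.getLogger(__name__)

    def warn_if_lenient_sql_mode(engine):
        # Only warn when the session's sql_mode is not already at least as
        # strict as TRADITIONAL / STRICT_*_TABLES.
        row = engine.execute("SHOW VARIABLES LIKE 'sql_mode'").fetchone()
        mode = (row[1] if row else '') or ''
        if not any(flag in mode for flag in
                   ('TRADITIONAL', 'STRICT_TRANS_TABLES',
                    'STRICT_ALL_TABLES')):
            LOG.warning("MySQL sql_mode is %r; silent data truncation is "
                        "possible unless a strict mode is enabled", mode)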

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] boot server with attached network

2014-03-03 Thread Diane Fleming
Hi all,

I'm working on a bug to fix the documentation for POST /servers.

Apparently, you can attach one or more networks to a server when you initially 
boot it, but the networks element and associated attributes are not 
documented here: http://api.openstack.org/api-ref-compute.html

Anyway, I have a review out - https://review.openstack.org/#/c/77229/ - and I 
have several questions:


  1.  It's not clear to me whether fixed_ip is a valid parameter. If it is, 
what exactly does it do?
  2.  I understand that uuid is valid with a nova-network only, while port 
is valid for a neutron network/port. Is that correct?
  3.  uuid and port are mutually exclusive, but one or the other is required – 
is that right?
  4.  Any other details I should know about?

Can anyone look at this review and comment on these items?
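
In case it helps the discussion, here is a rough sketch of the request body I
believe the review is describing, written as a Python dict (the IDs are
placeholders, and my reading of the semantics may be off -- corrections
welcome):

    # Sketch of a POST /servers body using the networks element.
    # uuid or port identifies the network/port (mutually exclusive, one
    # required); fixed_ip optionally asks for a specific address.
    body = {
        "server": {
            "name": "test-server",
            "imageRef": "IMAGE_UUID",
            "flavorRef": "1",
            "networks": [
                {"uuid": "NETWORK_UUID", "fixed_ip": "10.0.0.5"},
                {"port": "NEUTRON_PORT_UUID"},
            ],
        }
    }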

thanks,

Diane
--
Diane Fleming
Software Developer II - US
diane.flem...@rackspace.commailto:diane.flem...@rackspace.com
Cell  512.323.6799
Office 512.874.1260
Skype drfleming0227
Google-plus diane.flem...@gmail.commailto:diane.flem...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-03 Thread David Ripton

On 03/02/2014 09:42 PM, Sean Dague wrote:

Coming in at slightly less than 1 million log lines in the last 7 days:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOX0=

This application has not enabled MySQL traditional mode, which means
silent data corruption may occur

This is being generated by *.openstack.common.db.sqlalchemy.session in
at least nova, glance, neutron, heat, ironic, and savanna

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOSwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6Im1vZHVsZSJ9


At any rate, it would be good if someone that understood the details
here could weigh in about whether this is really a true WARNING that
needs to be fixed or if it's not, and just needs to be silenced.


oslo-incubator commit a5841668 just got in, and made traditional the 
default.  Of course it'll take a while for that to be copied everywhere. 
 Once it is, this problem should mostly go away.
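
For what it's worth, a rough illustration of how a default like that is
typically applied (this is not the actual oslo-incubator change, just the
general SQLAlchemy pattern of setting sql_mode on each new connection):

    import sqlalchemy
    from sqlalchemy import event

    engine = sqlalchemy.create_engine("mysql://user:password@localhost/nova")

    @event.listens_for(engine, "connect")
    def _set_sql_mode(dbapi_connection, connection_record):
        # Apply the configured mode to every new MySQL connection.
        cursor = dbapi_connection.cursor()
        cursor.execute("SET SESSION sql_mode = 'TRADITIONAL'")
        cursor.close()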


But, yeah, we should probably remove that warning, since if we let users 
override the default, we shouldn't fill their log partition for doing so.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [Third Party CI] Reminder: Workshop/QA meeting on #openstack-meeting today at 13:00EST/18:00 UTC

2014-03-03 Thread Jay Pipes
On Mon, 2014-03-03 at 16:53 +, trinath.soman...@freescale.com wrote:
 Hi Jay-
 
 I have the following doubts with my CI setup.
 
 I hope presenting them before the meeting will help me get some more
 guidance.
 
 [1] sandbox-dvsm-tempest-full runs all the test cases; a few fail, causing
 the completed build to fail.
 How do I restrict the test cases?
 Please find errors.txt attached to this email. This time the VM
 (cirros) failed.
 
 [2] After the build fails, services like nova and swift are still running.
 Does Jenkins, via devstack, unstack the stack?
 
 [3] For some tempest test cases I get the message Neutron skipped, but in
 the devstack logs neutron is installed from git. Do I need to configure
 something more for neutron?
 
 [4] For services like Neutron, do we need to test cinder, nova, heat, swift
 etc. too?

All: For reference, we answered the above questions on an etherpad here:

https://etherpad.openstack.org/p/third-party-ci-workshop

We will continue to add information to the above etherpad in the coming
weeks.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-03-03 Thread Sergey Skripnick
David Peraza david_per...@persistentsys.com wrote in his message of Mon,
03 Mar 2014 21:27:01 +0200:




Using compute inside LXC, I created 100 computes per physical host. Here
is what I did; it's very simple:

 - Create an LXC container backed by a logical volume
 - Install a fake nova-compute inside the LXC
 - Add a booting script that modifies its nova.conf to use its own IP
address and starts nova-compute
 - Using the LXC above as the master, clone as many computes as you like!

(Note that while cloning the LXC, nova.conf is copied with the
master's IP address; that's why we need the booting script.)
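
A minimal sketch of what such a booting script could do (this is not the
Rally script referenced later in this thread; the paths, option name and
service command are assumptions for illustration only):

    import re
    import socket
    import subprocess

    # Point my_ip in nova.conf at this container's own address, then start
    # the fake nova-compute.
    my_ip = socket.gethostbyname(socket.gethostname())

    with open("/etc/nova/nova.conf") as f:
        conf = f.read()
    conf = re.sub(r"(?m)^my_ip\s*=.*$", "my_ip = %s" % my_ip, conf)
    with open("/etc/nova/nova.conf", "w") as f:
        f.write(conf)

    subprocess.call(["service", "nova-compute", "start"])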





Btrfs can also be used to reduce disk space usage. The scenario looks like
this:


* create a btrfs filesystem on /var/lib/lxc
* create the first container with btrfs backingstore (lxc-create -B btrfs ...)
* set up devstack with fakevirt
* stop the container, and make N snapshotted clones (lxc-clone --snapshot ...)
* start the containers and run the booting script (see the script used by Rally [0])

If you do not want to do all this manually, there is MultihostEngine/LxcEngine
coming soon in Rally [1].

[0]  
https://review.openstack.org/#/c/56222/25/rally/deploy/engines/lxc/start.sh

[1] https://wiki.openstack.org/wiki/Rally

--
Regards,
Sergey Skripnick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Russell Bryant
On 03/03/2014 02:59 PM, Jay Pipes wrote:
 -1 from me. Sounds like a way to avoid badly needed change and
 innovation in the API. When, for example, would we be able to propose a
 patch that removed API extensions entirely?

v3 didn't address this at all, anyway.  I'm not even sure there's
consensus on it.  In any case, I think it's tangential and only relevant
if we're starting with a new API.

 The inconsistent naming, capitalization, numerous worthless or pointless
 API extensions, ability to do similar or identical things in different
 ways, and the incorrect result/response codes make the Nova API look
 immature and clownish. I'd love to see a competing proposal to this that
 looks towards the future and a day when we can be proud of the Compute
 API as a real differentiator vs. the EC2 API. As it stands, the v2 REST
 API just makes OpenStack Compute look subpar at best, IMO.

Much of what you discuss is addressed in the document.

I think the differentiation that you want comes from a new ground-up
API, and not what's being discussed here (v3, or further evolution of v2).

 Feel free to accuse me of wanting my cake and eating it, too. I guess
 I'm just both hungry and impatient.

Not that, exactly ... just ignoring some of the practicalities at hand,
I think.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy-migrate release impending

2014-03-03 Thread David Ripton

On 02/28/2014 04:22 PM, Matt Riedemann wrote:



On 2/26/2014 11:34 AM, Sean Dague wrote:

On 02/26/2014 11:24 AM, David Ripton wrote:

I'd like to release a new version of sqlalchemy-migrate in the next
couple of days.  The only major new feature is DB2 support.  If anyone
thinks this is a bad time, please let me know.



So it would be nice if someone could actually work through the 0.9 sqla
support, because I think it's basically just a change in quoting
behavior that's left (mostly where quoting gets called) -
https://review.openstack.org/#/c/66156/


Thomas Goirand has a bunch of new reviews up in this area.  Hopefully we 
can release a sqlalchemy-migrate 0.9 with support for sqlalchemy-0.9 soon.



Looks like the 0.8.3 tag is up so it's just a matter of time before it
shows up on pypi?

https://review.openstack.org/gitweb?p=stackforge/sqlalchemy-migrate.git;a=commit;h=21fcdad0f485437d010e5743626c63ab3acdaec5


I pushed a non-signed 0.8.3 tag, and the release machinery only cares 
about signed tags.  Oops.


Anyway, I then pushed a signed 0.8.4 tag, and the new release failed 
tests because of an unconditional import of ibm_db_sa, a module which 
did not exist on the test machines.  (It was mentioned in 
test-requirements.txt, but devstack / devstack-gate don't actually use 
that file.)


So we did a 0.8.5 release without DB2 support, so that the 
latest/default version of sqlalchemy-migrate would work again.


There are patches in progress to fix the DB2 support and to add a 
tempest job for sqlalchemy-migrate so we're more likely to notice 
problems before doing a release.  When that work is all done, we can 
release 0.8.6.  (Or maybe skip 0.8.6 and just release 0.9, if the 
sqlalchemy 0.9 compatibility work is all done by then.)


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-03 Thread Joe Gordon
Overall I think Climate is trying to address some very real use cases,
but it's unclear to me where these solutions should live or how to
solve them. Furthermore, I understand what a reservation means for Nova,
but I am not sure what it means for Cinder, Swift, etc.

To give a few examples:
* I think nova should natively support booting an instance for a
limited amount of time. I would use this all the time to boot up
devstack instances (boot devstack instance for 5 hours)
* Reserved and Spot Instances. I like Amazon's concept of reserved and
spot instances; it would be cool if we could support something similar
* Boot an instance for 4 hours every morning. This sounds like
something that 
https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron
can handle.
* Give someone 100 CPU hours per time period of quota. Support quotas
by overall usage not current usage. This sounds like something that
each service should support natively.
* Reserved Volume: Not sure how that works.
* Virtual Private Cloud.  It would be great to see OpenStack support a
hardware isolated virtual private cloud, but not sure what the best
way to implement that is.
* Capacity Planning. Sure, but it would be nice to see a more fleshed
out story for it.


It would be nice to see more of these use cases discussed.


On Mon, Mar 3, 2014 at 11:16 AM, Joe Gordon joe.gord...@gmail.com wrote:
 On Mon, Mar 3, 2014 at 10:43 AM, Sean Dague s...@dague.net wrote:
 On 03/03/2014 01:35 PM, Joe Gordon wrote:
 On Mon, Mar 3, 2014 at 10:01 AM, Zane Bitter zbit...@redhat.com wrote:
 On 03/03/14 12:32, Joe Gordon wrote:

 - if you're reserving resources far before you'll need them, it'll be
 cheaper

 Why? How does this save a provider money?


 If an operator has zero information about the level of future demand, they
 will have to spend a *lot* of money on excess capacity or risk running out.
 If an operator has perfect information about future demand, then they need
 spend no money on excess capacity. Everywhere in between, the amount of
 extra money they need to spend is a non-increasing function of the amount 
 of
 information they have. QED.

 Sure. If an operator has perfect information about future demand they
 won't need any excess capacity. But assuming you know some future
 demand, how do you figure out how much of the future demand you know?
 I can see this as a potential money saver, but it's unclear by how
 much. In the Amazon model a reservation is at minimum a year, and
 I am not sure how useful short-term reservations would be in
 determining future demand.

 There are other useful things with reservations though. In a private
 context the classic one is running numbers for close of business. Or a
 software team that's working towards a release might want to preallocate
 resources for longer scale runs during a particular week.

 Why can't they pre-allocate now?


 Reservation can really be about global policy giving some tenants more
 priority in getting resources than others (because you pre-allocate them).

 I also know that with a lot of the HPC teams using OpenStack, this is a
 fundamental part of scheduling. Not just the when, but the how long.
 Having systems automatically get reaped after a certain amount of time
 is something they very much want.

 Agreed, I think this should be part of either Nova or Heat directly.


 So I think the general idea has merit. I just think we need to make
 sure it integrates well with the rest of OpenStack, which I believe
 means strong coupling to the scheduler.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor Framework

2014-03-03 Thread Jay Pipes
On Fri, 2014-02-28 at 10:19 +0400, Eugene Nikanorov wrote:
 
 1) I'm not entirely sure that a provider attribute is even
 necessary to
 expose in any API. What is important is for a scheduler to
 know which
 drivers are capable of servicing a set of attributes that are
 grouped
 into a flavor.
 Well, provider becomes a read-only attribute for admin only (just to
 see which driver actually handles the resources), not too much API
 visibility.

I'd very much prefer to keep the provider/driver name out of the public
API entirely. I don't see how it is needed.

 2) I would love to see the use of the term flavor banished
 from
 OpenStack APIs. Nova has moved from flavors to instance
 types, which
 clearly describes what the thing is, without the odd
 connotations that
 the word flavor has in different languages (not to mention
 the fact
 that flavor is spelled flavour in non-American English).
 
 How about using the term load balancer type, VPN type, and
 firewall
 type instead?
 Oh... I don't have a strong opinion on the name of the term.
 Flavor was used several times in our discussions and is short.
 *Instance* Type, however, also seems fine. Another option is probably
 a Service Offering.

OK.

 3) I don't believe the FlavorType (public or internal)
 attribute of the
 flavor is useful. We want to get away from having any
 vendor-specific
 attributes or objects in the APIs (yes, even if they are
 hidden from
 the normal user). See point #1 for more about this. A
 scheduler should
 be able to match a driver to a request simply by matching the
 set of
 required capabilities in the requested flavor (load balancer
 type) to
 the set of capabilities advertised by the driver.
 ServiceType you mean? If you're talking about ServiceType then it is
 mostly for the user to filter flavors (I'm using the short term for now)
 by service type. Say, when a user wants to create a new loadbalancer,
 horizon will show only flavors related to the lb.
 That could also be solved by having different names like you suggested
 above: LB type, VPN type, etc.
 On the other hand that would be similar objects with different names -
 does it make much sense?

No, I wasn't referring to ServiceType. I was referring to FlavorType
-- public or internal -- in your diagram. I don't believe this is
necessary, frankly.

 I'm not sure what you think 'vendor-specific' attributes are; I don't
 remember having any plan of exposing any kind of vendor-related
 parameters. The parameters that a flavor represents are capabilities of
 the service in terms that users care about: latency, throughput,
 topology, technology, etc.

Yes, agreed. ++

 4) A minor point... I think it would be fine to group the
 various
 types into a single database table behind the scenes (like
 you have in
 the Object model section). However, I think it is useful to
 have the
 public API expose a /$servie-types resource endpoint for each
 service
 itself, instead of a generic /types (or /flavors) endpoint.
 So, folks
 looking to set up a load balancer would call
 GET /balancer-types, or
 call neutron balancer-type-list, instead of calling
 GET /types?service=load-balancer or neutron flavor-list
 --service=load-balancer
 I'm fine with this suggestion.
  
 
 5) In the section on Scheduling, you write Scheduling is a
 process of
 choosing provider and a backend for the resource. As
 mentioned above, I
 think this could be changed to something like this:
 Scheduling is a
 process of matching the set of requested capabilities -- the
 flavor
 (type) -- to the set of capabilities advertised by a driver
 for the
 resource. That would put Neutron more in line with how Nova
 handles
 this kind of thing.
 I agree, I actually meant this, and the nova example is how I think it
 should work.
 But more important is the result of scheduling.
 We discussed that yesterday with Mark and I think we got to the point
 where we could not find agreement for now.
 In my opinion the result of scheduling is binding the resource to the
 driver (at least),
 so further calls to the resource go to the same driver because of that
 binding.
 That's pretty much the same as how agent scheduling works.
  
 By the way, I'm thinking about getting rid of the 'provider' term and using
 'driver' instead. Currently 'provider' is just a user-facing
 representation of the driver. Once we introduce flavors/service
 types/etc., we can use the term 'driver' for implementation purposes.

++

Best,
-jay



___
OpenStack-dev 

Re: [openstack-dev] [Neutron] Flavor Framework

2014-03-03 Thread Jay Pipes
On Fri, 2014-02-28 at 01:31 -0800, Gary Duan wrote:
 What are the parameters that will be part of the flavor definition? As I
 am thinking of it now, the parameters could be performance- and
 capacity-related, for example throughput, max. session number and so on; or
 capability-related, for example HA or L7 switching.

I would think the latter -- a capabilities-based descriptor of the
service offering.

 Compared to the number of CPUs and memory size in a Nova flavor, these
 parameters don't seem to have exact definitions across different
 implementations. Or do you think it is not something we need to worry about?
 Is it totally the operator's decision how to rate different drivers?

I would think it would be fairly easy to create some basic load balancer
types, and then let deployers define other ones, similar to how Nova's
default flavors work.
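
To make the capability-matching idea concrete, here is a tiny sketch of the
kind of scheduling step being discussed (every name below is made up; the
point is only that a driver is eligible when it advertises a superset of the
capabilities the requested type asks for):

    # Illustrative only -- none of these names exist in Neutron today.
    requested = {"loadbalancing", "l7-switching", "ha"}

    drivers = {
        "driver_a": {"loadbalancing", "ha"},
        "driver_b": {"loadbalancing", "l7-switching", "ha", "ssl-offload"},
    }

    def eligible_drivers(requested, drivers):
        # A driver qualifies if it advertises every requested capability.
        return [name for name, caps in drivers.items() if requested <= caps]

    print(eligible_drivers(requested, drivers))  # ['driver_b']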

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] integration with logstash

2014-03-03 Thread Kurt Griffiths
Here’s an interesting hack. People are getting creative in the way they use 
Marconi (this patch uses Rackspace’s deployment of Marconi).

https://github.com/paulczar/logstash-contrib/commit/8bfe93caf1c66d94690e9d9c2ecf9ee6b458b1d9

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Vishvananda Ishaya
This seems like a reasonable and well thought out approach, but it feels
like we are removing our ability to innovate. I know we are worried about
maintaining multiple APIs, but I’m still leaning towards putting the v3 
API out and just keeping v2 around for a long time. Yes, it’s a maintenance
burden, but if we aren’t adding a lot of features to v2, I don’t know if
it is really THAT bad.

I’m worried that this is just delaying solving the inconsistency issues to
some future date.

Vish

On Mar 3, 2014, at 9:32 AM, Russell Bryant rbry...@redhat.com wrote:

 There has been quite a bit of discussion about the future of the v3 API
 recently.  There has been growing support for the idea that we should
 change course and focus on evolving the existing v2 API instead of
 putting out a new major revision.  This message is a more complete
 presentation of that proposal that concludes that we can do what we
 really need to do with only the v2 API.
 
 Keeping only the v2 API requires some confidence that we can stick with
 it for years to come.  We don't want to be revisiting this any time
 soon.  This message addresses a bunch of different questions about how
 things would work if we only had v2.
 
 1) What about tasks?
 
 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.
 
 For example:
 
   Accept: application/json;type=task
 
 2) Versioning extensions
 
 One of the points being addressed in the v3 API was the ability to
 version extensions.  In v2, we have historically required new API
 extensions, even for changes that are backwards compatible.  We propose
 the following:
 
 - Add a version number to v2 API extensions
 - Allow backwards compatible changes to these API extensions,
 accompanied by a version number increase
 - Add the option to advertise an extension as deprecated, which can be
 used for all those extensions created only to advertise the availability
 of new input parameters
 
 3) Core versioning
 
 Another pain point in API maintenance has been having to create API
 extensions for every small addition to the core API.  We propose that a
 version number be exposed for the core API that exposes the revision of
 the core API in use.  With that in place, backwards compatible changes
 such as adding a new property to a resource would be allowed when
 accompanied by a version number increase.
 
 With versioning of the core and API extensions, we will be able to cut
 down significantly on the number of changes that require an API
 extension without sacrificing the ability of a client to discover
 whether the addition is present or not.
 
 4) API Proxying
 
 We don't see proxying APIs as a problem.  It is the cost we pay for
 choosing to split apart projects after they are released.  We don't
 think it's fair to break users just because we have chosen to split
 apart the backend implementation.
 
 Further, the APIs that are proxied are frozen while those in the other
 projects are evolving.  We believe that as more features are available
 only via the native APIs in Cinder, Glance, and Neutron, users will
 naturally migrate over to the native APIs.
 
 Over time, we can ensure clients are able to query the API without the
 need to proxy by adding new formats or extensions that don't return data
 that needed to be proxied.
 
 5) Capitalization and Naming Consistency
 
 Some of the changes in the v3 API included changes to capitalization and
 naming for improved consistency.  If we stick with v2 only, we will not
 be able to make any of these changes.  However, we believe that not
 breaking any existing clients and not having to maintain a second API is
 worth not making these changes, or supporting some indirection to
 achieve this for newer clients if we decide it is important.
 
 6) Response Code Consistency and Correctness
 
 The v2 API has many places where the response code returned for a given
 operation is not strictly correct. For example a 200 is returned when a
 202 would be more appropriate. Correcting these issues should be
 considered for improving the future use of the API, however there does
 not seem to be any support for considering this a critical problem right
 now. There are two approaches that can be taken to improve this in v2:
 
 Just fix them. Right now, we return some codes that imply we have dealt
 with a request, when all we have done is queue it for processing (and
 vice versa). In the future, we may change the backend in such a way that
 a return code needs to change to continue to be accurate anyway. If we
 just assume that return codes may change to properly reflect the 

Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Christopher Yeoh
On Mon, 03 Mar 2014 12:32:04 -0500
Russell Bryant rbry...@redhat.com wrote:
 
 The v3 API effort has produced a lot of excellent work.  However, the
 majority opinion seems to be that we should avoid the cost of
 maintaining two APIs if at all possible.  We should apply what has
 been learned to the existing API where we can and focus on making v2
 something that we can continue to maintain for years to come.

I'll respond to the technical points raised later, but I do
want to point out now that I think this is a false dichotomy: it is
incorrect to say that the only way to avoid the cost of maintaining two
APIs is to keep only the V2 API. Especially when the discussion
then proceeds around being able to make backwards incompatible changes
(deprecation) - which is doing a major version bump whether we call
it that or not, and is keeping multiple versions around.

 We recognize and accept that it is a failure of Nova project
 leadership that we did not come to this conclusion much sooner.  We
 hope to have learned from the experience to help avoiding a situation
 like this happening again in the future.

The V3 API development work was encouraged and endorsed at both the
Havana and Icehouse summits (and at Icehouse most of the V3 API patches that
form the core of the API had already merged), and everyone had ample
opportunity to comment or object then. 

But what I think disappoints me most is the way that this discussion
has been approached. That the people who have done the most
work on both the V3 and V2 API code in the last couple of cycles were
not consulted first to see what could be done to address the concerns
around dual maintenance. And instead an assertion was made that all of
the work done for V3 would have to be dropped because of dual
maintenance concerns without even trying to measure or explain what
that dual maintenance overhead is.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Michael Still
I think it's also pretty unfair on the people who put a lot of work
into the v3 API. We're seriously going to delete their code after they
put a year into it?

To me OpenStack isn't just the users, it's also the development
community. I think we do measurable harm to that development community
by doing this. We're teaching people that having a blessed plan that
was discussed at a summit is not enough to reassure their management
chain that they're not wasting time developing something. That worries
me a lot.

I've yet to see a third party library (fog, jclouds, etc) express
concern about a move to v3 if the deprecation cycle for v2 is nice and
long. So why are we so worried?

Michael

On Tue, Mar 4, 2014 at 8:25 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 This seems like a reasonable and well thought out approach, but it feels
 like we are removing our ability to innovate. I know we are worried about
 maintaining multiple APIs, but I'm still leaning towards putting the v3
 API out and just keeping v2 around for a long time. Yes, it's a maintenance
 burden, but if we aren't adding a lot of features to v2, I don't know if
 it is really THAT bad.

 I'm worried that this is just delaying solving the inconsistency issues to
 some future date.

 Vish

 On Mar 3, 2014, at 9:32 AM, Russell Bryant rbry...@redhat.com wrote:

 There has been quite a bit of discussion about the future of the v3 API
 recently.  There has been growing support for the idea that we should
 change course and focus on evolving the existing v2 API instead of
 putting out a new major revision.  This message is a more complete
 presentation of that proposal that concludes that we can do what we
 really need to do with only the v2 API.

 Keeping only the v2 API requires some confidence that we can stick with
 it for years to come.  We don't want to be revisiting this any time
 soon.  This message addresses a bunch of different questions about how
 things would work if we only had v2.

 1) What about tasks?

 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.

 For example:

   Accept: application/json;type=task

 2) Versioning extensions

 One of the points being addressed in the v3 API was the ability to
 version extensions.  In v2, we have historically required new API
 extensions, even for changes that are backwards compatible.  We propose
 the following:

 - Add a version number to v2 API extensions
 - Allow backwards compatible changes to these API extensions,
 accompanied by a version number increase
 - Add the option to advertise an extension as deprecated, which can be
 used for all those extensions created only to advertise the availability
 of new input parameters

 3) Core versioning

 Another pain point in API maintenance has been having to create API
 extensions for every small addition to the core API.  We propose that a
 version number be exposed for the core API that exposes the revision of
 the core API in use.  With that in place, backwards compatible changes
 such as adding a new property to a resource would be allowed when
 accompanied by a version number increase.

 With versioning of the core and API extensions, we will be able to cut
 down significantly on the number of changes that require an API
 extension without sacrificing the ability of a client to discover
 whether the addition is present or not.

 4) API Proxying

 We don't see proxying APIs as a problem.  It is the cost we pay for
 choosing to split apart projects after they are released.  We don't
 think it's fair to break users just because we have chosen to split
 apart the backend implementation.

 Further, the APIs that are proxied are frozen while those in the other
 projects are evolving.  We believe that as more features are available
 only via the native APIs in Cinder, Glance, and Neutron, users will
 naturally migrate over to the native APIs.

 Over time, we can ensure clients are able to query the API without the
 need to proxy by adding new formats or extensions that don't return data
 that needed to be proxied.

 5) Capitalization and Naming Consistency

 Some of the changes in the v3 API included changes to capitalization and
 naming for improved consistency.  If we stick with v2 only, we will not
 be able to make any of these changes.  However, we believe that not
 breaking any existing clients and not having to maintain a second API is
 worth not making these changes, or supporting some indirection to
 achieve this for newer clients if we decide it is important.

 6) Response Code Consistency and Correctness

 The v2 API has many places 

Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-03 Thread Jay Pipes
On Mon, 2014-03-03 at 15:24 -0500, Russell Bryant wrote:
 On 03/03/2014 02:59 PM, Jay Pipes wrote:
  -1 from me. Sounds like a way to avoid badly needed change and
  innovation in the API. When, for example, would we be able to propose a
  patch that removed API extensions entirely?
 
 v3 didn't address this at all, anyway.  I'm not even sure there's
 consensus on it.  In any case, I think it's tangential and only relevant
 if we're starting with a new API.

I guess I am saying that if the ostensibly limited changes and
standardization of Chris and others' V3 API work has now been decided to be
too risky to use or of not enough value to users, then the chance of a
brand new API version seeing the light of day is pretty low. And that is
disappointing to me. Feel free to tell me I'm nuts, though. I'd happily
exchange my skepticism for some healthy optimism.

  The inconsistent naming, capitalization, numerous worthless or pointless
  API extensions, ability to do similar or identical things in different
  ways, and the incorrect result/response codes make the Nova API look
  immature and clownish. I'd love to see a competing proposal to this that
  looks towards the future and a day when we can be proud of the Compute
  API as a real differentiator vs. the EC2 API. As it stands, the v2 REST
  API just makes OpenStack Compute look subpar at best, IMO.
 
 Much of what you discuss is addressed in the document.

Well, it's discussed in the document... inasmuch as to say we won't
really change any of these things...

 I think the differentiation that you want comes from a new ground-up
 API, and not what's being discussed here (v3, or further evolution of v2).

Yes, but my concern about this proposal is that it reinforces the status
quo and in so doing, slows down the pace of innovation at the API level.
And, a slow pace of API innovation in turn inhibits innovation at layers
of the code beneath the API.

  Feel free to accuse me of wanting my cake and eating it, too. I guess
  I'm just both hungry and impatient.
 
 Not that, exactly ... just ignoring some of the practicalities at hand,
 I think.

Fair enough. :)

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

