Re: [openstack-dev] [Glance] Anyone using owner_is_tenant = False with image members?

2014-07-24 Thread Scott Devoid
So it turns out that fixing this issue is not simple. There are stubbed-out
openstack.common.policy checks in the glance-api code, which are pretty much
useless because they do not use the image as a target. [1] Then there's a
chain of API / client calls where it's unclear who is responsible for
validating ownership: python-glanceclient -> glance-api ->
glance-registry-client -> glance-registry-api -> glance.db.sqlalchemy.api.
Add to that the fact that request IDs are not consistently captured along
the logging path [2] and it's a holy mess.
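
To make the "target" distinction concrete, here is a minimal, illustrative
sketch (not Glance's actual code; the enforcer argument and the
image_to_policy_target helper are hypothetical) of the difference between a
policy check that ignores the image and one that passes the image in as a
target:

    # Illustrative sketch only -- not the actual glance-api code.
    def image_to_policy_target(image):
        # Build a target dict so rules like "owner:%(owner)s" have
        # something to match against.
        return {'owner': image.owner}

    def delete_image(context, image, enforcer):
        # What the stubbed-out checks effectively do today: an empty
        # target, so owner-based rules can never be evaluated.
        enforcer.enforce(context, 'delete_image', {})

        # What a target-aware check looks like: the rule can compare the
        # caller's credentials against the image's owner.
        enforcer.enforce(context, 'delete_image',
                         image_to_policy_target(image))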

I am wondering...
1. Has anyone actually set "owner_is_tenant" to false? Has this ever been
tested?
2. From glance developers, what kind of permissions / policy scenarios do
you actually expect to work?

Right now we have one user who consistently gets an empty 404 back from
"nova image-list" because glance-api barfs on a single image and gives up
on the entire API request...and there are no non-INFO/DEBUG messages in
glance logs for this. >:-/
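
For what it's worth, the behavior I would expect is sketched below: skip
(and log) the one broken image rather than failing the entire list call.
This is purely illustrative and not the actual glance code; the
is_image_visible callable is just a stand-in for whatever check blows up.

    # Illustrative sketch: one bad image should not 404 the whole request.
    import logging

    LOG = logging.getLogger(__name__)

    def list_visible_images(context, images, is_image_visible):
        visible = []
        for image in images:
            try:
                if is_image_visible(context, image):
                    visible.append(image)
            except Exception:
                LOG.exception("Skipping image %s: visibility check failed",
                              image.id)
        return visible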

~ Scott

[1] https://bugs.launchpad.net/glance/+bug/1346648
[2] https://bugs.launchpad.net/glance/+bug/1336958

On Fri, Jul 11, 2014 at 12:26 PM, Scott Devoid  wrote:

> Hi Alexander,
>
> I read through the artifact spec. Based on my reading it does not fix this
> issue at all. [1] Furthermore, I do not understand why the glance
> developers are focused on adding features like artifacts or signed images
> when there are significant usability problems with glance as it currently
> stands. This is echoing Sean Dague's comment that bugs are filed against
> glance but never addressed.
>
> [1] See the **Sharing Artifact** section, which indicates that sharing may
> only be done between projects and that the tenant owns the image.
>
>
> On Thu, Jul 3, 2014 at 4:55 AM, Alexander Tivelkov  > wrote:
>
>> Thanks Scott, that is a nice topic
>>
>> In theory, I would prefer to have both owner_tenant and owner_user
>> persisted with an image, and to have a policy rule which allows operators
>> to specify whether the users of a tenant have access to images owned by or
>> shared with other users of their tenant. But this would require too many
>> changes to the current object model, and I am not sure we need to introduce
>> such changes now.
>>
>> However, this is the approach I would like to use in Artifacts. At least
>> the current version of the spec assumes that both of these fields are
>> maintained ([0])
>>
>> [0]
>> https://review.openstack.org/#/c/100968/4/specs/juno/artifact-repository.rst
>>
>> --
>> Regards,
>> Alexander Tivelkov
>>
>>
>> On Thu, Jul 3, 2014 at 3:44 AM, Scott Devoid  wrote:
>>
>>>  Hi folks,
>>>
>>> Background:
>>>
>>> Among all services, I think glance is unique in only having a single
>>> 'owner' field for each image. Most other services include a 'user_id' and a
>>> 'tenant_id' for things that are scoped this way. Glance provides a way to
>>> change this behavior by setting "owner_is_tenant" to false, which implies
>>> that owner is user_id. This works great: new images are owned by the user
>>> that created them.
>>>
>>> Why do we want this?
>>>
>>> We would like to make sure that the only person who can delete an image
>>> (besides admins) is the person who uploaded said image. This achieves that
>>> goal nicely. Images are private to the user, who may share them with other
>>> users using the image-member API.
>>>
>>> However, one problem is that we'd like to allow users to share with
>>> entire projects / tenants. Additionally, we have a number of images (~400)
>>> migrated over from a different OpenStack deployment, that are owned by the
>>> tenant and we would like to make sure that users in that tenant can see
>>> those images.
>>>
>>> Solution?
>>>
>>> I've implemented a small patch to the "is_image_visible" API call [1]
>>> which checks the image.owner and image.members against context.owner and
>>> context.tenant. This appears to work well, at least in my testing.
>>>
>>> I am wondering whether this is something folks would like to see
>>> integrated. Also, for the glance developers: is there a cleaner way to
>>> solve this problem? [2]
>>>
>>> ~ Scott
>>>
>>> [1]
>>> https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L209
>>> [2] https://review.openstack.org/104377
>>>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Anyone using owner_is_tenant = False with image members?

2014-07-11 Thread Scott Devoid
Hi Alexander,

I read through the artifact spec. Based on my reading it does not fix this
issue at all. [1] Furthermore, I do not understand why the glance
developers are focused on adding features like artifacts or signed images
when there are significant usability problems with glance as it currently
stands. This is echoing Sean Dague's comment that bugs are filed against
glance but never addressed.

[1] See the **Sharing Artifact** section, which indicates that sharing may
only be done between projects and that the tenant owns the image.


On Thu, Jul 3, 2014 at 4:55 AM, Alexander Tivelkov 
wrote:

> Thanks Scott, that is a nice topic
>
> In theory, I would prefer to have both owner_tenant and owner_user
> persisted with an image, and to have a policy rule which allows operators
> to specify whether the users of a tenant have access to images owned by or
> shared with other users of their tenant. But this would require too many
> changes to the current object model, and I am not sure we need to introduce
> such changes now.
>
> However, this is the approach I would like to use in Artifacts. At least
> the current version of the spec assumes that both of these fields are
> maintained ([0])
>
> [0]
> https://review.openstack.org/#/c/100968/4/specs/juno/artifact-repository.rst
>
> --
> Regards,
> Alexander Tivelkov
>
>
> On Thu, Jul 3, 2014 at 3:44 AM, Scott Devoid  wrote:
>
>>  Hi folks,
>>
>> Background:
>>
>> Among all services, I think glance is unique in only having a single
>> 'owner' field for each image. Most other services include a 'user_id' and a
>> 'tenant_id' for things that are scoped this way. Glance provides a way to
>> change this behavior by setting "owner_is_tenant" to false, which implies
>> that owner is user_id. This works great: new images are owned by the user
>> that created them.
>>
>> Why do we want this?
>>
>> We would like to make sure that the only person who can delete an image
>> (besides admins) is the person who uploaded said image. This achieves that
>> goal nicely. Images are private to the user, who may share them with other
>> users using the image-member API.
>>
>> However, one problem is that we'd like to allow users to share with
>> entire projects / tenants. Additionally, we have a number of images (~400)
>> migrated over from a different OpenStack deployment, that are owned by the
>> tenant and we would like to make sure that users in that tenant can see
>> those images.
>>
>> Solution?
>>
>> I've implemented a small patch to the "is_image_visible" API call [1]
>> which checks the image.owner and image.members against context.owner and
>> context.tenant. This appears to work well, at least in my testing.
>>
>> I am wondering whether this is something folks would like to see
>> integrated. Also, for the glance developers: is there a cleaner way to
>> solve this problem? [2]
>>
>> ~ Scott
>>
>> [1]
>> https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L209
>> [2] https://review.openstack.org/104377
>>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Changing a tenant's flavor-access after instances exist?

2014-07-09 Thread Scott Devoid
Hi Folks,

I have a situation where I previously granted a tenant access to flavor X.
At which point users launched instances based on that flavor. Now I am
removing the tenant from the flavor-access-list since I do not want users
to create any more instances using that flavor.

However, in Horizon, this results in many "Error: Unable to retrieve
instance size information." errors, and users can no longer view those
instances' details: "Error: Unable to retrieve instance details for
instance ".

On the command line I can list those instances, but I cannot perform "nova
show" on them.

Is there a workaround here that would allow the instance to be visible but
prevent users from launching new instances?

If not, this is a pretty large usability hole. (Hi Devs!) :-)

Thanks,
~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Anyone using owner_is_tenant = False with image members?

2014-07-02 Thread Scott Devoid
Hi folks,

Background:

Among all services, I think glance is unique in only having a single
'owner' field for each image. Most other services include a 'user_id' and a
'tenant_id' for things that are scoped this way. Glance provides a way to
change this behavior by setting "owner_is_tenant" to false, which implies
that owner is user_id. This works great: new images are owned by the user
that created them.
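
For reference, the setting lives in glance-api.conf; a minimal snippet is
below (from memory, so treat it as an assumption rather than a verified
default):

    # glance-api.conf
    [DEFAULT]
    # When False, image ownership is recorded per user rather than per tenant.
    owner_is_tenant = False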

Why do we want this?

We would like to make sure that the only person who can delete an image
(besides admins) is the person who uploaded said image. This achieves that
goal nicely. Images are private to the user, who may share them with other
users using the image-member API.

However, one problem is that we'd like to allow users to share with entire
projects / tenants. Additionally, we have a number of images (~400)
migrated over from a different OpenStack deployment, that are owned by the
tenant and we would like to make sure that users in that tenant can see
those images.

Solution?

I've implemented a small patch to the "is_image_visible" API call [1] which
checks the image.owner and image.members against context.owner and
context.tenant. This appears to work well, at least in my testing.
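
To give a sense of the change without pasting the whole diff, the logic is
roughly the sketch below. This is a simplified paraphrase, not the patch
itself; see [2] for the real code.

    # Simplified paraphrase of the visibility logic in the patch -- see [2]
    # for the actual change.
    def is_image_visible(context, image, member_ids):
        if context.is_admin or image.is_public:
            return True
        # With owner_is_tenant = False, image.owner is a user id; also accept
        # a match on the caller's tenant so tenant-owned (migrated) images
        # remain visible to users in that tenant.
        if image.owner in (context.owner, context.tenant):
            return True
        # member_ids: the image's membership list (users and/or tenants).
        return context.owner in member_ids or context.tenant in member_ids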

I am wondering whether this is something folks would like to see integrated.
Also, for the glance developers: is there a cleaner way to solve this
problem? [2]

~ Scott

[1]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L209
[2] https://review.openstack.org/104377
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] How can I enable operation for non-admin user

2014-06-25 Thread Scott Devoid
Hi Chen,


> I’m not an experienced developer, so could you explain more about
>  “Perhaps the live_migrate task is passing the incorrect context in for
> this database query?” ?
>
Sorry, I should have clarified that that question was for the developers
*out there* (cc'ing the dev list now). I'm not really a developer either, so
we will have to see what they say. ;-)


>
>
> Here is what I understand.
>
> The issue is basically caused by  @require_admin_context for
> db.service_get_by_compute_host().
>
Yes, the request is failing because @require_admin_context only checks for
the "admin" role in the context. It's somewhat of a holdover from when
there was just admin and everything else.


> Then, should this be a bug?
>

Possibly. I can see why db.service_get_by_compute_host() should be an
admin-only call, but I am assuming that there must be a way for nova to
switch the running context to itself once it has authorized the
live-migrate task.

But I suspect few people have tried to allow non-admins to live-migrate,
and this is just a bug that fell out of that.
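
For context, here is a rough paraphrase of the pattern involved; this is
not the exact nova code, and the exception raised is a stand-in for nova's
own AdminRequired-style error.

    # Rough paraphrase of the decorator pattern in nova.db.sqlalchemy.api.
    def require_admin_context(f):
        def wrapper(context, *args, **kwargs):
            # Only the coarse admin flag is consulted; fine-grained policy
            # rules (e.g. a "migrate" role) never enter the picture.
            if not context.is_admin:
                raise Exception("admin context required")  # stand-in
            return f(context, *args, **kwargs)
        return wrapper

    @require_admin_context
    def service_get_by_compute_host(context, host):
        raise NotImplementedError  # placeholder for the real DB query

The usual escape hatch elsewhere in nova is to run such internal lookups
under an elevated copy of the request context, e.g.
db.service_get_by_compute_host(context.elevated(), host), once the task
itself has been authorized by policy; whether that is appropriate here is
exactly the question for the devs.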

> Why does the “nova migrate” command not need to check the compute host?
>

Sorry, this is a bit pedantic, but I think "nova live-migrate" is what
you mean here. "nova migrate", I think, is still a completely separate
code-path. live-migrate needs to talk to both the source and destination
nova-compute services to coordinate and confirm the migration.


>
>
>
>
> Thanks.
>
> -chen
>
>
>
> *From:* Scott Devoid [mailto:dev...@anl.gov]
> *Sent:* Thursday, June 26, 2014 9:34 AM
> *To:* Li, Chen
> *Cc:* Sushma Korati; openst...@lists.openstack.org
> *Subject:* Re: [Openstack] How can I enable operation for non-admin user
>
>
>
> Hi Li,
>
>
>
> The problem here is that db.service_get_by_compute_host() requires admin
> context. [1] The live_migrate command needs to check that both hosts have a
> running nova-compute service before it begins migration. Perhaps the
> live_migrate task is passing the incorrect context in for this database
> query? [2] I would think that conductor should be running under it's own
> context and not the caller's context? (Devs?)
>
>
>
> And before someone comments that migration should always be *admin-only*,
> I'll point out that there are legitimate reasons an operator might want to
> give someone migrate permissions and not all admin permissions.
>
>
>
> ~ Scott
>
>
>
> [1]
> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L485
>
> [2]
> https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L87
>
>
>
> On Tue, Jun 24, 2014 at 9:11 PM, Li, Chen  wrote:
>
> Hi Sushma,
>
>
>
> Thanks for the reply.
>
>
>
> Well, edit /etc/nova/policy.json do works for command “nova migrate”.
>
>
>
> But when I run command “nova live-migration”, I still get errors, in
>  /var/log/nova/conductor.log:
>
>
>
>
>
> 2014-06-25 02:07:23.897 115385 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on 192.168.40.122:5672
> 2014-06-25 02:08:59.221 115395 ERROR nova.conductor.manager [req-63f0a004-ef69-47f4-aefb-e0fa194d99b9 fa970646fa92442fa14b2b759cf381a6 2eb6bd3a69ad454a90489dd12b9cdf3b] Migration of instance 446d96d7-2073-46ac-b40c-0f167fbd04b2 to host None unexpectedly failed.
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager Traceback (most recent call last):
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 757, in _live_migrate
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager     block_migration, disk_over_commit)
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File "/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py", line 191, in execute
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager     return task.execute()
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File "/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py", line 56, in execute
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager     self._check_host_is_up(self.source)
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File "/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py", line 87, in _check_host_is_up
> 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager     service = db.service_get_by_compute_host(self.context, host)
> 2014-06-25 02:08:59.221 115395 T

[openstack-dev] Fwd: [Openstack] Glance - and the use of the "project_id:%(project_id)" rule

2014-06-25 Thread Scott Devoid
?

-- Forwarded message --
From: Michael Hearn 
Date: Fri, May 2, 2014 at 9:21 AM
Subject: [Openstack] Glance - and the use of the "project_id:%(project_id)"
rule
To: "openst...@lists.openstack.org" 


Having played with the policies and rules within glance's policy.json file
I have not had any success using the rule, "project_id:%(project_id)" to
restrict api usage.
Without changing user/role/tenant, I have had success using
"project_id:%(project_id)" with cinder.
I cannot find anything to suggest glance's policy engine cannot parse the
rule, but I would like confirmation.
Can anyone verify this?

This is using icehouse, glance 0.12.0
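
For comparison, the usual shape of an owner-scoped rule in a policy.json of
that era looks roughly like the snippet below; this is from memory, so treat
the exact rule names as an assumption, and note that the stock rules spell
the substitution as "%(project_id)s" (with a trailing "s").

    {
        "context_is_admin": "role:admin",
        "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
        "volume:delete": "rule:admin_or_owner"
    }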

~Mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Driver] Delete snapshot

2014-06-19 Thread Scott Devoid
I agree with Amit on this. There needs to be a way for the driver to
indicate that an operation is not currently possible and include some
descriptive message to indicate why. Right now the volume manager assumes
certain behavioral constraints (e.g. that snapshots are completely
decoupled from clones) when this behavior is actually determined by the
underlying driver.
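
As a concrete illustration of the kind of driver-side signal I mean, here is
a sketch; the driver class and its helper methods are hypothetical, and the
use of SnapshotIsBusy is my assumption about the cinder exception that fits
this case.

    # Illustrative sketch: tell the manager *why* a delete cannot proceed,
    # rather than silently logging or leaving the snapshot in error_deleting.
    from cinder import exception

    class MyBackendDriver(object):  # hypothetical driver
        def delete_snapshot(self, snapshot):
            if self._has_dependent_clones(snapshot):  # hypothetical helper
                raise exception.SnapshotIsBusy(
                    snapshot_name=snapshot['name'])
            self._backend_delete(snapshot['name'])    # hypothetical helper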

~ Scott


On Wed, Jun 18, 2014 at 6:29 PM, Mike Perez  wrote:

> On 10:20 Wed 18 Jun , Amit Das wrote:
> > Implementation issues - If Cinder driver throws an Exception the snapshot
> > will have error_deleting status & will not be usable. If Cinder driver
> logs
> > the error silently then Openstack will probably mark the snapshot as
> > deleted.
> >
> > What is the appropriate procedure that needs to be followed for above
> > usecase.
>
> I'm not sure what "Openstack will probably mark the snapshot as deleted"
> means.
> If a snapshot gets marked with error_deleting, we don't know what state the
> snapshot is in because it could've been a delete that partially finished.
> You
> should leave the cinder volume manager to handle this. It's up to the
> driver to
> say the delete finished or failed, that's it.
>
> --
> Mike Perez
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Scott Devoid
>
> Not only live upgrades but also dynamic reconfiguration.
>
> Overcommitting affects the quality of service delivered to the cloud user.
>  In this situation in particular, as in many situations in general, I think
> we want to enable the service provider to offer multiple qualities of
> service.  That is, enable the cloud provider to offer a selectable level of
> overcommit.  A given instance would be placed in a pool that is dedicated
> to the relevant level of overcommit (or, possibly, a better pool if the
> selected one is currently full).  Ideally the pool sizes would be dynamic.
>  That's the dynamic reconfiguration I mentioned preparing for.
>

+1 This is exactly the situation I'm in as an operator. You can do
different levels of overcommit with host-aggregates and different flavors,
but this has several drawbacks:

   1. The nature of this is *slightly* exposed to the end-user, through
   extra-specs and the fact that two flavors cannot have the same name. One
   scenario we have is that we want to document our flavor names (what each
   name means) while providing different QoS standards for different
   projects. Since flavor names must be unique, we have to create different
   flavors for different levels of service. *Sometimes you do want to lie to
   your users!*
   2. If I have two pools of nova-compute HVs with different overcommit
   settings, I have to manage the pool sizes manually. Even if I use puppet to
   change the config and flip an instance into a different pool, that requires
   me to restart nova-compute. Not an ideal situation.
   3. If I want to do anything complicated, like 3 overcommit tiers with
   "good", "better", "best" performance and allow the scheduler to pick
   "better" for a "good" instance if the "good" pool is full, this is very
   hard and complicated to do with the current system.


I'm looking forward to seeing this in nova-specs!
~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-03 Thread Scott Devoid
>
> It may be useful to have an API query which tells you all the numbers you
> may need - real hardware values, values after using the configured
> overcommit ratios and currently used values.
>

+1 to an exposed admin-API for host resource state and calculations,
especially if this allowed you to dynamically change the ratios.


On Tue, Jun 3, 2014 at 10:20 AM, Jesse Pretorius 
wrote:

> On 3 June 2014 15:29, Jay Pipes  wrote:
>
>> Move CPU and RAM allocation ratio definition out of the Nova scheduler
>> and into the resource tracker. Remove the calculations for overcommit out
>> of the core_filter and ram_filter scheduler pieces.
>
>
> Makes sense to me.
>
> I especially like the idea of being able to have different allocation
> ratios for host aggregates.
>
> It may be useful to have an API query which tells you all the numbers you
> may need - real hardware values, values after using the configured
> overcommit ratios and currently used values.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-05-27 Thread Scott Devoid
Also, I would prefer that we not add "special" tenant names. Roles already
have problems with "admin", "Member" and "_member_" having special meaning
in some projects.

~ Scott


On Tue, May 27, 2014 at 1:20 PM, Vishvananda Ishaya
wrote:

> Are you aware that there is already a way to do this through the cli using
> quota-class-update?
>
> http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html
> (near the bottom)
>
> Are you suggesting that we also add the ability to use just regular
> quota-update? I’m not sure i see the need for both.
>
> Vish
>
> On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J <
> sergio.j.cazzol...@intel.com> wrote:
>
> > I would to hear your thoughts about an idea to add a way to manage the
> default quota values through the API.
> >
> > The idea is to use the current quota API, but sending 'default' instead
> of the tenant_id. This change would apply to quota-show and quota-update
> methods.
> >
> > This approach will help to simplify the implementation of another
> blueprint named per-flavor-quotas
> >
> > Feedback? Suggestions?
> >
> >
> > Sergio Juan Cazzolato
> > Intel Software Argentina
> >
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] No meeting tomorrow

2014-04-24 Thread Scott Devoid
I would run the meeting if I knew how to. :-)
And isn't next week the "recommended off week"?
~ Scott


On Thu, Apr 24, 2014 at 2:05 AM, Michael Still  wrote:

> Hi.
>
> Given no one has volunteered to run the meeting and I can't make it
> because of travel, let's skip this weeks meeting. We'll have one next
> week for sure!
>
> Michael
>
> --
> Rackspace Australia
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Looking for experienced guide to understand libvirt driver

2014-04-21 Thread Scott Devoid
Hi folks!

I am working to add Sheepdog as a disk backend for the libvirt driver. I
have a blueprint started and an early version of the code. However, I am
having trouble working my way through the code in the libvirt driver. The
storage code doesn't feel very modular to start with, and my changes only
seem to make it worse; e.g. adding more if blocks to 400-line functions.
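
To illustrate what I mean, the sketch below contrasts the shape my patches
keep taking with the shape I would prefer. It is a simplified illustration,
not the actual nova libvirt code; the class and function names are made up.

    # Simplified sketch -- not the actual nova libvirt driver code.
    class RbdDisk(object):         # hypothetical
        def libvirt_source(self, name):
            return {'type': 'network', 'protocol': 'rbd', 'name': name}

    class SheepdogDisk(object):    # hypothetical
        def libvirt_source(self, name):
            return {'type': 'network', 'protocol': 'sheepdog', 'name': name}

    DISK_BACKENDS = {'rbd': RbdDisk, 'sheepdog': SheepdogDisk}

    def disk_source(backend, name):
        # Adding a backend means one new class plus one registry entry,
        # instead of another branch inside a 400-line function.
        return DISK_BACKENDS[backend]().libvirt_source(name)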

Is there an experienced contributor that could spend 30 minutes walking
through parts of the code?

- Blueprint: https://review.openstack.org/#/c/82584/
- Nova code: https://review.openstack.org/#/c/74148/
- Devstack code: https://review.openstack.org/#/c/89434/

Thanks,
~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] add support for ceph

2014-04-18 Thread Scott Devoid
On Fri, Apr 18, 2014 at 11:41 AM, Dean Troyer  wrote:

> On Fri, Apr 18, 2014 at 10:51 AM, Scott Devoid  wrote:
>
>> The issue is that it is very easy to suggest new features and refactoring
>> when you are very familiar with the codebase. To a newcomer, though, you
>> are basically asking me to do something that is impossible, so the logical
>> interpretation is you're telling me to "go away".
>>
>
> This patch was dropped on us twice without any conversation or warning or
> discussion about how might be the best approach. Suggestions _were_ made
> and subsequently ignored.  If there was a lack of understanding, asking
> questions is a good way to get past that.  None were asked.  I do not view
> that as saying 'go away'.
>

I was speaking more from personal experience with other patches, where the
response is "oh this is great, but we really need XYZ to happen first."
Nobody is working on XYZ and I sure don't know how to make it happen. But
yea, my patch is -2 blocked on that. :-/

DevStack is an _opinionated_ OpenStack installer.  It can not and will not
> be all things to all people.  The first priority is to address services
> required by OpenStack projects (database, queue, web server, etc) and even
> then we only use what is provided in the underlying distributions.  (BTW,
> does zmq even still work?  I don't think it is tested.)
>
> Layered products that require 3rd party repos have a higher bar to get
> over to be included in the DevStack repo.  If an OpenStack project changes
> to require such a product, and that change gets through the TC (see MongoDB
> discussions for an example), then we'll have to re-evaluate that position.
>
> All this said, I really do want to see Ceph support for Cinder, Glance,
> Swift, etc in DevStack as I think it is cool and useful.  But it is not
> required to be in the DevStack repo to be useful.
>

I guess the question then is how we can gate with functional tests for
drivers without touching devstack?

~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] add support for ceph

2014-04-18 Thread Scott Devoid
On Fri, Apr 18, 2014 at 5:32 AM, Sean Dague  wrote:

> On 04/18/2014 12:03 AM, Scott Devoid wrote:
> > So I have had a chance to look over the whole review history again. I
> > agree with Sean Dague and Dean Troyer's concerns that the current patch
> > affects code outside of lib/storage and extras.d. We should make the
> > Devstack extension system more flexible to allow for more extensions.
> > Although I am not sure if this responsibility falls completely in the
> > lap of those wishing to integrate Ceph.
>
> Where should it fall? This has been pretty common with trying to bring
> in anything major, the general plumbing needs to come from that same
> effort. It's also a pretty sane litmus test on whether this is a drive
> by contribution that will get no support in the future (and thus just
> expect Dean and I to go fix things), or something which will have
> someone actively contributing to keep things working in the future.
>

The issue is that it is very easy to suggest new features and refactoring
when you are very familiar with the codebase. To a newcomer, though, you
are basically asking me to do something that is impossible, so the logical
interpretation is you're telling me to "go away".


>
> > What is more concerning though is the argument that /even when the Ceph
> > patch meets these standards/ /it will still have to be pulled in from
> > some external source. /Devstack is a central part of OpenStack's test
> > and development system. Core projects depend upon it to develop and test
> > drivers. As an operator, I use it to understand how changes might affect
> > my production system. Documentation. Bug Triage. Outreach. Each of these
> > tasks and efforts benefit from having a curated and maintained set
> > extras in the mainline codebase. Particularly extras that are already
> > represented by mainline drivers in other projects.
>
> My concern is that there is a lot of code in devstack. And every time I
> play with a different set of options we don't enable in the gate, things
> get brittle. For instance, Fedora support gets broken all the time,
> because it's not tested in the gate.
>
> Something as big as using ceph for storage back end across a range of
> services is big. And while there have been patches, I've yet to see
> anyone volunteer 3rd party testing here to help us keep it working. Or
> the long term commitment of being part of the devstack community
> reviewing patches and fixing other bugs, so there is some confidence
> that if people try to use this it works.
>

100% agree. I was under the impression that integration of the ceph patches
into devstack was a precursor to a 3rd party gate on ceph functionality. We
have some VM resources to contribute to 3rd party tests, but I would need
assistance in setting that up.


> Some of the late reverts in nova for icehouse hit this same kind of
> issue, where once certain rbd paths were lit in the code base within
> 24hrs we had user reports coming back of things exploding. That makes me
> feel like there are a lot of daemons lurking here, and if this is going
> to be a devstack mode, and that people are going to use a lot, then it
> needs to be something that's tested.
>
> If the user is pulling the devstack plugin from a 3rd party location,
> then it's clear where the support needs to come from. If it's coming
> from devstack, people are going to be private message pinging me on IRC
> when it doesn't work (which happens all the time).
>

I see your motivations here. There are systems to help us with this though:
redirect them to ask.openstack.org or bugs.launchpad.net and have them ping
you with the link. Delegate replies to others. I try to answer any
questions that pop up on #openstack but I need to look at the
ask.openstack.org queue more often. Perhaps we need to put more focus on
organizing community support and offloading that task from PTLs and core
devs.


> That being said, there are 2 devstack sessions available at design
> summit. So proposing something around addressing the ceph situation
> might be a good one. It's a big and interesting problem.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] add support for ceph

2014-04-17 Thread Scott Devoid
So I have had a chance to look over the whole review history again. I agree
with Sean Dague and Dean Troyer's concerns that the current patch affects
code outside of lib/storage and extras.d. We should make the Devstack
extension system more flexible to allow for more extensions, although I am
not sure that this responsibility falls completely in the lap of those
wishing to integrate Ceph.

What is more concerning, though, is the argument that *even when the Ceph
patch meets these standards, it will still have to be pulled in from some
external source.* Devstack is a central part of OpenStack's test and
development system. Core projects depend upon it to develop and test
drivers. As an operator, I use it to understand how changes might affect my
production system. Documentation. Bug triage. Outreach. Each of these tasks
and efforts benefits from having a curated and maintained set of extras in
the mainline codebase. Particularly extras that are already represented by
mainline drivers in other projects.

I am hopeful that def core() will help in creating a logical framework for
these considerations.

~ Scott





On Fri, Apr 4, 2014 at 4:25 AM, Chmouel Boudjnah wrote:

> Hello,
>
> We had quite a lengthy discussion on this review :
>
> https://review.openstack.org/#/c/65113/
>
> about a patch that seb has sent to add ceph support to devstack.
>
> The main issues seems to resolve around the fact that in devstack we
> support only packages that are in the distros and not having to add
> external apt sources for that.
>
> In devstack we are moving as well toward a nice and solid plugin system
> where people can hook externally and not needing to submit patch to add
> a feature that change the 'core' of devstack.
>
> I think the best way to go forward with this would be to :
>
> * Split the patch mentioned above to get the generic things bit in
> their own patch. i.e the storage file :
>
> https://review.openstack.org/#/c/65113/19/lib/storage
>
> and the create_disk (which would need to be used by lib/swift as well) :
>
> https://review.openstack.org/#/c/65113/19/functions
>
> * Get the existing drivers converted to that new storage format.
>
> * Adding new hooks to the plugin system to be able to do what we want
> for this:
>
> https://review.openstack.org/#/c/65113/19/lib/cinder
>
> and for injecting things in libvirt :
>
> https://review.openstack.org/#/c/65113/19/lib/nova
>
> Hopefully, folks using devstack and ceph would then just need to do:
>
> $ git clone devstack
> $ curl -O lib/storages/ceph http:///ceph_devstack
> (and maybe an another file for extras.d)
>
> am I missing a step ?
>
> Cheers,
> Chmouel.
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Quotas: per-flavor-quotas

2014-04-16 Thread Scott Devoid
 Sergio J Cazzolato wrote:

> I would to see the operators opinion in this blueprint, we need to
> understand if it is useful or it is confusing for you.
>
> https://review.openstack.org/#/c/84432/9


Sergio, I'm reposting this in a new thread since this isn't about quota
templates. Also I'm posting it to both operators and the development list.
I think we need feedback from both.

Hopefully we can get some discussion here on:
1. In what ways does the current quota system not work for you? (Operations)
2. Are there other ways to improve / change the quota system? And do these
address #1?

My hope is that we can make some small improvements that have the
possibility of landing in the Juno phase.

As clarification for anyone reading the above blueprint, this came out of
the operators summit and a thread on the operators mailing list [1]. This
blueprint defines quotas on the number of a particular flavor that a user
or project may have, e.g. "3 m1.medium and 1 m1.large instances please".
The operational need for such quotas is discussed in the mailing list.
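
To make that concrete, a minimal sketch of the kind of check such a quota
implies is below. It is purely illustrative; nothing like this exists in
nova today, and the function name is made up.

    # Purely illustrative sketch of a per-flavor instance quota check.
    def check_flavor_quota(flavor_quotas, current_usage, flavor_name):
        # flavor_quotas:  e.g. {'m1.medium': 3, 'm1.large': 1}
        # current_usage:  e.g. {'m1.medium': 2}  (instances already running)
        limit = flavor_quotas.get(flavor_name)
        if limit is None:
            return True  # no per-flavor limit configured
        return current_usage.get(flavor_name, 0) < limit

    # "3 m1.medium and 1 m1.large instances please":
    quotas = {'m1.medium': 3, 'm1.large': 1}
    assert check_flavor_quota(quotas, {'m1.medium': 2}, 'm1.medium')
    assert not check_flavor_quota(quotas, {'m1.large': 1}, 'm1.large')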

There is another interpretation of "per-flavor-quotas", which would track
the existing resources (CPUs, RAM, etc) but do it on a per-flavor basis. As
far as I know, there is no blueprint for this, but it was suggested in the
review and on IRC. For clarity, we could call this proposal "quota
resources per flavor".

There's also a blueprint for extensible resource tracking (which I think is
part of the quota system), which has some interesting ideas. It is more
focused on closing the gap between flavor extra-specs and resource usage /
quotas. [2]

Thank you,
~ Scott

[1]
http://lists.openstack.org/pipermail/openstack-operators/2014-April/004274.html
[2] Extensible Resource Tracking: https://review.openstack.org/#/c/86050/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Scott Devoid
Adding the Operators list to this since I think they will have some useful
comments.

My experience is that the current Nova quotas are not entirely useful. In
our environment we have a limited number of machines with 32 cores and 1 TB
of RAM (tens of them), and a large number with 8 cores and 32 GB of RAM
(hundreds). Aside from limits on the number of instances, the quota system
would see the use of 32 small machines as equivalent to the use of one big
machine (in RAM terms, 32 x 32 GB = 1 TB, the same quota footprint).
Economically and operationally these two cases are very different.

As a suggestion, how hard would it be to allow operators to create quotas
on the number of instances of a given flavor that a tenant/domain may use?


On Thu, Apr 3, 2014 at 10:02 AM, Cazzolato, Sergio J <
sergio.j.cazzol...@intel.com> wrote:

>  Hi All,
>
>
>
> I’d like to know your thoughts regarding Quota Management… I’ve been
> contributing to this topic for icehouse and noticed some issues and
> discussions around its implementation like code is duplicated, synch
> problems with database, not having an homogeneous logic, etc… so I was
> thinking that maybe a centralized implementation could be a solution for
> this… As far as I know there was a discussion during the last summit and
> the decision was to use Keystone for a Centralized Quota Management
> solution but I don’t have the details on that discussion… Also I was
> looking at Boson (https://wiki.openstack.org/wiki/Boson) that seems to be
> a nice solution for this and also addresses the scenario where Nova is
> deployed in a multi-cell manner and some other interesting things.
>
>
>
> Sergio
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][swift] Importing Launchpad Answers in Ask OpenStack

2014-01-28 Thread Scott Devoid
Is it possible to include a link to the original LP Answers page as a
comment on the question? Or are the LP Answers sections getting wiped
completely after the move?
Also perhaps all imported questions should be tagged "lp-answers" or
something? This would help manual curators to vote and further clean up
questions.

Otherwise it looks quite good. Thanks for the work!

~ Scott


On Tue, Jan 28, 2014 at 6:38 PM, Stefano Maffulli wrote:

> Hello folks
>
> we're almost ready to import all questions and asnwers from LP Answers
> into Ask OpenStack.  You can see the result of the import from Nova on
> the staging server http://ask-staging.openstack.org/
>
> There are some formatting issues for the imported questions and I'm
> trying to evaluate how bad these are.  The questions I see are mostly
> readable and definitely pop up in search results, with their answers so
> they are valuable already as is. Some parts, especially the logs, may
> not look as good though. Fixing the parsers and get a better rendering
> for all imported questions would take an extra 3-5 days of work (maybe
> more) and I'm not sure it's worth it.
>
> Please go ahead and browse the staging site and let me know what you think.
>
> Cheers,
> stef
>
> --
> Ask and answer questions on https://ask.openstack.org
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Scott Devoid
> A big part of my interest here is to make INFO a useful informational
> level for operators. That means getting a bunch of messages out of it
> that don't belong.


+1 to that! How should I open / tag bugs for this?

We should be logging user / tenant on every wsgi request, so that should
> be parsable out of INFO. If not, we should figure out what is falling
> down there.
>

At the moment we're not automatically parsing logs (just collecting via
syslog and logstash).

Follow on question: do you primarily use the EC2 or OSAPI? As there are
> some current short comings on the EC2 logging, and figuring out
> normalizing those would be good as well.


Most of our users work through Horizon or the nova CLI. Good to know about
the EC2 issues though.


On Tue, Jan 28, 2014 at 1:46 PM, Sean Dague  wrote:

> On 01/28/2014 12:41 PM, Scott Devoid wrote:
> > For the uses I've seen of it in the nova api code INFO would be
> > perfectly fine in place of AUDIT.
> >
> >
> > We've found the AUDIT logs in nova useful for tracking which user
> > initiated a particular request (e.g. delete this instance). AUDIT had a
> > much better signal to noise ratio than INFO or DEBUG. Although this
> > seems to have changed since Essex. For example nova-compute spits out
> > "AUDIT nova.compute.resource_tracker" messages every minute even if
> > there are no changes :-/
>
> A big part of my interest here is to make INFO a useful informational
> level for operators. That means getting a bunch of messages out of it
> that don't belong.
>
> We should be logging user / tenant on every wsgi request, so that should
> be parsable out of INFO. If not, we should figure out what is falling
> down there.
>
> Follow on question: do you primarily use the EC2 or OSAPI? As there are
> some current short comings on the EC2 logging, and figuring out
> normalizing those would be good as well.
>
> -Sean
>
> >
> > ~ Scott
> >
> >
> > On Tue, Jan 28, 2014 at 11:11 AM, Everett Toews
> > mailto:everett.to...@rackspace.com>>
> wrote:
> >
> > Hi Sean,
> >
> > Could 1.1.1 "Every Inbound WSGI request should be logged Exactly
> > Once" be used to track API call data in order to discover which API
> > calls are being made most frequently?
> >
> > It certainly seems like it could but I want to confirm. I ask
> > because this came up as B "Get aggregate API call data from
> > companies willing to share it." in the user survey discussion [1].
> >
> > Thanks,
> > Everett
> >
> > [1]
> >
> http://lists.openstack.org/pipermail/user-committee/2014-January/000214.html
> >
> >
> > On Jan 27, 2014, at 7:07 AM, Sean Dague wrote:
> >
> > > Back at the beginning of the cycle, I pushed for the idea of doing
> > some
> > > log harmonization, so that the OpenStack logs, across services,
> made
> > > sense. I've pushed a proposed changes to Nova and Keystone over
> > the past
> > > couple of days.
> > >
> > > This is going to be a long process, so right now I want to just
> > focus on
> > > making INFO level sane, because as someone that spends a lot of
> time
> > > staring at logs in test failures, I can tell you it currently
> isn't.
> > >
> > > https://wiki.openstack.org/wiki/LoggingStandards is a few things
> I've
> > > written down so far, comments welcomed.
> > >
> > > We kind of need to solve this set of recommendations once and for
> > all up
> > > front, because negotiating each change, with each project, isn't
> going
> > > to work (e.g - https://review.openstack.org/#/c/69218/)
> > >
> > > What I'd like to find out now:
> > >
> > > 1) who's interested in this topic?
> > > 2) who's interested in helping flesh out the guidelines for
> > various log
> > > levels?
> > > 3) who's interested in helping get these kinds of patches into
> various
> > > projects in OpenStack?
> > > 4) which projects are interested in participating (i.e. interested
> in
> > > prioritizing landing these kinds of UX improvements)
> > >
> > > This is going to be progressive and iterative. And will require
> > lots of
> > > folks involved.
> > >
> > >   -Sean
> > >
> &

Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Scott Devoid
>
> For the uses I've seen of it in the nova api code INFO would be perfectly
> fine in place of AUDIT.
>

We've found the AUDIT logs in nova useful for tracking which user initiated
a particular request (e.g. delete this instance). AUDIT had a much better
signal to noise ratio than INFO or DEBUG. Although this seems to have
changed since Essex. For example nova-compute spits out
"AUDIT nova.compute.resource_tracker" messages every minute even if there
are no changes :-/

~ Scott


On Tue, Jan 28, 2014 at 11:11 AM, Everett Toews  wrote:

> Hi Sean,
>
> Could 1.1.1 "Every Inbound WSGI request should be logged Exactly Once" be
> used to track API call data in order to discover which API calls are being
> made most frequently?
>
> It certainly seems like it could but I want to confirm. I ask because this
> came up as B "Get aggregate API call data from companies willing to share
> it." in the user survey discussion [1].
>
> Thanks,
> Everett
>
> [1]
> http://lists.openstack.org/pipermail/user-committee/2014-January/000214.html
>
>
> On Jan 27, 2014, at 7:07 AM, Sean Dague wrote:
>
> > Back at the beginning of the cycle, I pushed for the idea of doing some
> > log harmonization, so that the OpenStack logs, across services, made
> > sense. I've pushed a proposed changes to Nova and Keystone over the past
> > couple of days.
> >
> > This is going to be a long process, so right now I want to just focus on
> > making INFO level sane, because as someone that spends a lot of time
> > staring at logs in test failures, I can tell you it currently isn't.
> >
> > https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
> > written down so far, comments welcomed.
> >
> > We kind of need to solve this set of recommendations once and for all up
> > front, because negotiating each change, with each project, isn't going
> > to work (e.g - https://review.openstack.org/#/c/69218/)
> >
> > What I'd like to find out now:
> >
> > 1) who's interested in this topic?
> > 2) who's interested in helping flesh out the guidelines for various log
> > levels?
> > 3) who's interested in helping get these kinds of patches into various
> > projects in OpenStack?
> > 4) which projects are interested in participating (i.e. interested in
> > prioritizing landing these kinds of UX improvements)
> >
> > This is going to be progressive and iterative. And will require lots of
> > folks involved.
> >
> >   -Sean
> >
> > --
> > Sean Dague
> > Samsung Research America
> > s...@dague.net / sean.da...@samsung.com
> > http://dague.net
> >
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] why don't we deal with "claims" when live migrating an instance?

2014-01-16 Thread Scott Devoid
Related question: Why does resize get called (and the VM put in "RESIZE
VERIFY" state) when migrating from one machine to another, keeping the same
flavor?


On Thu, Jan 16, 2014 at 9:54 AM, Brian Elliott  wrote:

>
> On Jan 15, 2014, at 4:34 PM, Clint Byrum  wrote:
>
> > Hi Chris. Your thread may have gone unnoticed as it lacked the Nova tag.
> > I've added it to the subject of this reply... that might attract them.
>  :)
> >
> > Excerpts from Chris Friesen's message of 2014-01-15 12:32:36 -0800:
> >> When we create a new instance via _build_instance() or
> >> _build_and_run_instance(), in both cases we call instance_claim() to
> >> reserve and test for resources.
> >>
> >> During a cold migration I see us calling prep_resize() which calls
> >> resize_claim().
> >>
> >> How come we don't need to do something like this when we live migrate an
> >> instance?  Do we track the hypervisor overhead somewhere in the
> instance?
> >>
> >> Chris
> >>
>
> It is a good point and it should be done.  It is effectively a bug.
>
> Brian
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to newer full projects from what used to be part of nova

2013-11-02 Thread Scott Devoid
>
> Migrations from Essex to Grizzly/Havana

...

I would find it entirely suitable to upgrade from Essex to Folsom, then
> migrate from nova-volume to cinder and from nova-network to quantum, then
> only to upgrade to Grizzly.


We're in the same spot, upgrading an Essex deployment to Havana. We decided
to forgo an incremental upgrade of nova, glance and keystone since it was
not clear that we could perform that upgrade without a major service
disruption to active VMs. Additionally, we had no good way of fully testing
the upgrade beforehand.

However, we have a nova-volume service hosting ~500 TB of volume data (>200
volumes) using the ZFS driver. We'd like to be able to "carry" these volumes
over the upgrade to cinder. Our current strategy is to deploy cinder and
nova-volume on the same machine with separate ZFS pools. When a user is
ready to upgrade a volume, they hit a button and a script 1) creates a new
cinder volume, 2) renames the volume in ZFS to the new cinder name, and
3) deletes the old nova-volume record.

That's the plan, at least.

~ Scott


On Fri, Nov 1, 2013 at 3:16 PM, Dean Troyer  wrote:

> On Fri, Nov 1, 2013 at 1:38 PM, Devananda van der Veen <
> devananda@gmail.com> wrote:
>
>> Actually, anyone deploying nova with the "baremetal" driver will face a
>> similar split when Ironic is included in the release. I'm targeting
>> Icehouse, but of course, it's up to the TC when Ironic graduates.
>>
>> This should have a smaller impact than either the neutron or cinder
>> splits, both of which were in widespread use, but I expect we'll see more
>> usage of nova-baremetal crop up now that Havana is released.
>>
>
> I didn't recall in which release baremetal was first a supported option,
> is it only now in Havana?  Is it clear in the docs that this sort of
> situation is coming in the next release or two? (And no, I haven't gone to
> look for myself, maybe on the plane tomorrow...)
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Security groups with OVS instead of iptables?

2013-09-03 Thread Scott Devoid
+1 for an answer to this.

The reference documentation suggests running Neutron OVS with a total of 6
software switches between the VM and the public NAT addresses. [1]
What performance differences do folks see with this configuration vs. the
two-software-switch configuration for Linux bridge?

[1]
http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html#d6e1178


On Tue, Sep 3, 2013 at 8:34 AM, Lorin Hochstein wrote:

> (Also asked at
> https://ask.openstack.org/en/question/4718/security-groups-with-ovs-instead-of-iptables/
> )
>
> The only security group implementations in neutron seem to be
> iptables-based. Is it technically possible to implement security groups
> using openvswitch flow rules, instead of iptables rules?
>
> It seems like this would cut down on the complexity associated with the
> current OVSHybridIptablesFirewallDriver implementation, where we need to
> create an extra linux bridge and veth pair to work around the
> iptables-openvswitch issues. (This also breaks if the user happens to
> install the openvswitch brcompat module).
>
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-09 Thread Scott Devoid
Hi Nikolay and Patrick, thanks for your replies.

Virtual vs. Physical Resources
Ok, now I realize what you meant by "virtual resources," e.g. instances,
volumes, networks...resources provided by existing OpenStack schedulers. In
this case "physical resources" are actually more "removed" since there are
no interfaces to them in the user-level OpenStack APIs. If you make a
physical reservation on "this rack of machines right here", how do you
supply this reservation information to nova-scheduler? Probably via
scheduler hints + an availability zone or host-aggregates. At which point
you're really defining a instance reservation that includes explicit
scheduler hints. Am I missing something?

Eviction:
Nikolay, to your point that we might evict something that was already paid
for: in the design I have in mind, this would only happen if the policies
set up by the operator caused one reservation to be weighted higher than
another reservation. Maybe because one client paid more? The point is that
this would be configurable and the sensible default is to not evict
anything.


On Fri, Aug 9, 2013 at 8:05 AM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:

> Hello, Patrick!
>
> We have several reasons to think that this possibility is interesting for
> virtual resources. If we speak about physical resources, users may use them
> in many different ways, which is why it is impossible to include base
> actions on them in the reservation service. But speaking about virtual
> reservations, let's imagine a user wants to reserve a virtual machine. He
> knows everything about it - its parameters, its flavor and the time it is
> to be leased for. Really, in this case the user wants to have an already
> working (or at least starting to work) reserved virtual machine, and it
> would be great to include this opportunity in the reservation service. We
> are thinking about base actions for the virtual reservations that will be
> supported by Climate, like boot/delete for an instance, create/delete for a
> volume and create/delete for stacks. The same will apply to volumes, IPs,
> etc. As for more complicated behaviour, it may be implemented in Heat. This
> will make reservations simpler for end users.
>
> Don't you think so?
>
> P.S. Also we remember about the problem you mentioned some letters ago -
> how to guarantee that user will have already working and prepared host / VM
> / stack / etc. by the time lease actually starts, no just "lease begins and
> preparing process begins too". We are working on it now.
>
>
> On Thu, Aug 8, 2013 at 8:18 PM, Patrick Petit wrote:
>
>>  Hi Nikolay,
>>
>> Relying on Heat for orchestration is obviously the right thing to do. But
>> there is still something in your design approach that I am having
>> difficulties to comprehend since the beginning. Why do you keep thinking
>> that orchestration and reservation should be treated together? That's
>> adding unnecessary complexity IMHO. I just don't get it. Wouldn't it be
>> much simpler and sufficient to say that there are pools of reserved
>> resources you create through the reservation service. Those pools could be
>> of different types i.e. host, instance, volume, network,.., whatever if
>> that's really needed. Those pools are identified by a unique id that you
>> pass along when the resource is created. That's it. You know, the AWS
>> reservation service doesn't even care about referencing a reservation when
>> an instance is created. The association between the two just happens behind
>> the scene. That would work in all scenarios, manual, automatic, whatever...
>> So, why do you care so much about this in a first place?
>> Thanks,
>> Patrick
>>
>> On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:
>>
>>  Patrick, responding to your comments:
>>
>>  1) Dina mentioned "start automatically" and "start manually" only as
>> examples of how these policies may look. It doesn't seem to be a
>> correct approach to put orchestration functionality (which belongs to Heat)
>> in Climate. That's why for now we can implement the basics, like starting a
>> Heat stack, and for more complex actions we may later utilize something like
>> the Convection (Task-as-a-Service) project.
>>
>>
>>  2) If we agree that Heat is the main consumer of
>> Reservation-as-a-Service, we can agree that a lease may be created according
>> to one of the following scenarios (but not multiple):
>> - a Heat stack (with requirements on the stack's contents) as the resource
>> to be reserved
>> - some amount of physical hosts (random ones, or filtered based on certain
>> characteristics)
>> - some amount of individual VMs OR volumes OR IPs
>>
>>  3) Heat might be the main consumer of virtual reservations. If not,
>> Heat will require development efforts in order to support:
>> - reservation of a stack
>> - waking up a reserved stack
>> - performing all the usual orchestration work
>>
>>  We will support reservation of individual instance/volume/ IP etc, but
>> the use case with "giving user already workin

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-06 Thread Scott Devoid
Some thoughts:

0. Should Climate also address the need for an eviction service? That is, a
service that can weight incoming requests and existing resource allocations
using some set of policies and evict existing resource allocations to
make room for the higher-weighted request. Eviction is necessary if you
want to implement a Spot-like service. And if you want Climate reservations
that do not tie physical resources to the reservation, this is also
required to ensure that requests against the reservation succeed. (Note
that even if you do tie physical resources as in whole-host reservations,
an eviction service can help when physical resources fail.)
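
To make that concrete, the kind of policy-driven weighting I have in mind is
roughly the following (purely illustrative; the policy object, its weight()
method and the capacity attribute are made up, not anything Climate defines):

    # Illustrative sketch: rank existing allocations with an operator-defined
    # policy and pick eviction victims until the incoming request fits.
    def pick_victims(incoming, allocations, policy, needed_capacity):
        request_weight = policy.weight(incoming)
        # Only allocations the policy ranks below the request are candidates.
        candidates = sorted(
            (a for a in allocations if policy.weight(a) < request_weight),
            key=policy.weight,
        )
        victims, freed = [], 0
        for alloc in candidates:
            if freed >= needed_capacity:
                break
            victims.append(alloc)
            freed += alloc.capacity
        # Sensible default: if the request still cannot fit, evict nothing.
        return victims if freed >= needed_capacity else []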

1. +1 Let end users continue to use existing APIs for resources and extend
those interfaces with reservation attributes. Climate should only handle
reservation CRUD and tracking.

2a. As an operator, I want the power to define reservations in terms of
host capacity / flavor, min duration, max duration... and limit what kind
of reservation requests can come in. Basically define "reservation flavors"
and let users submit requests as instances of one "reservation flavor". If
you let the end user define all of these parameters I will be rejecting a
lot of reservation requests.
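
For example, something along these (entirely hypothetical) lines would cover
my use case; none of these keys exist in Climate or anywhere else today:

    # Hypothetical operator-defined "reservation flavors" and a trivial check
    # that an incoming reservation request fits one of them.
    RESERVATION_FLAVORS = {
        'small-short': {'instance_flavor': 'm1.small', 'max_instances': 10,
                        'min_hours': 1, 'max_hours': 24},
        'rack-weekly': {'physical_hosts': 16,
                        'min_hours': 24, 'max_hours': 7 * 24},
    }

    def request_allowed(flavor_name, duration_hours):
        spec = RESERVATION_FLAVORS.get(flavor_name)
        if spec is None:
            return False  # only predefined reservation flavors are accepted
        return spec['min_hours'] <= duration_hours <= spec['max_hours']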

2b. What's the point of an "immediate lease"? This should be equivalent to
making the request against Nova directly, right? Perhaps there's a rationale
for this w.r.t. billing? Otherwise I'm not sure what utility this kind of
reservation provides.

2c. Automatic vs. manual reservation approval:

> What a user wants to know is whether a reservation can be granted in an
> all-or-nothing manner at the time he is asking for the lease.


This is a very hard problem to solve: you have to model resource
availability (MTTF, MTBF), resource demand (how full are we going to be),
and bake in explicit policies (this tenant gets priority) to automatically
grant / deny such reservations. Having reservations go through a manual
request -> operator approval system is extremely simple and allows
operators to tackle the automated case as they need to.

All I need is a tool that lets a tenant spawn a single critical instance
even when another tenant is running an application that's constantly trying
to grab as many instances as it can get.

3. This will add a lot of complexity, particularly if you want to tackle #0.

5. (NEW) Note that Amazon's reserved instances feature doesn't tie
reservations to specific instances. Effectively you purchase discount
coupons to be applied at the end of the billing cycle. I am not sure how
Amazon handles tenants with multiple reservations at different utilization
levels (prioritize heavy -> light?).
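
My rough mental model of the billing math, with invented numbers:

    # Made-up illustration of coupon-style reservation billing: hours covered
    # by the reservation are billed at the discounted rate, the rest on demand.
    on_demand_rate = 0.10   # $/instance-hour (invented)
    reserved_rate = 0.04    # $/instance-hour for covered hours (invented)
    reserved_hours = 720    # one instance reserved for the whole month
    used_hours = 1000       # instance-hours actually consumed this cycle

    covered = min(used_hours, reserved_hours)
    bill = covered * reserved_rate + (used_hours - covered) * on_demand_rate
    print(bill)  # ~56.8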

~ Scott


On Tue, Aug 6, 2013 at 6:12 AM, Patrick Petit wrote:

>  Hi Dina and All,
> Please see comments inline. We can  drill down on the specifics off-line
> if that's more practical.
> Thanks in advance,
> Patrick
>
> On 8/5/13 3:19 PM, Dina Belova wrote:
>
>  Hello, everyone!
>
>
>  Patrick, Julien, thank you so much for your comments. As for the moments
> Patrick mentioned in his letter, I'll describe our vision for them below.
>
>
>  1) Patrick, thank you for the idea! I think it would be great to add not
> only a 'post-lease actions policy', but also a 'start-lease actions policy'.
> I mean having two types of what can be done with a (virtual) resource when
> the lease starts - 'start VM automatically' or 'start VM manually'. This
> means a user may not use the reserved resources at all, if he needs such
> behaviour.
>
> Something along those lines would work, but I think 'start VM manually'
> keeps over-specifying the behavior IMO, since you still assume that
> reserved resources are always started; the term 'manually' is misleading
> because, if not automatically started by the reservation service, they can
> still be automatically started elsewhere, like in Heat. In general I agree
> that users can take advantage of being able to specify pre- and post-lease
> actions / conditions, although it shouldn't be prescriptive of something
> binary like start automatically or manually. Another beneficial usage could
> be to send parametrized notifications. I would also make the pre and post
> actions optional, so that if the user chooses not to associate an action
> with the realization of a lease, he doesn't have to specify anything.
> Finally, I would also suggest that the specification of a pre or post
> action be accompanied by a time offset to take into account the lead time to
> provision certain types of resources, like physical hosts. That's a possible
> solution to point 4.
>
>
>  2) We really believe that creating the lease first, and then going to all
> the OpenStack projects with its id, is a better idea than 'filling' the lease
> with resources at the moment of its creation. I'll try to explain why.
> First of all, for virtual reservations we'll need to proxy the Nova,
> Cinder, etc. APIs through the Reservation API to reserve a VM, a volume or
> something else. Workfl

Re: [openstack-dev] [Cinder] Snapshot List support

2013-08-03 Thread Scott Devoid
The only snapshot functions in the volume driver are create_snapshot,
delete_snapshot and create_volume_from_snapshot. That row should probably
be deleted from the wiki since listing snapshots occurs entirely via the
db/api.
I've added the current set of supported features for the Solaris iSCSI
driver to the wiki.

Hijacking your thread for a moment: I would think that a
"revert_volume_to_snapshot" function would be useful. Is this implemented
via create_volume_from_snapshot, i.e. snapshot.volume == volume in the
arguments? Or does this functionality not exist?
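
For what it's worth, the closest thing I can see from the client side today
is restoring into a brand-new volume, roughly like this (untested sketch with
placeholder credentials and ids):

    # Build a *new* volume from the snapshot rather than reverting the
    # original volume in place.
    from cinderclient import client

    cinder = client.Client('1', 'user', 'password', 'tenant',
                           'http://keystone:5000/v2.0')  # placeholders

    snapshot = cinder.volume_snapshots.get('<snapshot-id>')
    restored = cinder.volumes.create(
        size=snapshot.size,
        snapshot_id=snapshot.id,
        display_name='restored-from-%s' % snapshot.id,
    )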