Re: [openstack-dev] [Glance] Concurrent update issue in Glance v2 API

2014-09-25 Thread Mark Washenberger
Thanks for diving on this grenade, Alex!

FWIW, I agree with all of your assessments. Just in case I am mistaken, I
summarize them as smaller updates > logical clocks > wall clocks (due to
imprecision and skew).

Given the small size of your patch [4], I'd say let's try to land that. It
is nicer to solve this problem with software rather than with db schema if
that is possible.

On Thu, Sep 25, 2014 at 9:21 AM, Alexander Tivelkov ativel...@mirantis.com
wrote:

 Hi folks!

 There is a serious issue [0] in the v2 API of Glance which may lead to
 race conditions during the concurrent updates of Images' metadata.
 It can be fixed in a number of ways, but we need to have some solution
 soon, as we are approaching the rc1 release, and the race in image updates
 looks like a serious problem which has to be fixed in J, imho.

 A quick description of the problem:
 When the image-update is called (PUT /v2/images/%image_id%/) we get the
 image from the repository, which fetches a record from the DB and forms its
 content into an Image Domain Object ([1]), which is then modified (has its
 attributes updated) and passed through all the layers of our domain model.
 This object is not managed by the SQLAlchemy session, so
 modifications of its attributes are not tracked anywhere.
 When all the processing is done and the updated object is passed back to
 the DB repository, it serializes all the attributes of the image into a
 dict ([2]) and then this dict is used to create an UPDATE query for the
 database.
 As this serialization includes all the attributes of the object (rather
 than only the modified ones), the update query updates all the columns of
 the appropriate database row, putting there the values which were
 originally fetched when the processing began. This may obviously overwrite
 the values which could be written there by some other concurrent request.

 There are two possible solutions to fix this problem.
 The first, known as optimistic concurrency control, checks whether the
 appropriate database row was modified between the data fetch and the data
 update. If such a modification is detected, the update operation reports a
 conflict and fails (and may be retried based on the updated data if
 needed). Modification detection is usually based on timestamps, i.e. the
 query updates the row in the database only if the timestamp there matches
 the timestamp of the initially fetched data.
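In code, the compare-and-swap described above amounts to something like the following (a sketch with illustrative table and session names, not an excerpt from any actual patch):

```python
from sqlalchemy import update

def update_image_if_unchanged(session, images, image_id, seen_updated_at, values):
    # only touch the row if nobody has modified it since we read it
    stmt = (update(images)
            .where(images.c.id == image_id)
            .where(images.c.updated_at == seen_updated_at)
            .values(**values))
    if session.execute(stmt).rowcount == 0:
        # the row changed (or vanished) underneath us: report a conflict
        raise RuntimeError('concurrent modification detected; re-fetch and retry')
```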
 I've introduced this approach in this patch [3], however it has a major
 flaw: I used the 'updated_at' attribute as a timestamp, and this attribute
 is mapped to a DateTime-typed column. In many RDBMSs (including MySQL
 before 5.6.4) this column stores values with per-second precision and does
 not store fractions of seconds. So, even if patch [3] is merged, races
 may still occur when many updates happen within the same second.
 A better approach would be to add a new column of int (or bigint) type
 to store millisecond-based (or even microsecond-based) timestamps instead
 of (or in addition to) the date-time-based updated_at. But modifying the
 data model would require adding a new migration etc., which is a major
 step, and I don't know if we want to take it so close to the release.

 The second solution is to keep track of the changed attributes and
 properties of the image and exclude the unchanged ones from the
 UPDATE query, so nothing gets overwritten. This dramatically reduces the
 threat of races, as updates of different properties do not interfere
 with each other. This is also a useful change regardless of the race
 itself: being able to differentiate between changed and unchanged
 attributes may have its own value for other purposes, and the DB performance
 will also be better when updating just the needed fields instead of all of
 them.
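A minimal sketch of what such dirty-attribute tracking could look like (illustrative only, not the actual code under review):

```python
class TrackedImage(object):
    def __init__(self, **attrs):
        self.__dict__['_values'] = dict(attrs)  # bypass __setattr__
        self.__dict__['_dirty'] = set()

    def __getattr__(self, name):
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._values[name] = value
        self._dirty.add(name)

    def changed_values(self):
        # only these attributes get serialized into the UPDATE query
        return dict((name, self._values[name]) for name in self._dirty)

image = TrackedImage(id='abc', name='old', status='active')
image.name = 'new'
assert image.changed_values() == {'name': 'new'}  # id and status are untouched
```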
 I've submitted a patch with this approach as well [4], but it still breaks
 some unit tests and I am working to fix them right now.

 So, we need to decide which of these approaches (or their combination) to
 take: we may stick with optimistic locking on the timestamp (and then decide
 whether we are ok with per-second timestamps or need to add a new column),
 choose to track the state of attributes, or combine the two. Could you
 folks please review patches [3] and [4] and come up with some ideas on them?

 Also, we should probably consider targeting [0] to the juno-rc1 milestone to
 make sure that this bug is fixed in J. Do you guys think it is possible at
 this stage?

 Thanks!


 [0] https://bugs.launchpad.net/glance/+bug/1371728
 [1]
 https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L74
 [2]
 https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L169
 [3] https://review.openstack.org/#/c/122814/
 [4] https://review.openstack.org/#/c/123722/

 --
 Regards,
 Alexander Tivelkov


Re: [openstack-dev] [Glance] PTL candidacy

2014-09-25 Thread Mark Washenberger
Thanks, Nikhil, for offering to take on this responsibility.

I know you've had a lot of experience with Glance in the past and I feel
comfortable knowing that you'll be around to keep the project moving
forwards!

Cheers!

On Thu, Sep 25, 2014 at 1:56 PM, Nikhil Komawar 
nikhil.koma...@rackspace.com wrote:

   Hi,

  I would like to take this opportunity and announce my candidacy for the
 role of Glance PTL.

 I have been part of this program since the Folsom release and have had the
 opportunity
 to work with an awesome team. There have been really challenging changes in
 the way Glance works and it has been a pleasure to contribute my reviews
 and code to many of those changes.

 With the change in mission statement [1], which now provides a direction
 for other services to upload and discover data assets using Glance, it
 would be my focus to enable new features like 'Artifacts' to merge smoothly
 into master. This is a paradigm change in the way Glance is consumed, and it
 would be my priority to see this through. In addition, as of Juno Glance
 supports a few new features, like async workers and metadefs, that could be
 improved in terms of bugs and maintainability. Seeing this through
 would be my next priority.

 In addition to these, there are a few other challenges which the Glance
 project faces - review/feedback time, triaging an ever-growing bug list, BP
 'validation and followup', etc. I have some ideas to develop more momentum
 in each of these processes. With the advent of the Artifacts feature, new
 developers would be contributing to Glance. I would like to encourage them
 and work with them to become core members sooner rather than later. Also,
 there are many merge proposals which become stale due to lack of reviews from
 core reviewers. My plan is to have bi-weekly sync-ups with the core and
 driver members to keep the review cycle active. As a good lesson learned
 from Juno, I would like to work closely with all the developers and
 involved core reviewers to know their sincere intent of accomplishing a
 feature within the scope of the release timeline. There are some really
 talented people involved in Glance and I would like to keep synthesizing
 the ecosystem to enable everyone involved to do their best.

 Lastly, my salutations to Mark. He has provided great direction and
 leadership to this project. I would like to keep his strategy of rotating
 the weekly meeting times to accommodate the convenience of people from
 various time zones.

  Thanks for reading and I hope you will support my candidacy!

  [1]
 https://github.com/openstack/governance/blob/master/reference/programs.yaml#L26

  -Nikhil Komawar



[openstack-dev] [Glance] PTL Non-Candidacy

2014-09-22 Thread Mark Washenberger
Greetings,

I will not be running for PTL for Glance for the Kilo release.

I want to thank all of the nice folks I've worked with--especially the
attendees and sponsors of the mid-cycle meetups, which I think were a major
success and one of the highlights of the project for me.

Cheers,
markwash


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Mark Washenberger
On Fri, Sep 19, 2014 at 8:59 AM, Donald Stufft don...@stufft.io wrote:


 On Sep 19, 2014, at 11:54 AM, Brant Knudson b...@acm.org wrote:


 I don't think anyone would be complaining if glanceclient didn't have the
 need to reach into and monkeypatch requests' connection pool manager [1].
 Is there a way to tell requests to build the https connections differently
 without monkeypatching urllib3.poolmanager?

 glanceclient's monkeypatching of the global variable here is dangerous
 since it will mess with the application and every other library if the
 application or another library uses glanceclient.

 [1]
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75


 Why does it need to use its own VerifiedHTTPSConnection class? Ironically,
 reimplementing that is probably more dangerous for security than requests
 bundling urllib3 ;)


We have supported the option to skip SSL compression since before adopting
requests (see 556082cd6632dbce52ccb67ace57410d61057d66); it is useful when
uploading already-compressed images.




 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Mark Washenberger
On Fri, Sep 19, 2014 at 11:26 AM, Chmouel Boudjnah chmo...@enovance.com
wrote:


 On Fri, Sep 19, 2014 at 6:58 PM, Donald Stufft don...@stufft.io wrote:

 So you can remove all that code and just let requests/urllib3 handle it
 on 3.2+, 2.7.9+ and for anything less than that either use conditional
 dependencies to have glance client depend on pyOpenSSL, ndg-httpsclient,
 and pyasn1 on Python 2.x, or let them be optional and if people want to
 disable TLS compression in those versions they can install those versions
 themselves.



 we have that issue as well for swiftclient, see the great write-up from
 Stuart here:

 https://answers.launchpad.net/swift/+question/196920

 Just removing this and hoping that users use bleeding-edge Python
 (which they don't) is not going to work for us. And the pyOpenSSL way is
 very unfriendly to the end user as well.

 Chmouel



I'm very sympathetic with Chmouel's assessment, but it seems like adding
pyasn1 and ndg-httpsclient dependencies is relatively straightforward and
does not incur a substantial additional overhead on the install process. We
already depend on pyOpenSSL, which seems to be the main contributor to
glanceclient's list of unsavory dependencies. We would need to add
ndg-httpsclient to openstack/requirements, as well, but I assume that is
doable.

I'm a bit disappointed that even with the fix, the requests/urllib3 stack
is still trying to completely make this transport-level decision for me.
It's fine to have defaults, but they should be able to be overridden.

For this release cycle, the best answer IMO is still just to switch to a
conditional import of requests.packages.urllib3 in our test module, which
was the original complaint.
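For reference, the conditional import in question would look something like this (a sketch):

```python
try:
    # prefer the copy vendored by requests when it is present
    from requests.packages import urllib3
except ImportError:
    # otherwise fall back to the standalone library
    import urllib3
```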


[openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-15 Thread Mark Washenberger
Hi there logging experts,

We've recently had a little disagreement in the glance team about the
appropriate log levels for HTTP requests that end up failing due to user
errors. An example would be a request to get an image that does not exist,
which results in a 404 Not Found response.

On one hand, this event is an error, so DEBUG or INFO seem a little too
low. On the other hand, this error doesn't generally require any kind of
operator investigation or indicate any actual failure of the service, so
perhaps it is excessive to log it at WARN or ERROR.

Please provide feedback to help us resolve this dispute if you feel you can!

Thanks,
markwash


Re: [openstack-dev] [Glance][FFE] Refactoring Glance Logging

2014-09-08 Thread Mark Washenberger
In principle I don't think these changes need an FFE, because they aren't
really features so much as fixes for better logging and
internationalization.


On Mon, Sep 8, 2014 at 4:50 AM, Kuvaja, Erno kuv...@hp.com wrote:

  All,



 There are two changes still not landed from
 https://blueprints.launchpad.net/glance/+spec/refactoring-glance-logging



 https://review.openstack.org/116626



 and



 https://review.openstack.org/#/c/117204/



 Merging of the changes was delayed past J3 to avoid any potential merge
 conflicts. A minor change was made when rebasing (a couple of LOG.exception
 calls changed to LOG.error based on the review feedback).



 I would like to request a Feature Freeze Exception, if needed, to finish the
 Juno logging refactoring and get these two changes merged in.



 BR,

 Erno (jokke_) Kuvaja



Re: [openstack-dev] [Glance][FFE] glance_store switch-over and random access to image data

2014-09-05 Thread Mark Washenberger
I'm +1 on the FFE for both of these branches.


On Fri, Sep 5, 2014 at 8:51 AM, Flavio Percoco fla...@redhat.com wrote:

 On 09/05/2014 05:20 PM, Thierry Carrez wrote:
  Flavio Percoco wrote:
  Greetings,
 
  I'd like to request a FFE for 2 features I've been working on during
  Juno which, unfortunately, have been delayed for different reasons
  during this time.
  [...]
 
  I would be inclined to give both a chance, but they really need to merge
  quickly, and the current Glance review velocity is not exactly feeding
  my hopes. +0 as far as I'm concerned, and definitely -1 if it takes more
  than one week.
 

 Agreed.

 Both patches are passing all tests. They just need to be reviewed.

 Flavio

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [Glance] Anyone using owner_is_tenant = False with image members?

2014-07-29 Thread Mark Washenberger
On Thu, Jul 24, 2014 at 9:48 AM, Scott Devoid dev...@anl.gov wrote:

 So it turns out that fixing this issue is not very simple. There are
 stubbed-out openstack.common.policy checks in the glance-api
 code, which are pretty much useless because they do not use the image as a
 target. [1] Then there's a chain of API / client calls where it's unclear
 who is responsible for validating ownership: python-glanceclient ->
 glance-api -> glance-registry-client -> glance-registry-api ->
 glance.db.sqlalchemy.api. Add to that the fact that request IDs are not
 consistently captured along the logging path [2] and it's a holy mess.

 I am wondering...
 1. Has anyone actually set owner_is_tenant to false? Has this ever been
 tested?


We haven't really been using or thinking about this as a feature, more a
potential backwards compatibility headache. I think it makes sense to just
go through the deprecation path so people aren't confused about whether
they should start using owner_is_tenant=False (they shouldn't).


 2. From glance developers, what kind of permissions / policy scenarios do
 you actually expect to work?


There is work going on now to support using images as targets. Of course,
the policy api wants enforce calls to only ever work with targets that are
dictionaries, which is a great way to race to the bottom in terms of
programming practices. But oh well.

Spec for supporting use of images as targets is here:
https://blueprints.launchpad.net/glance/+spec/restrict-downloading-images-protected-properties
https://github.com/openstack/glance-specs/blob/master/specs/juno/restrict-downloading-images.rst



 Right now we have one user who consistently gets an empty 404 back from
 nova image-list because glance-api barfs on a single image and gives up
 on the entire API request...and there are no non-INFO/DEBUG messages in
 glance logs for this. :-/

 ~ Scott

 [1] https://bugs.launchpad.net/glance/+bug/1346648
 [2] https://bugs.launchpad.net/glance/+bug/1336958


 On Fri, Jul 11, 2014 at 12:26 PM, Scott Devoid dev...@anl.gov wrote:

 Hi Alexander,

 I read through the artifact spec. Based on my reading it does not fix
 this issue at all. [1] Furthermore, I do not understand why the glance
 developers are focused on adding features like artifacts or signed images
 when there are significant usability problems with glance as it currently
 stands. This is echoing Sean Dague's comment that bugs are filed against
 glance but never addressed.

 [1] See the **Sharing Artifact** section, which indicates that sharing
 may only be done between projects and that the tenant owns the image.


 On Thu, Jul 3, 2014 at 4:55 AM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 Thanks Scott, that is a nice topic

 In theory, I would prefer to have both owner_tenant and owner_user
 persisted with an image, and to have a policy rule which allows specifying
 whether the users of a tenant have access to images owned by or shared with
 other users of their tenant. But this would require too many changes to the
 current object model, and I am not sure if we need to introduce such
 changes now.

 However, this is the approach I would like to use in Artifacts. At least
 the current version of the spec assumes that both these fields will be
 maintained ([0]).

 [0]
 https://review.openstack.org/#/c/100968/4/specs/juno/artifact-repository.rst

 --
 Regards,
 Alexander Tivelkov


 On Thu, Jul 3, 2014 at 3:44 AM, Scott Devoid dev...@anl.gov wrote:

  Hi folks,

 Background:

 Among all services, I think glance is unique in only having a single
 'owner' field for each image. Most other services include a 'user_id' and a
 'tenant_id' for things that are scoped this way. Glance provides a way to
 change this behavior by setting owner_is_tenant to false, which implies
 that owner is user_id. This works great: new images are owned by the user
 that created them.

 Why do we want this?

 We would like to make sure that the only person who can delete an image
 (besides admins) is the person who uploaded said image. This achieves that
 goal nicely. Images are private to the user, who may share them with other
 users using the image-member API.

 However, one problem is that we'd like to allow users to share with
 entire projects / tenants. Additionally, we have a number of images (~400)
 migrated over from a different OpenStack deployment, that are owned by the
 tenant and we would like to make sure that users in that tenant can see
 those images.

 Solution?

 I've implemented a small patch to the is_image_visible API call [1]
 which checks the image.owner and image.members against context.owner and
 context.tenant. This appears to work well, at least in my testing.
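Roughly, the check Scott describes could look like this (a sketch assuming owner_is_tenant=False so that image.owner holds a user id; not the actual patch):

```python
def is_image_visible(context, image, members):
    if context.is_admin or image.is_public:
        return True
    # owned by the requesting user, or (for migrated images) by their tenant
    if image.owner in (context.owner, context.tenant):
        return True
    # shared directly with the user, or with the user's whole tenant
    return any(m.member in (context.owner, context.tenant) for m in members)
```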

 I am wondering if this is something folks would like to see integrated?
 Also, for glance developers: is there a cleaner way to go about solving
 this problem? [2]

 ~ Scott

 [1]
 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L209
 

Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

2014-07-29 Thread Mark Washenberger
On Mon, Jul 28, 2014 at 8:12 AM, Tailor, Rajesh rajesh.tai...@nttdata.com
wrote:

 Hi All,

 I have submitted the patch "Made provision for glance service to use
 Launcher" to the community gerrit.
 Please refer to: https://review.openstack.org/#/c/110012/

 I have also set the workflow to 'work in progress'. I will start working
 on writing unit tests for the proposed
 changes after positive feedback on them.

 Could you please give your comments on this.

 Could you also please suggest whether to file a launchpad bug or a
 blueprint to propose these changes in the glance project?


Submitting to github.com/openstack/glance-specs would be best. Thanks.



 Thanks,
 Rajesh Tailor

 -Original Message-
 From: Tailor, Rajesh [mailto:rajesh.tai...@nttdata.com]
 Sent: Wednesday, July 23, 2014 12:13 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in
 glance

 Hi Jay,
 Thank you for your response.
 I will soon submit patch for the same.

 Thanks,
 Rajesh Tailor

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Tuesday, July 22, 2014 8:07 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in
 glance

 On 07/17/2014 03:07 AM, Tailor, Rajesh wrote:
  Hi all,
 
  Why is glance not using Launcher/ProcessLauncher (oslo-incubator) for
  its wsgi service like it is used in other openstack projects, i.e.
  nova, cinder, keystone, etc.?

 Glance uses the same WSGI service launch code as the other OpenStack
 project from which that code was copied: Swift.

  As of now, when a SIGHUP signal is sent to the glance-api parent process, it
  calls the callback handler and then throws OSError.
 
  The OSError is thrown because the os.wait system call was interrupted by
  the SIGHUP callback handler.
 
  As a result of this, the parent process closes the server socket.
 
  All the child processes also get terminated without completing
  existing API requests, because the server socket is already closed and
  the service doesn't restart.
 
  Ideally, when the SIGHUP signal is received by the glance-api process, it
  should process all the pending requests and then restart the
  glance-api service.
 
  If the (oslo-incubator) Launcher/ProcessLauncher is used in glance then it
  will handle service restart on the 'SIGHUP' signal properly.
 
  Can anyone please let me know what will be the positive/negative
  impact of using Launcher/ProcessLauncher (oslo-incubator) in glance?

 Sounds like you've identified at least one good reason to move to
 oslo-incubator's Launcher/ProcessLauncher. Feel free to propose patches
 which introduce that change to Glance. :)
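For readers following along, the oslo-incubator launch pattern being proposed looks roughly like this (a sketch; WSGIService is a stand-in for the real service object, and the module path assumes the usual incubator sync):

```python
from openstack.common import service  # oslo-incubator module, path assumed

def main():
    launcher = service.ProcessLauncher()
    # the launcher forks worker processes and handles SIGHUP itself,
    # re-spawning workers instead of dying on an interrupted os.wait()
    launcher.launch_service(WSGIService('glance-api'), workers=4)
    launcher.wait()
```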

  Thank You,
 
  Rajesh Tailor
  __
  Disclaimer:This email and any attachments are sent in strictest
  confidence for the sole use of the addressee and may contain legally
  privileged, confidential, and proprietary data. If you are not the
  intended recipient, please advise the sender by replying promptly to
  this email and then delete and destroy this email and any attachments
  without any further use, copying or forwarding

 Please advise your corporate IT department that the above disclaimer on
 your emails is annoying, is entirely disregarded by 99.999% of the real
 world, has no legal standing or enforcement, and may be a source of
 problems with people's mailing list posts being sent into spam boxes.

 All the best,
 -jay


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-28 Thread Mark Washenberger
I think there is some confusion about what the glance metadata api is going
to do.

We are *not* planning to store metadata about other openstack resources in
glance.

We *are* planning to store definitions of the relevant schemas of metadata
for other classes of openstack resources.

For example, if somebody adds a feature to Nova or a hypervisor driver to
deliver clowns and rainbows whenever you boot a flavor with
extra_specs:clowns_and_rainbows = yes please, the metadata catalog will
allow users to discover that property, read its description, learn the
schema of possible values for this key, and learn the related keys that
could be applied on images, volumes, volume image metadata, etc.
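To make the distinction concrete, a *definition* for the hypothetical property above might look something like this (a sketch only, not the actual metadefs schema):

```python
# a metadata definition describes the key, not a value attached to a resource
clowns_and_rainbows = {
    'name': 'clowns_and_rainbows',
    'title': 'Clowns and Rainbows',
    'description': 'Whether clowns and rainbows are delivered at boot.',
    'type': 'string',
    'enum': ['yes please', 'no thank you'],
    # the classes of resources this key may be applied to
    'resource_types': ['OS::Nova::Flavor', 'OS::Glance::Image',
                       'OS::Cinder::Volume'],
}
```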

If someone wants to make a general store that can be used to store actual
metadata, as opposed to just the definitions for metadata, they have my
support. But given the mission of the Glance Program at this point, such a
service probably does not belong in Glance.


On Thu, Jul 24, 2014 at 3:11 PM, Tim Simpson tim.simp...@rackspace.com
wrote:

  I agree as well.

  I think we should spend less time worrying about what other projects in
 OpenStack might do in the future and spend more time on adding the features
 we need today to Trove. I understand that it's better to work together but
 too often we stop progress on something in Trove to wait on a feature in
 another project that is either incomplete or merely being planned.

  While this stems from our strong desire to be part of the community,
 which is a good thing, it hasn't actually led many of us to do work for
 these other projects. At the same time, its negatively impacted Trove. I
 also think it leads us to over-design or incorrectly design features as we
 plan for functionality in other projects that may never materialize in the
 forms we expect.

  So my vote is we merge our own metadata feature and not fret over how
 metadata may end up working in Glance.

  Thanks,

  Tim

  --
 *From:* Iccha Sethi [iccha.se...@rackspace.com]
 *Sent:* Thursday, July 24, 2014 4:02 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Glance][Trove] Metadata Catalog

   +1

  We are unsure when these changes will get into glance.
 IMO we should go ahead with our instance metadata patch for now, and when
 things are ready in glance land we can consider migrating to using that as
 a generic metadata repository.

  Thanks,
 Iccha

   From: Craig Vyvial cp16...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, July 24, 2014 at 3:04 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Glance][Trove] Metadata Catalog

   Denis,

  The scope of the metadata api goes beyond just using the glance
 metadata. The metadata can be used for instances and other objects to
 add extra data like tags or something else that a UI might want to
 use. We need this feature either way.

  -Craig


 On Thu, Jul 24, 2014 at 12:17 PM, Amrith Kumar amr...@tesora.com wrote:

  Speaking as a ‘database guy’ and a ‘Trove guy’, I’ll say this;
 “Metadata” is a very generic term and the meaning of “metadata” in a
 database context is very different from the meaning of “metadata” in the
 context that Glance is providing.



 Furthermore the usage and access pattern for this metadata, the frequency
 of change, and above all the frequency of access are fundamentally
 different between Trove and what Glance appears to be offering, and we
 should probably not get too caught up in the project “title”.



 We would not be “reinventing the wheel” if we implemented an independent
 metadata scheme for Trove; we would be implementing the right kind of wheel
 for the vehicle that we are operating. Therefore I do not agree with your
 characterization that concludes that:



  given goals at [1] are out of scope of Database program, etc



 Just to be clear, when you write:



  Unfortunately, we (Trove devs) are halfway to metadata …



 it is vital to understand that our view of “metadata” is very different
 from, for example, a file system’s view of metadata, or potentially
 Glance’s view of metadata. For that reason, I believe that your comments
 on https://review.openstack.org/#/c/82123/16 are also somewhat extreme.



 Before postulating a solution (or “delegating development to Glance
 devs”), it would be more useful to fully describe the problem being solved
 by Glance and the problem(s) we are looking to solve in Trove, and then we
 could have a meaningful discussion about the right solution.



 I submit to you that we will come away concluding that there is a round
 peg, and a square hole. Yes, one will fit in the other but the final
 product will leave neither party particularly happy with the end result.



 -amrith



 *From:* Denis Makogon 

Re: [openstack-dev] ova support in glance

2014-07-26 Thread Mark Washenberger
Thanks for sending out this message Malini.

I'm really pleased that the image import mechanism we've been working on
in Glance for a while is going to be helpful for supporting this kind of
use case.

The problem that I see is one of messaging. If we tell end users that
OpenStack can import and run OVAs I think we're probably setting
ourselves up for a serious problem with expectations. Since an OVA is *not*
an image, and actually could be much broader in scope or more constrained,
I'm worried that this import will fail for most users most of the time.
This just creates a negative impression of our cloud, and may cause a
significant support headache for some of our deployers.

The plan I propose to respond to this challenge is as follows:

1) develop the initial OVA image import out of tree
- the basic functionality is just to grab the root disk out of the ova
and to set image properties based on some of the ovf metadata (see the
sketch after this list)
2) assess what the median level of OVA complexity is out there in the wild
among OVA users
3) make sufficient progress with artifacts to ensure we can cover the
median level of OVA complexity in an OpenStack accessible way
- openstack accessible to me means there probably has to be qemu-image
/ libvirt / heat support for a given OVA concept
4) Bring OVA import into the main tree as part of the General Import [1]
operation once that artifact progress has been made
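Step 1 is mechanically simple, since an OVA is just a tar archive; a sketch:

```python
import tarfile

def extract_root_disk(ova_path):
    """Pull the OVF descriptor and the first disk image out of an OVA."""
    with tarfile.open(ova_path) as tar:
        names = tar.getnames()
        ovf_name = next(n for n in names if n.lower().endswith('.ovf'))
        disk_name = next(n for n in names
                         if n.lower().endswith(('.vmdk', '.img', '.qcow2')))
        ovf_xml = tar.extractfile(ovf_name).read()  # parse for image properties
        return ovf_xml, tar.extractfile(disk_name)  # file-like root disk stream
```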

However, I'm very interested to know if there are some folks more embedded
with operators and deployers who can reassure me that this OVA messaging
problem can be dealt with another way.

Thanks!


[1] As a reminder, the General Import item on our hazy future backlog is
different from Image Import in the following way. For an image import,
you are explicitly trying to create an image. For the general import, you
show up to the cloud with some information and just ask for it to be
imported, the import task itself will inspect the data you provide to
determine what, if anything, can be created for it. This works well for
OVAs because we may want to produce a disk image, a block device mapping
artifact, or even up to the level of a heat template.


On Fri, Jul 25, 2014 at 7:08 PM, Bhandaru, Malini K 
malini.k.bhand...@intel.com wrote:

 Hello Everyone!

 We were discussing the following blueprint in Glance:
 Enhanced-Platform-Awareness-OVF-Meta-Data-Import :
 https://review.openstack.org/#/c/104904/

 The OVA format is very rich and the proposal here, in its first incarnation,
 is essentially to untar the ova package, import the first disk image therein,
 parse the ovf file, and attach metadata to the disk image.
 There is a nova effort in a similar vein that supports OVA, limiting its
 availability to the VMWare hypervisor. Our efforts will combine.

 The issue that is raised is how many openstack users and OpenStack cloud
 providers tackle OVA data with multiple disk images, using them as an
 application.
 Do your users use OVAs with content other than 1 disk image + OVF?
 That is, do they have other files that are used? Do any of you use OVAs
 with snapshot chains?
 Would this solution path break your system and result in unhappy users?


 If the solution addresses at least 50% of the use cases (a low bar) and
 eases deploying NFV applications, this would be worthy.
 If so, how would we message around this so as not to imply that OpenStack
 supports OVA in its full glory?

 Down the road the Artifacts blueprint will provide a placeholder for OVA.
 Perhaps even the OVA format may be transformed into a Heat template to work
 in OpenStack.

 Please do provide us your feedback.
 Regards
 Malini



[openstack-dev] [Glance] Mid Cycle Meetup: Date and Location Set

2014-06-13 Thread Mark Washenberger
Hi folks,

In yesterday's team meeting, we picked July 24-25 in the San Francisco Bay
Area for our meetup time and place. More details will be added here:
https://etherpad.openstack.org/p/glance-juno-mid-cycle-meeting

This positions our meeting just a few days before the Nova meeting in
Portland, for those who were hoping to attend both.

Cheers,
markwash


[openstack-dev] [Glance] Nominating Nikhil Komawar for Core

2014-06-12 Thread Mark Washenberger
Hi folks,

I'd like to nominate Nikhil Komawar to join glance-core. His code and
review contributions over the past years have been very helpful and he's
been taking on a very important role in advancing the glance tasks work.

If anyone has any concerns, please let me know. Otherwise I'll make the
membership change next week (which is code for, when someone reminds me to!)

Thanks!
markwash


Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-06-11 Thread Mark Washenberger
I think the tasks stuff is something different, though. A task is a
(potentially) long-running operation. So it would be possible for an action
to result in the creation of a task. As the proposal stands today, the
actions we've been looking at are an alternative to the document-oriented
PATCH HTTP verb. There was nearly unanimous consensus that we found POST
/resources/actions/verb {inputs to verb} to be a more expressive and
intuitive way of accomplishing some workflows than trying to use JSON-PATCH
documents.
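Side by side, the two styles look like this (a sketch using the requests library; the endpoint, ids, and bodies are illustrative):

```python
import requests

BASE = 'http://glance.example.com'
image_id = 'deadbeef-0000-...'   # illustrative
server_id = 'cafef00d-0000-...'  # illustrative

# Glance proposal: the verb lives in the URL, the body holds only its inputs
requests.post(BASE + '/v2/images/%s/actions/deactivate' % image_id, json={})

# Nova style, as depicted in the thread below: the verb lives in the body
requests.post(BASE + '/v2/servers/%s/action' % server_id,
              json={'type': 'reboot'})
```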


On Tue, Jun 10, 2014 at 4:15 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Wed, Jun 4, 2014 at 11:54 AM, Sean Dague s...@dague.net wrote:

 On 05/30/2014 02:22 PM, Hemanth Makkapati wrote:
  Hello All,
  I'm writing to notify you of the approach the Glance community has
  decided to take for doing functional API.  Also, I'm writing to solicit
  your feedback on this approach in the light of cross-project API
  consistency.
 
  At the Atlanta Summit, the Glance team has discussed introducing
  functional API in Glance so as to be able to expose operations/actions
  that do not naturally fit into the CRUD-style. A few approaches are
  proposed and discussed here
  
 https://etherpad.openstack.org/p/glance-adding-functional-operations-to-api
 .
  We have all converged on the approach to include 'action' and action
  type in the URL. For instance, 'POST
  /images/{image_id}/actions/{action_type}'.
 
  However, this is different from the way Nova does actions. Nova includes
  action type in the payload. For instance, 'POST
   /servers/{server_id}/action {"type": "action_type", ...}'. At this
  point, we hit a cross-project API consistency issue mentioned here
  
 https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis
 
  (under the heading 'How to act on resource - cloud perform on
  resources'). Though we are differing from the way Nova does actions and
  hence another source of cross-project API inconsistency , we have a few
  reasons to believe that Glance's way is helpful in certain ways.
 
  The reasons are as following:
  1. Discoverability of operations.  It'll be easier to expose permitted
   actions through schemas or a JSON home document living at
  /images/{image_id}/actions/.
  2. More conducive for rate-limiting. It'll be easier to rate-limit
  actions in different ways if the action type is available in the URL.
  3. Makes more sense for functional actions that don't require a request
  body (e.g., image deactivation).
 
  At this point we are curious to see if the API conventions group
  believes this is a valid and reasonable approach.
  Any feedback is much appreciated. Thank you!

 Honestly, I like POST /images/{image_id}/actions/{action_type} much
 better than ACTION being embedded in the body (the way nova currently
 does it), for the simple reason of reading request logs:


 I agree that not including the action type in the POST body is much nicer
 and easier to read in logs, etc.

 That said, I prefer to have resources actually be things that the software
 creates. An action isn't created. It is performed.

 I would prefer to replace the term action(s) with the term task(s), as
 is proposed for Nova [1].

 Then, I'd be happy as a pig in, well, you know.

 Best,
 -jay

 [1] https://review.openstack.org/#/c/86938/



Re: [openstack-dev] [Glance] [TC] Program Mission Statement and the Catalog

2014-06-05 Thread Mark Washenberger
On Thu, Jun 5, 2014 at 1:43 AM, Kuvaja, Erno kuv...@hp.com wrote:

  Hi,



 +1 for the mission statement, but indeed why 2 changes?


I thought perhaps this way is more explicit. First we're adopting a
straightforward mission statement which has been lacking for some time.
Next we're proposing a new mission in line with our broader aspirations.
However I'm quite happy to squash them together if folks prefer.




 -  Erno (jokke)



 *From:* Mark Washenberger [mailto:mark.washenber...@markwash.net]
 *Sent:* 05 June 2014 02:04
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Glance] [TC] Program Mission Statement and
 the Catalog
 *Importance:* High



 Hi folks,



  I'd like to propose that the Images program adopt a mission statement [1]
 and then change it to reflect our new aspirations of acting as a Catalog
 that works with artifacts beyond just disk images [2].



  Since the Glance mini summit early this year, momentum has been building
  significantly behind the catalog effort and I think it's time we recognize
  it officially, to ensure further growth can proceed and to clarify the
  interactions the Glance Catalog will have with other OpenStack projects.



 Please see the linked openstack/governance changes, and provide your
 feedback either in this thread, on the changes themselves, or in the next
 TC meeting when we get a chance to discuss.



 Thanks to Georgy Okrokvertskhov for coming up with the new mission
 statement.



 Cheers

 -markwash



 [1] - https://review.openstack.org/98001

 [2] - https://review.openstack.org/98002





Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-06-04 Thread Mark Washenberger
I will provide a little more context for the TC audience. I asked Hemanth
to tag this message [TC] because at the Juno summit in the cross-project
track there was discussion of cross-project api consistency [1]. The main
outcome of that meeting was that TC should recommend API conventions via
openstack/governance as defined by those interested in the community. If
you dig further into that etherpad, I believe there is a writeup of
actions but I don't think we actually found time to hit that point during
the discussion.

Thanks!


[1] -
https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis


On Fri, May 30, 2014 at 11:22 AM, Hemanth Makkapati 
hemanth.makkap...@rackspace.com wrote:

  Hello All,
 I'm writing to notify you of the approach the Glance community has decided
 to take for doing functional API.  Also, I'm writing to solicit your
 feedback on this approach in the light of cross-project API consistency.

 At the Atlanta Summit, the Glance team has discussed introducing
 functional API in Glance so as to be able to expose operations/actions that
 do not naturally fit into the CRUD-style. A few approaches are proposed and
 discussed here
 https://etherpad.openstack.org/p/glance-adding-functional-operations-to-api.
 We have all converged on the approach to include 'action' and action type
 in the URL. For instance, 'POST /images/{image_id}/actions/{action_type}'.

 However, this is different from the way Nova does actions. Nova includes
 action type in the payload. For instance, 'POST /servers/{server_id}/action
 {"type": "action_type", ...}'. At this point, we hit a cross-project API
 consistency issue mentioned here
 https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis
 (under the heading 'How to act on resource - cloud perform on resources').
 Though we are differing from the way Nova does actions and hence another
 source of cross-project API inconsistency , we have a few reasons to
 believe that Glance's way is helpful in certain ways.


 The reasons are as following:
 1. Discoverability of operations.  It'll be easier to expose permitted
 actions through schemas or a JSON home document living at
 /images/{image_id}/actions/.
 2. More conducive for rate-limiting. It'll be easier to rate-limit actions
 in different ways if the action type is available in the URL.
 3. Makes more sense for functional actions that don't require a request
 body (e.g., image deactivation).

 At this point we are curious to see if the API conventions group believes
 this is a valid and reasonable approach.

 Any feedback is much appreciated. Thank you!

 Regards,
 Hemanth Makkapati



[openstack-dev] [Glance] [TC] Program Mission Statement and the Catalog

2014-06-04 Thread Mark Washenberger
Hi folks,

I'd like to propose that the Images program adopt a mission statement [1] and
then change it to reflect our new aspirations of acting as a Catalog that
works with artifacts beyond just disk images [2].

Since the Glance mini summit early this year, momentum has been building
significantly behind the catalog effort and I think it's time we recognize it
officially, to ensure further growth can proceed and to clarify the
interactions the Glance Catalog will have with other OpenStack projects.

Please see the linked openstack/governance changes, and provide your
feedback either in this thread, on the changes themselves, or in the next
TC meeting when we get a chance to discuss.

Thanks to Georgy Okrokvertskhov for coming up with the new mission
statement.

Cheers
-markwash

[1] - https://review.openstack.org/98001
[2] - https://review.openstack.org/98002


[openstack-dev] [Glance] Mid Cycle Meetup Survey

2014-05-15 Thread Mark Washenberger
Hi Folks!

Ashwhini has put together a great survey to help us plan our Glance mid
cycle meetup. Please fill it out if you think you might be interested in
attending! In particular we're trying to figure out sponsorship and
location. If you have no location preference, feel free to leave those
check boxes blank.

https://docs.google.com/forms/d/1rygMU1fXcBYn9_NgvEtjoCXlRQtlIA1UCqsQByxbTA8/viewform

Cheers,
markwash


[openstack-dev] [Glance] Call for alternate session leader for Signed Images

2014-05-07 Thread Mark Washenberger
Hi folks,

Unfortunately, the leader for one of our proposed sessions is now unable to
attend the summit. The topic in question is Signed Images [1] and was
allocated a half-session slot. This is a call out to see if there are any
other folks who would like to lead this discussion. If not, no big deal. We
will have other things to discuss during that time.

Thanks!


[1] - http://summit.openstack.org/cfp/details/79


[openstack-dev] [Glance] design summit sessions

2014-04-25 Thread Mark Washenberger
The first draft of the Glance design summit sessions has been posted at
http://junodesignsummit.sched.org/overview/type/glance. We may still
shuffle the times and the exact split of the topics around a bit if there
are opportunities for improvement.

I would like to ask at this time if key contributors to these sessions
would please let me know if this schedule creates any significant time
conflicts for you.

Otherwise, session leads, please start assembling etherpads for your
sessions and invite comments from some other Glance folks. Our goal at this
point is to ensure that our design summit sessions are as engaging,
relevant, and productive as possible. We will discuss progress on this
front at the upcoming glance team meeting. [1]

Thanks!


[1] -
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20140501T20&ah=1


[openstack-dev] [Glance] Announcing glance-specs repo

2014-04-25 Thread Mark Washenberger
Hey hey glancy glance,

Recently glance drivers made a somewhat snap decision to adopt the -specs
gerrit repository approach for new blueprints.

Pursuant to that, Arnaud has been kind enough to put forward some infra
patches to set things up. After the patches to create the repo [1] and
enable tests [2] land, we will need one more patch to add the base
framework to the glance-specs repo, so there is a bit of time needed before
people will be able to submit their specs.

I'd like to see us use this system for Juno blueprints. I think it would
also be very helpful if any blueprints being discussed at the design summit
could adopt this format in time for review prior to the summit (which is
just over two weeks away). I understand that this is all a bit late in the
game to make such requirements, so obviously we'll try to be very
understanding of any difficulties.

Additionally, if any glance folks have serious reservations about adopting
the glance-specs repo, please speak up now.

Thanks again to Arnaud for spearheading this effort. And thanks to the Nova
crew for paving a nice path for us to follow.

Cheers,
markwash


[1] - https://review.openstack.org/#/c/90461/
[2] - https://review.openstack.org/#/c/90469/


Re: [openstack-dev] [Glance] Ideas needed for v2 registry testing

2014-04-18 Thread Mark Washenberger
Hi Erno,

Just looking for a little more information here. What are the particular
areas around keystone integration in the v2 api+registry stack that you
want to test? Is the v2 api + v2 registry stack using keystone differently
than how v1 api + v1 registry stack uses it?

Thanks


On Fri, Apr 18, 2014 at 6:35 AM, Erno Kuvaja kuv...@hp.com wrote:

 Hi all,

 I have been trying to enable functional testing for Glance API v2 using
 data_api = glance.db.registry.api without great success.

 The current functionality of the v2 api+reg relies on the fact that
 keystone is used, and our current tests do not facilitate that expectation.

 I do not like either option I have managed to come up with, so now it is time
 to call for help. Currently the only way I see we could run the registry
 tests is to convert our functional tests to use keystone instead of noauth,
 or to write a test suite that bypasses the API server and targets the
 registry directly. Neither of these is great: starting keystone would make
 the already long-running functional tests even longer and more of a resource
 hog, and on top of that we would need to pull in keystone just to run glance
 tests; on the other hand, bypassing the API server would not give us any
 guarantee that the behavior of glance is the same regardless of which
 data_api is used.

 At this point any ideas/discussion would be more than welcome on how we can
 get these tests running in both configurations.

 Thanks,
 Erno



Re: [openstack-dev] OpenStack VM Import/Export

2014-04-07 Thread Mark Washenberger
Hi Saju,

VM imports are likely to show up in Glance under this blueprint:
https://blueprints.launchpad.net/glance/+spec/new-upload-workflow

Cheers,
markwash


On Mon, Apr 7, 2014 at 12:06 AM, Saju M sajup...@gmail.com wrote:

 Hi,

 Amazon provides an option to Import/Export VMs.
 http://aws.amazon.com/ec2/vm-import/

 Does OpenStack have the same feature?
 Has anyone started to implement this in OpenStack? If yes, please point
 me to the blueprint. I would like to work on that.


 Regards
 Saju Madhavan
 +91 09535134654



Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Mark Washenberger
On Thu, Apr 3, 2014 at 10:50 AM, Keith Bray keith.b...@rackspace.com wrote:

  Steve, agreed.  Your description I believe is the conclusion that the
 community came to when this was previously discussed, and we managed to get
 the implementation of parameter grouping and ordering [1] that you
 mentioned which has been very helpful.  I don't think we landed the
 keywords blueprint [2], which may be controversial because it is
 essentially unstructured. I wanted to make sure Mike had the links for
 historical context, but certainly understand and appreciate your point of
 view here.  I wasn't able to find the email threads to point Mike to, but
 assume they exist in the list archives somewhere.

  We proposed another specific piece of template data [3]; I can't
 remember whether it was met with resistance or we just didn't get to
 implementing it, since we knew we would have to store other data specific to
 our use cases in other files anyway. We decided to go with storing our
 extra information in a catalog (really just a Git repo with a README.MD [4])
 for now, until we can implement acceptable catalog functionality
 somewhere like Glance, hopefully in the Juno cycle.  When we want to share
 the template, we share all the files in the repo (inclusive of the
 README.MD).  It would be more ideal if we could share a single file
 (package) inclusive of the template and corresponding help text and any
 other UI hint info that would helpful.  I expect service providers to have
 differing views of the extra data they want to store with a template... So
 it'd just be nice to have a way to account for service providers to store
 their unique data along with a template that is easy to share and is part
 of the template package.  We bring up portability and structured data
 often, but I'm starting to realize that portability of a template breaks
 down unless every service provider runs exactly the same Heat resources,
 same image IDs, flavor types, etc. I'd like to drive more standardization
 of data for image and template data into Glance so that in HOT we can just
 declare things like Linux, Flavor Ubuntu, latest LTS, minimum 1Gig and
 automatically discover and choose the right image to provision, or error if
 a suitable match can not be found.


Yes, this is exactly the use case that has been driving our consideration
of the artifacts resource in Glance.

You mentioned discovery of compatible resources. I think it's an important
use case, but I think the export and import approach can also be very
useful and I'd like to believe it is the general solution to cloud
portability.


  The Murano team has been hinting at wanting to solve a similar problem,
 but with a broader vision: a complex, multi-application declaration
 perspective that crosses multiple templates, or a layer above just
 matching what capabilities Heat resources provide, matching against
 capabilities that a catalog of templates provides (and mixing that with
 capabilities the cloud API services provide). I'm not yet convinced that
 can't be done with a parent Heat template, since we already have the
 declarative constructs and language well defined, but I appreciate the use
 case and perspective those folks are bringing to the conversation.

  [1]
 https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
  https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering

  [2] https://blueprints.launchpad.net/heat/+spec/stack-keywords
 https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords

  [3] https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
 https://wiki.openstack.org/wiki/Heat/UI#Help_Text

  [4] Ex. Help Text accompanying a template in README.MD format:
 https://github.com/rackspace-orchestration-templates/docker

  -Keith

   From: Steven Dake sd...@redhat.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, April 3, 2014 10:30 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [heat] metadata for a HOT

   On 04/02/2014 08:41 PM, Keith Bray wrote:

 https://wiki.openstack.org/wiki/Heat/StackMetadata

  https://wiki.openstack.org/wiki/Heat/UI

  -Keith

  Keith,

 Taking a look at the UI specification, I thought I'd take a look at adding
 parameter grouping and ordering to the hot_spec.rst file.  That seems like
 a really nice constrained use case with a clear way to validate that folks
 aren't adding magic to the template for their custom environments.  During
 that, I noticed it is already implemented.

 What is nice about this specific use case is it is something that can be
 validated by the parser.  For example, the parser could enforce that
 parameters in the parameter-groups section actually exist as parameters in
 the parameters section.  Essentially this particular use case *enforces*
 good heat template 
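The parser-level check Steven describes might look roughly like this (a sketch over plain template dicts):

```python
def validate_parameter_groups(template):
    # every parameter a group references must be declared under 'parameters'
    declared = set(template.get('parameters', {}))
    for group in template.get('parameter_groups', []):
        for name in group.get('parameters', []):
            if name not in declared:
                raise ValueError(
                    'parameter_groups references undeclared parameter %r' % name)
```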

Re: [openstack-dev] [db][all] (Proposal) Restorable Delayed deletion of OS Resources

2014-03-17 Thread Mark Washenberger
On Thu, Mar 13, 2014 at 12:42 PM, Boris Pavlovic bpavlo...@mirantis.comwrote:

 Hi stackers,

 As a result of discussion:
 [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
 (step by step)
 http://osdir.com/ml/openstack-dev/2014-03/msg00947.html

 I understood that there should be another proposal about how we should
 implement Restorable & Delayed Deletion of OpenStack resources in a common
 way, without these hacks with soft deletion in the DB.  It is actually very
 simple; take a look at this document:


 https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing


 Best regards,
 Boris Pavlovic




Hi Boris,

Before I voice a little disagreement, I'd like to thank you for kicking off
this discussion and stress that I strongly agree with your view (pulled
from the other thread)

 To put in a nutshell: Restoring Deleted resources / Delayed Deletion !=
Soft deletion.

This is absolutely correct and the key to unlocking the problem we have.

However, because of migrations, and because being explicit is better than
being implicit, I disagree with the idea of lumping all deleted resources
into the same table. For glance, I'd much rather have a table
deleted_images than a table deleted_resources that has some image
entries. There are a number of reasons; I'll try to give a quick high-level
view of them.

1) Migrations for deleted data are more straightforward and more obviously
necessary.
2) It is possible to make specific modifications to the deleted_X schema.
3) It is possible to take the many tables that are used to represent a single
active resource (images, image_locations, image_tags, image_properties) and
combine them into a single table for a deleted resource. This is actually
super important, as today we have the problem of not always knowing which
image_properties were deleted prior to the image deletion vs. the ones that
were deleted as a part of the image deletion.
4) It makes it a conscious choice to have certain types of resources be
restorable or have delayed deletes. As you said before, many types of
resources just don't need this functionality, so let's not make it a
feature of the common base class.

(I am assuming for #2 and #3 that this common approach would be implemented
something like deleted_resource['data'] =
json.dumps(dict(active_resource)), sorry if that is seriously incorrect.)
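
To make #2 and #3 concrete, here is one hedged sketch of what such a
per-resource table and its serialization might look like (the names and
columns are illustrative, not a proposed schema):

    import json

    from sqlalchemy import Column, DateTime, MetaData, String, Table, Text

    metadata = MetaData()

    # One table per deleted resource type, rather than a generic
    # deleted_resources table shared by every project.
    deleted_images = Table(
        'deleted_images', metadata,
        Column('id', String(36), primary_key=True),    # original image id
        Column('deleted_at', DateTime, nullable=False),
        Column('data', Text, nullable=False),          # see below
    )

    def serialize_deleted_image(image, locations, tags, properties):
        # Fold the active-resource tables into a single row, preserving
        # exactly which properties existed at deletion time (reason #3).
        return json.dumps({'image': image, 'locations': locations,
                           'tags': tags, 'properties': properties})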

Thanks for your consideration,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Mark Washenberger
Hi Anna,


On Thu, Mar 13, 2014 at 8:36 AM, Anna A Sortland annas...@us.ibm.comwrote:

 [A] The current keystone LDAP community driver returns all users that
 exist in LDAP via the API call v3/users, instead of returning just users
 that have role grants (similar processing is true for groups). This could
 potentially be a very large number of users. We have seen large companies
 with LDAP servers containing hundreds of thousands of users. We are aware
 of the filters available in keystone.conf ([ldap].user_filter and
 [ldap].query_scope) to cut down on the number of results, but they do not
 provide sufficient filtering (for example, it is not possible to set
 user_filter to members of certain known groups for OpenLDAP without
 creating a memberOf overlay on the LDAP server).

 [Nathan Kinder] What attributes would you filter on?  It seems to me that
 LDAP would need to have knowledge of the roles to be able to filter based
 on the roles.  This is not necessarily the case, as identity and assignment
 can be split in Keystone such that identity is in LDAP and role assignment
 is in SQL.  I believe it was designed this way to deal with deployments
 where LDAP already exists and there is no need (or possibility) of adding
 role info into LDAP.

 [A] That's our main use case. The users and groups are in LDAP and role
 assignments are in SQL.
 You would filter on role grants and this information is in SQL backend. So
 new API would need to query both identity and assignment drivers.


From my perspective, it seems there is a chicken-and-egg problem with this
proposal. If a user doesn't have a role assigned, the user does not show up
in the list. But if the user doesn't show up in the list, the user
effectively doesn't exist. And if the user doesn't exist, you cannot add a
role to it.

Perhaps what is needed is just some sort of filter to listing users that
only returns users with a role in the cloud?




 [Nathan Kinder] Without filtering based on a role attribute in LDAP, I
 don't think that there is a good solution if you have OpenStack and
 non-OpenStack users mixed in the same container in LDAP.
 If you want to first find all of the users that have a role assigned to
 them in the assignments backend, then pull their information from LDAP, I
 think that you will end up with one LDAP search operation per user. This
 also isn't a very scalable solution.

 [A] What was the reason the LDAP driver was written this way, instead of
 returning just the users that have OpenStack-known roles? Was the creation
 of a separate API for this function considered?
 Are other exploiters of OpenStack (or users of Horizon) experiencing this
 issue? If so, what was their approach to overcome this issue? We have been
 prototyping a keystone extension that provides an API that provides this
 filtering capability, but it seems like a function that should be generally
 available in keystone.

 [Nathan Kinder] I'm curious to know how your prototype is looking to
 handle this.

 [A] The prototype basically first calls assignment API
 list_role_assignments() to get a list of users and groups with role grants.
 It then iterates the retrieved list and calls identity API
 list_users_in_group() to get the list of users in these groups with grants
 and get_user() to get users that have role grants but do not belong to the
 groups with role grants (a call for each user). Both calls ignore groups
 and users that are not found in the LDAP registry but exist in SQL (this
 could be the result of a user or group being removed from LDAP, but the
 corresponding role grant was not revoked). Then the code removes duplicates
 if any and returns the combined list.
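
 A rough sketch of that flow, using the driver methods named above and
 assuming each assignment record carries either a user_id or a group_id
 (error handling and the skip-missing-LDAP-entries logic are elided):

     def list_users_with_role_grants(assignment_api, identity_api):
         # 1) Get every role grant from the SQL assignment backend.
         users = {}
         for grant in assignment_api.list_role_assignments():
             if grant.get('group_id'):
                 # 2) Expand group grants into member users (LDAP).
                 for user in identity_api.list_users_in_group(
                         grant['group_id']):
                     users[user['id']] = user
             elif grant.get('user_id'):
                 # 3) Fetch directly granted users one at a time (LDAP).
                 users[grant['user_id']] = identity_api.get_user(
                     grant['user_id'])
         # Keying by user id removes the duplicates before returning.
         return list(users.values())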

 The new extension API is /v3/my_new_extension/users. Maybe the better
 naming would be v3/roles/users (list users with any role) - compare to
 existing v3/roles/{role_id}/users  (list users with a specified role).

 Another alternative that we've tried is just a new identity driver that
 inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides
 just the list_users() function. That's probably not the best approach from
 OpenStack standards point of view but I would like to get community's
 feedback on whether this is acceptable.


 I posted this question to openstack-security last week but could not
 get any feedback after Nathan's first reply. Reposting to openstack-dev.



 Anna Sortland
 Cloud Systems Software Development
 IBM Rochester, MN
 annas...@us.ibm.com





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Need to revert Don't enable all stores by default

2014-03-12 Thread Mark Washenberger
On Wed, Mar 12, 2014 at 6:40 AM, Sean Dague s...@dague.net wrote:

 On 03/12/2014 09:01 AM, Flavio Percoco wrote:
  On 11/03/14 16:25 -0700, Clint Byrum wrote:
  Hi. I asked in #openstack-glance a few times today but got no response,
  so sorry for the list spam.
 
  https://review.openstack.org/#/c/79710/
 
  This change introduces a backward incompatible change to defaults with
  Havana. If a user has chosen to configure swift, but did not add swift
  to the known_stores, then when that user upgrades Glance, Glance will
  fail to start because their swift configuration will be invalid.
 
  This broke TripleO btw, which tries hard to use default configurations.
 
  Also I am not really sure why this approach was taken. If a user has
  explicitly put swift configuration options in their config file, why
  not just load swift store? Oslo.config will help here in that you can
  just add all of the config options but not actually expect them to be
  set. It seems entirely backwards to just fail in this case.
 
 
  Here's an attempt to fix this issue without reverting the patch.
  Feedback appreciated.
 
  https://review.openstack.org/#/c/79935/

 ACK. Looks pretty good. You might want to consider using one of the oslo
 deprecation functions to make it consistent on the deprecation side.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net




Sorry, I suppose I should have interrogated the backwards-incompatibility
assumptions people were making about this change a bit more.

It looks like the latest patch is a great deprecation mechanism. Thanks for
working out a solution, Flavio et al.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Nominating Arnaud Legendre for Glance Core

2014-03-12 Thread Mark Washenberger
Hi folks,

I'd like to nominate Arnaud Legendre to join Glance Core. Over the past
cycle his reviews have been consistently high quality and I feel confident
in his ability to assess the design of new features and the overall
direction for Glance.

If anyone has any concerns, please share them with me. If I don't hear any,
I'll make the membership change official in about a week.

Thanks for your consideration. And thanks for all your hard work, Arnaud!

markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-03 Thread Mark Washenberger
On Sat, Mar 1, 2014 at 12:51 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-02-28 at 15:25 -0800, Mark Washenberger wrote:
  I believe we have some agreement here. Other openstack services should
  be able to use a strongly typed identifier for users. I just think if
  we want to go that route, we probably need to create a new field to
  act as the proper user uuid, rather than repurposing the existing
  field. It sounds like many existing LDAP deployments would break if we
  repurpose the existing field.

 Hi Mark,

 Please see my earlier response on this thread. I am proposing putting
 external identifiers into a mapping table that would correlate a
 Keystone UUID user ID with external identifiers (of variable length).


The thing you seem to be missing is that the current user-id attribute is
an external identifier depending on the identity backend you're using
today. For example in the LDAP driver it is the CN by default (which is
ridiculous for a large number of reasons, but let's leave those aside.) So
if you want to create a new, strongly typed internal uuid identifier that
makes the db performance scream, more power to you. But it's going to have
to be a new field.




 Once authentication has occurred (with any auth backend including LDAP),
 Keystone would only communicate to the other OpenStack services the UUID
 user ID from Keystone. This would indeed require a migration to each
 non-Keystone service that stores the user IDs as-is from Keystone
 currently (such as Glance or Nova).

 Once the migrations are run, then only UUID values would be stored, and
 further migrations could be run that would streamline the columns that
 store these user IDs to a more efficient CHAR(32) or BINARY(16)
 internal storage format.

 Hope that clears things up.

 Best,
 -jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Mark Washenberger
On Wed, Feb 26, 2014 at 5:25 AM, Dolph Mathews dolph.math...@gmail.comwrote:


 On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).

 -1

 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.


 Morgan and I talked this suggestion through last night and agreed it's
 probably the best approach, and has the benefit of zero impact on other
 services, which is something we're obviously trying to avoid. I imagine it
 could be as simple as a user_id to domain_id lookup table. All we really
 care about is given a globally unique user ID, which identity backend is
 the user from?

 On the downside, it would likely become bloated with unused ephemeral user
 IDs, so we'll need enough metadata about the mapping to implement a purging
 behavior down the line.
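
For concreteness, the kind of lookup table being proposed might look roughly
like this (an illustrative sketch, not Keystone's eventual schema; a real
version would also need the purge metadata mentioned above):

    from sqlalchemy import Column, MetaData, String, Table

    metadata = MetaData()

    # Maps the internal UUID that other services see to the external
    # identifier and the identity backend (domain) it came from.
    id_mapping = Table(
        'id_mapping', metadata,
        Column('public_id', String(64), primary_key=True),  # Keystone UUID
        Column('domain_id', String(64), nullable=False),    # which backend
        Column('local_id', String(255), nullable=False),    # e.g. an LDAP CN
    )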


Is this approach planning on reusing the existing user-id field, then? It
seems like this creates a migration problem for folks who are currently
using user-ids that are generated by their identity backends.





 The only identifiers that would ever be communicated to any non-Keystone
 OpenStack endpoint would be the UUID user and tenant IDs.

  There is the obvious concern that projects are utilizing (and storing)
  the user_id in a field that cannot accommodate the increased upper
  limit. Before this change is merged in, it is important for the
  Keystone team to understand if there are any places that would be
  overflowed by the increased size.

 I would go so far as to say the user_id and tenant_id fields should be
 *reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
 performance reasons. Lengthening commonly-used and frequently-joined
 identifier fields is not a good option, IMO.

 Best,
 -jay

  The review that would implement this change in size
  is https://review.openstack.org/#/c/74214 and is actively being worked
  on/reviewed.
 
 
  I have already spoken with the Nova team, and a single instance has
  been identified that would require a migration (that will have a fix
  proposed for the I3 timeline).
 
 
  If there are any other known locations that would have issues with an
  increased USER_ID size, or any concerns with this change to USER_ID
  format, please respond so that the issues/concerns can be addressed.
   Again, the plan is not to change current USER_IDs but that new ones
  could be up to 255 characters in length.
 
 
  Cheers,
  Morgan Fainberg
  --
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Mark Washenberger
On Fri, Feb 28, 2014 at 10:39 AM, Henry Nash hen...@linux.vnet.ibm.comwrote:

 Hi Mark,

 So we would not modify any existing IDs, so no migration required.


Okay, I just want to be painfully clear--we're not proposing changing any
of the current restrictions on the user-id field. We will not:
  - require it to be a uuid
  - encode it as binary instead of char
  - shrink its size below the current 64 characters

Any of those could require a migration for existing IDs depending on how
your identity driver functions.

If I'm just being Chicken Little, please reassure me once more and I'll be
quiet :-)




 Henry

 On 28 Feb 2014, at 17:38, Mark Washenberger 
 mark.washenber...@markwash.net wrote:




 On Wed, Feb 26, 2014 at 5:25 AM, Dolph Mathews dolph.math...@gmail.comwrote:


 On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).

 -1

 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.


 Morgan and I talked this suggestion through last night and agreed it's
 probably the best approach, and has the benefit of zero impact on other
 services, which is something we're obviously trying to avoid. I imagine it
 could be as simple as a user_id to domain_id lookup table. All we really
 care about is given a globally unique user ID, which identity backend is
 the user from?

 On the downside, it would likely become bloated with unused ephemeral
 user IDs, so we'll need enough metadata about the mapping to implement a
 purging behavior down the line.


 Is this approach planning on reusing the existing user-id field, then? It
 seems like this creates a migration problem for folks who are currently
 using user-ids that are generated by their identity backends.





 The only identifiers that would ever be communicated to any non-Keystone
 OpenStack endpoint would be the UUID user and tenant IDs.

  There is the obvious concern that projects are utilizing (and storing)
  the user_id in a field that cannot accommodate the increased upper
  limit. Before this change is merged in, it is important for the
  Keystone team to understand if there are any places that would be
  overflowed by the increased size.

 I would go so far as to say the user_id and tenant_id fields should be
 *reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
 performance reasons. Lengthening commonly-used and frequently-joined
 identifier fields is not a good option, IMO.

 Best,
 -jay

  The review that would implement this change in size
  is https://review.openstack.org/#/c/74214 and is actively being worked
  on/reviewed.
 
 
  I have already spoken with the Nova team, and a single instance has
  been identified that would require a migration (that will have a fix
  proposed for the I3 timeline).
 
 
  If there are any other known locations that would have issues with an
  increased USER_ID size, or any concerns with this change to USER_ID
  format, please respond so that the issues/concerns can be addressed.
   Again, the plan is not to change current USER_IDs but that new ones
  could be up to 255 characters in length.
 
 
  Cheers,
  Morgan Fainberg
  --
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Mark Washenberger
On Fri, Feb 28, 2014 at 2:26 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-02-28 at 13:10 -0800, Mark Washenberger wrote:
 
  On Fri, Feb 28, 2014 at 10:39 AM, Henry Nash
  hen...@linux.vnet.ibm.com wrote:
  Hi Mark,
 
 
  So we would not modify any existing IDs, so no migration
  required.
 
 
  Okay, I just want to be painfully clear--we're not proposing changing
  any of the current restrictions on the user-id field. We will not:
- require it to be a uuid
- encode it as binary instead of char
- shrink its size below the current 64 characters

 The first would be required for the real solution. The second and third
 are performance improvements.

  Any of those could require a migration for existing IDs depending on
  how your identity driver functions.

 Personally, I think to fix this issue permanently and properly,
 migrations for database schemas of Glance and Nova would indeed need to
 accompany a proposed patch that restricts the Keystone external user ID
 to only a UUID value.

 I entirely disagree with allowing non-UUID values for the user ID value
 that is exposed outside of Keystone. All other solutions (including the
 proposals to continue using the user_id fields with non-UUID values) are
 just hacks IMO.


I believe we have some agreement here. Other openstack services should be
able to use a strongly typed identifier for users. I just think if we want
to go that route, we probably need to create a new field to act as the
proper user uuid, rather than repurposing the existing field. It sounds
like many existing LDAP deployments would break if we repurpose the
existing field.


 Best,
 -jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-02-21 Thread Mark Washenberger
Hi Georgy,

Thanks for all your efforts putting this together.

In the incubation request, one of the proposals is to include Murano under
an expanded scope of the Images program, renaming it the Catalog program.
I've been extremely pleased with the help you and your colleagues have
provided in defining the broader role for Glance as a more general artifact
repository. However, the proposal to bring all of Murano under the expanded
Images program strains my current understanding of how Images needs to
expand in scope.

Prior to this email, I was imagining that we would expand the Images
program to go beyond storing just block device images, and into more
structured items like whole Nova instance templates, Heat templates, and
Murano packages. In this scheme, Glance would know everything there is to
know about a resource--its type, format, location, size, and relationships
to other resources--but it would not know or offer any links for how a
resource is to be used.

For example, Glance would know the virtual size, the storage format, and
all the data associated with a disk image. But it would not necessarily
know anything about a user's ability to either boot that disk image in Nova
or to populate a Cinder volume with the image data.

I think you make a very good point, however. In an orchestrated view of the
cloud, the most usable approach is to have links directly from a resource
to the actions you can perform with the resource. In pseudocode,
image.boot() rather than nova.boot(image). In this more expansive view of
the Catalog, I think it would make sense to include Murano entirely as part
of the Catalog program.

However, this change seems to me to imply a significant architectural shift
for OpenStack in general, and I'm just not quite comfortable endorsing it.
I'm very eager to hear other opinions about this question--perhaps I am
simply not understanding the advantages.

In any case, I hope these notes help to frame the question of where Murano
can best fit.

Thanks again,
markwash


On Thu, Feb 20, 2014 at 10:35 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 All,

 Murano is the OpenStack Application Catalog service, which has been
 developed on stackforge for almost 11 months. Murano was presented at the
 HK summit on the unconference track, and now we would like to apply for
 incubation during the Juno release.

 As a first step, we would like to get feedback from the TC on Murano's
 readiness from an OpenStack processes standpoint, as well as open up a
 conversation around its mission and how it fits the OpenStack ecosystem.

 Murano incubation request form is here:
 https://wiki.openstack.org/wiki/Murano/Incubation

 As a part of the incubation request, we are looking for advice from the TC
 on the governance model for Murano. Murano may potentially fit into the
 expanding scope of the Images program, if it is transformed into a Catalog
 program. It also potentially fits the Orchestration program, and as a third
 option there might be value in the creation of a new standalone Application
 Catalog program. We have a pros-and-cons analysis in the Murano incubation
 request form.

 The Murano team has been working on Murano as a community project. All our
 code and bugs/specs are hosted on OpenStack Gerrit and Launchpad,
 respectively. Unit tests and all pep8/hacking checks are run on the
 OpenStack Jenkins, and we have integration tests running on our own Jenkins
 server for each patch set. Murano also has all the necessary scripts for
 devstack integration. We have been holding weekly IRC meetings for the last
 7 months, discussing architectural questions there and on the openstack-dev
 mailing list as well.

 Murano related information is here:

 Launchpad: https://launchpad.net/murano

 Murano Wiki page: https://wiki.openstack.org/wiki/Murano

 Murano Documentation: https://wiki.openstack.org/wiki/Murano/Documentation

 Murano IRC channel: #murano

 With this we would like to start the process of incubation application
 review.

 Thanks
 Georgy

 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Mark Washenberger
On Wed, Feb 5, 2014 at 8:22 AM, Thierry Carrez thie...@openstack.orgwrote:

 (This email is mostly directed to PTLs for programs that include one
 integrated project)

 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
 (code you're actually needed to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.

 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition

 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.

 Comments, thoughts ?


I don't have any issue defining what I think of as typical extension /
variation seams in the Glance code base. However, I'm still struggling to
understand what all this means for our projects and our ecosystem.
Basically, why do I care? What are the implications of a 0% vs 100%
designation? Are we hoping to improve interoperability, or encourage more
upstream collaboration, or what?

How many deployments do we expect to get the trademark after this core
definition process is completed?



 --
 Thierry Carrez (ttx)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] mid-cycle meetup?

2014-02-04 Thread Mark Washenberger
I'd like to attend as well, since it is close for me and some upcoming
Glance efforts might be relevant. But I'm definitely more of a chicken
than a pig [1] for this gathering, so let me know if that kind of
participation is not really desired.

[1] http://en.wikipedia.org/wiki/The_Chicken_and_the_Pig


On Tue, Jan 28, 2014 at 9:44 PM, Chris Behrens cbehr...@codestud.comwrote:

 I'd be interested in this.  While I have not provided any contributions to
 Ironic thus far, I'm beginning to look at it for some things.  I am local
 to the bay area, so Sunnyvale is a convenient location for me as well. :)

 - Chris


 On Jan 24, 2014, at 5:30 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 On Fri, Jan 24, 2014 at 2:03 PM, Robert Collins robe...@robertcollins.net
  wrote:

 This was meant to go to -dev, not -operators. Doh.


 -- Forwarded message --
 From: Robert Collins robe...@robertcollins.net
 Date: 24 January 2014 08:47
 Subject: [TripleO] mid-cycle meetup?
 To: openstack-operat...@lists.openstack.org
 openstack-operat...@lists.openstack.org


 Hi, sorry for proposing this at *cough* the mid-way point [christmas
 shutdown got in the way of internal acks...], but who would come if
 there was a mid-cycle meetup? I'm thinking the HP sunnyvale office as
 a venue.

 -Rob



 Hi!

 I'd like to co-locate the Ironic midcycle meetup, as there's a lot of
 overlap between our teams' needs, and facilitating that collaboration will
 be good. I've added the [Ironic] tag to the subject to pull in folks who
 may be filtering on this project specifically. Please keep us in the loop!

 Sunnyvale is easy for me, so I'll definitely be there.

 Cheers,
 Deva


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Mark Washenberger
On Mon, Feb 3, 2014 at 7:13 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:
  IMHO, the bit that should really be optimized is the selection of the
  store nodes where the image should be downloaded from. That is,
  selecting the nearest location from the image locations and this is
  something that perhaps should happen in glance-api, not nova.

 I disagree. The reason is because glance-api does not know where nova
 is. Nova does.

 I continue to think that the best performance gains will come from
 getting rid of glance-api entirely, putting the block-streaming bits
 into a separate Python library, and having Nova and Cinder pull
 image/volume bits directly from backend storage instead of going through
 the glance middleman.


When you say get rid of glance-api, do you mean the glance server project,
or glance-api as opposed to glance-registry? If it's the latter, I think
we're basically in agreement. However, there may be a little bit of a
terminology distinction that is important. Here is the plan that is
currently underway:

1) Deprecate the registry deployment (done when v1 is deprecated)
2) v2 glance api talks directly to the underlying database (done)
3) Create a library in the images program that allows OpenStack projects to
share code for reading image data remotely and picking optimal paths for
bulk data transfer (In progress under the glance.store title)
4) v2 exposes locations that clients can directly access (partially done,
continues to need a lot of improvement)
5) v2 still allows downloading images from the glance server as a
compatibility and lowest-common-denominator feature

In 4, some work is complete, and some more is planned, but we still need
some more planning and design to figure out how to support directly
downloading images in a secure and general way.

Cheers,
markwash


 Best,
 -jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-30 Thread Mark Washenberger
On Wed, Jan 29, 2014 at 5:03 PM, Zane Bitter zbit...@redhat.com wrote:

 On 29/01/14 19:40, Jay Pipes wrote:

 On Wed, 2014-01-29 at 18:55 -0500, Zane Bitter wrote:

 I've noticed a few code reviews for new Heat resource types -
 particularly Neutron resource types - where folks are struggling to find
 the appropriate way to model the underlying API in Heat. This is a
 really hard problem, and is often non-obvious even to Heat experts, so
 here are a few tips that might help.

 Resources are nouns, they model Things. Ideally Things that have UUIDs.
 The main reason to have a resource is so you can reference its UUID (or
 some attribute) and pass it to another resource or to the user via an
 output.

 If two resources _have_ to be used together, they're really only one
 resource. Don't split them up - especially if the one whose UUID other
 resources depend on is the first to be created but not the only one
 actually required by the resource depending on it.


 Right. The above is precisely why I raised concerns about the image
 import/upload tasks work ongoing in Glance.

 https://wiki.openstack.org/wiki/Glance-tasks-import#
 Initial_Import_Request


 At least the dependencies there would be in the right order:

   ImportTask - Image - Server

 but if you were to model this in Heat, there should just be an Image
 resource that does the importing internally.

 (I'm not touching the question of whether Heat should have a Glance Image
 resource at all, which I'm deeply ambivalent about.)


Maybe I'm just missing the use case, but it seems like modeling anything
Glance-y in Heat doesn't quite make sense. If at all, the dependency would
run the other way (model heat-things in glance, just as we presently model
nova-things in glance). So I think we're in agreement.



 cheers,
 Zane.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-30 Thread Mark Washenberger
On Thu, Jan 30, 2014 at 1:54 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Mark Washenberger's message of 2014-01-30 12:41:40 -0800:
  On Wed, Jan 29, 2014 at 5:03 PM, Zane Bitter zbit...@redhat.com wrote:
   (I'm not touching the question of whether Heat should have a Glance
 Image
   resource at all, which I'm deeply ambivalent about.)
  
 
  Maybe I'm just missing the use case, but it seems like modeling anything
  Glance-y in Heat doesn't quite make sense. If at all, the dependency
 would
  run the other way (model heat-things in glance, just as we presently
 model
  nova-things in glance). So I think we're in agreement.
 

 I'm pretty sure it is useful to model images in Heat.

 Consider this scenario:


 resources:
   build_done_handle:
     type: AWS::CloudFormation::WaitConditionHandle
   build_done:
     type: AWS::CloudFormation::WaitCondition
     properties:
       handle: {Ref: build_done_handle}
   build_server:
     type: OS::Nova::Server
     properties:
       image: build-server-image
       userdata:
         join:
           - ""
           - - "#!/bin/bash\n"
             - "build_an_image\n"
             - "cfn-signal -s SUCCESS "
             - {Ref: build_done_handle}
             - "\n"
   built_image:
     type: OS::Glance::Image
     depends_on: build_done
     properties:
       fetch_url:
         join:
           - ""
           - - "http://"
             - {get_attribute: [build_server, fixed_ip]}
             - "/image_path"
   actual_server:
     type: OS::Nova::Server
     properties:
       image: {Ref: built_image}


 Anyway, seems rather useful. Maybe I'm reaching.


Perhaps I am confused. It would be good to resolve that.

I think this proposal makes sense but is distinct from modeling the image
directly. Would it be fair to say that above you are modeling an
image-build process, and the image id/url is an output of that process?
Maybe the distinction I'm making is too fine. The difference is that once
an Image exists, you can pretty much just *download* it; you can't really
do dynamic stuff to it like you can with a nova server instance.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fw: [Glance] Is the 'killed' state ever set in v2?

2014-01-26 Thread Mark Washenberger
It does not seem very ReSTful--or very usable, for that matter--for a
resource to be permanently modified when a PUT fails. So I don't think
we need the 'killed' status. It was purposefully left out of v2 images,
which is not just a reskin of v1.

Perhaps there is still a bug where an image is getting stuck in 'saving' or
some other state when a PUT fails?


On Sun, Jan 26, 2014 at 5:10 AM, David Koo kpublicm...@gmail.com wrote:


 Hi Fei,

 Thanks for the confirmation.

  I think you're right. The 'killed' status should be set in method
 upload()
  if there is an upload failure, see
 
 https://github.com/openstack/glance/blob/master/glance/common/utils.py#L244

 I think you meant:


 https://github.com/openstack/glance/blob/master/glance/api/v1/upload_utils.py#L244

 (the safe_kill() call) right?

 --
 Koo


  -- Original --
  From:  David Kookpublicm...@gmail.com;
  Date:  Jan 26, 2014
  To:  OpenStack Development Mailing
  Listopenstack-dev@lists.openstack.org;
  Subject:  [openstack-dev] [Glance] Is the 'killed' state ever set in v2?
 
  Hi All,
 
  While trying to work on a bug I was trying to simulate some image
  download failures and found that apparently the 'killed' state is never
  set using v2 APIs.
 
  If I understand correctly, a file upload goes to
  api.v2.image_data.ImageDataController.upload and goes all the way to
  store.ImageProxy.set_data which proceeds to write to the backend store.
 
  If the backend store raises an exception it is simply propagated all the
  way up. The notifier re-encodes the exceptions (which is the bug I was
  looking at) but doesn't do anything about the image status.
 
  Nowhere does the image status seem to get set to 'killed'.
 
  Before I log a bug I just wanted to confirm with everybody whether or
  not I've missed out on something.
 
  Thanks.
 
  --
  Koo


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Is the 'killed' state ever set in v2?

2014-01-26 Thread Mark Washenberger
On Sun, Jan 26, 2014 at 4:37 PM, David Koo kpublicm...@gmail.com wrote:


  Perhaps there is still a bug where an image is getting stuck in 'saving'
 or
  some other state when a PUT fails?

 Yes, that's precisely the problem.


We should definitely fix that, thanks for pointing it out!



 Of course, one could argue that if an upload fails the user
 should be able to continue trying until the upload succeeds! But in that
 case the image status should probably be reset to queued rather than
 stay at saving.


That's exactly my argument, so I'd like to see it go back to 'queued'.
Nothing except the status has substantially changed during an upload that
fails due to either the client or the underlying store, so it is easy to
just revert the status and leave the image in a state where the user can
reattempt the upload.
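
A minimal sketch of that revert, assuming the upload path can catch the
store's exception (the names below are illustrative, not the actual
upload_utils code):

    def upload(image, data, store):
        image.status = 'saving'
        try:
            store.add(image.image_id, data)
        except Exception:
            # Nothing but the status has substantially changed, so roll
            # it back to 'queued' and let the user retry, rather than
            # marking the image 'killed'.
            image.status = 'queued'
            raise
        image.status = 'active'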



 But this makes me a little uneasy because our
 consistency/concurrency handling seems a little weak at the moment (am I
 right?). If we were to have a more complicated state machine then we
 would need much stronger consistency guarantees when there are
 simultaneous uploads in progress (where some fail and some succeed)!


+1 to less complicated state machines :-)

This is part of what the current work on the import task is designed to
accomplish. When you use import, an image effectively has only two states,
'active' and nonexistent.




 Is there any work on this (concurrency/consistency) front? I
 remember seeing some patches related to caching of simultaneous
 downloads of an image file where issues related to concurrent update of
 image metadata were addressed but IIRC it was -1ed because it reduced
 concurrency.


I might be confused now or confused when I did that review, because I
thought it was reducing download concurrency rather than upload
concurrency. Are you talking about https://review.openstack.org/#/c/46479/ ?



 So do we bring back the 'killed' state or should we shoot for a more
 complicated/powerful state machine?


I think we can get by with trying to simplify the state that is involved
and fixing any bugs with our state management. Is there a specific problem
you're seeing with the current state machine?



 --
 Koo


 On Sun, Jan 26, 2014 at 06:36:36AM -0800, Mark Washenberger wrote:
  It does not seem very ReSTful--or very usable, for that matter--for a
  resource to be permanently modified when a PUT fails. So I don't
 think
  we need the 'killed' status. It was purposefully left out of v2 images,
  which is not just a reskin of v1.
 
  Perhaps there is still a bug where an image is getting stuck in 'saving'
 or
  some other state when a PUT fails?
 
 
  On Sun, Jan 26, 2014 at 5:10 AM, David Koo kpublicm...@gmail.com
 wrote:
 
  
   Hi Fei,
  
   Thanks for the confirmation.
  
I think you're right. The 'killed' status should be set in method
   upload()
if there is an upload failure, see
   
  
 https://github.com/openstack/glance/blob/master/glance/common/utils.py#L244
  
   I think you meant:
  
  
  
 https://github.com/openstack/glance/blob/master/glance/api/v1/upload_utils.py#L244
  
   (the safe_kill() call) right?
  
   --
   Koo
  
  
-- Original --
From:  David Kookpublicm...@gmail.com;
Date:  Jan 26, 2014
To:  OpenStack Development Mailing
Listopenstack-dev@lists.openstack.org;
Subject:  [openstack-dev] [Glance] Is the 'killed' state ever set in
 v2?
   
Hi All,
   
While trying to work on a bug I was trying to simulate some image
download failures and found that apparently the 'killed' state is
 never
set using v2 APIs.
   
If I understand correctly, a file upload goes to
api.v2.image_data.ImageDataController.upload and goes all the way to
store.ImageProxy.set_data which proceeds to write to the backend
 store.
   
If the backend store raises an exception it is simply propagated all
 the
way up. The notifier re-encodes the exceptions (which is the bug I
 was
looking at) but doesn't do anything about the image status.
   
Nowhere does the image status seem to get set to 'killed'.
   
Before I log a bug I just wanted to confirm with everybody whether or
not I've missed out on something.
   
Thanks.
   
--
Koo
  

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Questions regarding image location and glanceclient behaviour ...

2014-01-22 Thread Mark Washenberger
On Wed, Jan 22, 2014 at 1:05 AM, Public Mail kpublicm...@gmail.com wrote:

 Hi All,

 I have two questions ...

 1) Glance v1 APIs can take a --location argument when creating an image
but v2 APIs can't - bug or feature? (Details below)


I'd call that a missing feature. I think we probably need a glance
image-location-add command somewhere in the client. But fair warning, this
is typically a role-restricted operation.



 2) How should glanceclient (v2 commands) handle reserved attributes?
 a) status quo: (Apparently) let the user set them, but the server
will return an "attribute is reserved" error.  Pros: No missing
functionality, no damage done.  Cons: Bad usability.
 b) hard-code list of reserved attributes in client and don't expose
them to the user.
 Pros: quick to implement.
 Cons: Need to track reserved attributes in server
 implementation.
 c) get reserved words from the schema downloaded from the server (and
don't expose them to the user).
 Pros: Don't need to track the server implementation.
 Cons: Complex - reserved words can vary from command to
 command.

   I personally favor (b) on the grounds that a client implementation
   needs to closely understand server behaviour anyway so the sync-ing
   of reserved attributes shouldn't be a big problem (*provided* the
   list of reserved attributes is made available in the reference
   documentation which doesn't seem to be the case currently).



We are in a bit of a bind with schemas--what's needed is schema resources
to represent each request and response, not just each resource. Obviously,
the things you can PATCH and POST are necessarily different from the things
you can GET in any service API. However, it is not clear to me how we get
from one schema per resource to one schema per request and response in a
backwards-compatible way. So b) might be the only way to go.
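
For illustration, option b) could be as simple as a hard-coded filter in the
client (the attribute list below is a plausible guess at glance's read-only
properties, not an authoritative list):

    # Hedged sketch of option b): hard-code the reserved attributes and
    # strip them before building a request.
    RESERVED_ATTRIBUTES = ('status', 'checksum', 'size', 'direct_url',
                           'created_at', 'updated_at', 'locations')

    def strip_reserved(image_fields):
        return dict((k, v) for k, v in image_fields.items()
                    if k not in RESERVED_ATTRIBUTES)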




 So what does everybody think?

 details
 When using glance client's v1 interface I can image-create an image and
 specify the image file's location via the --location parameter.
 Alternatively I can image-create an empty image and then image-update the
 image's location to some url.

 However, when using the client's v2 commands I can neither image-create the
 file using the --location parameter, nor image-update the file later.

 When using image-create with --location, the client gives the following
 error (printed by warlock):

   Unable to set 'locations' to '[u'http://192.168.1.111/foo/bar']'

 This is because the schema dictates that the location should be an object
 of the form [{"url": string, "metadata": object}, ...] but there is no
 way to specify such an object from the command line - I cannot specify a
 string like '{"url": "192.168.1.111/foo/bar", "metadata": {}}', for there is
 no conversion from command-line strings to Python dicts, nor is there any
 conversion from a simple URL string to a suitable location object.

 If I modify glanceclient.v2.images.Controller.create to convert the
 locations parameter from a URL string to the desired object then the
 request goes through to the glance server where it fails with a 403 error
 (Attribute 'locations' is reserved).

 So is this discrepancy between v1 & v2 deliberate (a feature :)) or is it a
 bug?
 /details


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Property protections not being enforced?

2014-01-21 Thread Mark Washenberger
On Mon, Jan 20, 2014 at 6:02 AM, Tom Leaman t...@tomleaman.co.uk wrote:

 I'm looking at a possible bug here but I just want to confirm
 that I'm not missing something obvious.

 I'm currently working with Devstack on Ubuntu 12.04 LTS

 Once Devstack is up and running, I'm creating a file
 /etc/glance/property-protections.conf as follows:

 [^foo_property$]
 create = @
 read = @
 update = admin
 delete = admin

 [.*]
 create = @
 read = @
 update = @
 delete = @

 I'm then referencing this in my glance-api.conf and restarting the glance
 api service.

 My understanding is that, as the demo user (which does not have the admin
 role), I should
 be able to set foo_property='some_value' but once set, I should not be
 able to modify or delete it
 which I currently am able to do.

 I have tried changing the various operations to '!' and confirmed that
 those will prevent me from
 executing those operations (returning 403 as expected). I've also double
 checked that the demo user
 has not somehow acquired the admin role.

 Tom


I'm seeing the same behavior. I'll keep digging, but meanwhile would you be
so kind as to file a bug (if you haven't already)? Thanks so much for
pointing this out.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Property protections not being enforced?

2014-01-21 Thread Mark Washenberger
I found the cause. When using role-based protections, instead of stopping
after the first rule that matches, the check keeps going. So in your example,
the .* property rule is being applied after the ^foo_property$ rule says no.
I've determined that we can completely avoid the bug in current deployments
by using policies rather than roles for the configuration setting
property_protection_rule_format.
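
In other words, the evaluation needs first-match-wins semantics; here is a
minimal sketch of the intended behavior (illustrative, not the actual glance
code):

    def property_is_permitted(prop_name, action, roles, rules):
        # 'rules' is an ordered list of (compiled_regex, permissions)
        # pairs, where permissions maps an action to its allowed roles.
        for pattern, permissions in rules:
            if pattern.match(prop_name):
                allowed = permissions.get(action, [])
                # The first matching rule decides; a later catch-all
                # like [.*] must not override it.
                return '@' in allowed or bool(set(roles) & set(allowed))
        return False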

It should be a very easy fix--the challenge seems to be writing a good test
for it. I went ahead and filed the bug (
https://bugs.launchpad.net/glance/+bug/1271426) and will have a go at a fix.

Thanks again for bringing this issue to our attention, Tom!


On Tue, Jan 21, 2014 at 3:37 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:




 On Mon, Jan 20, 2014 at 6:02 AM, Tom Leaman t...@tomleaman.co.uk wrote:

 I'm looking at a possible bug here but I just want to confirm
 that I'm not missing something obvious.

 I'm currently working with Devstack on Ubuntu 12.04 LTS

 Once Devstack is up and running, I'm creating a file
 /etc/glance/property-protections.conf as follows:

 [^foo_property$]
 create = @
 read = @
 update = admin
 delete = admin

 [.*]
 create = @
 read = @
 update = @
 delete = @

 I'm then referencing this in my glance-api.conf and restarting the glance
 api service.

 My understanding is that, as the demo user (which does not have the admin
 role), I should
 be able to set foo_property='some_value' but once set, I should not be
 able to modify or delete it
 which I currently am able to do.

 I have tried changing the various operations to '!' and confirmed that
 those will prevent me from
 executing those operations (returning 403 as expected). I've also double
 checked that the demo user
 has not somehow acquired the admin role.

 Tom


 I'm seeing the same behavior. I'll keep digging, but meanwhile would you
 be so kind as to file a bug (if you haven't already)? Thanks so much for
 pointing this out.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Meetup Schedule Posted!

2014-01-20 Thread Mark Washenberger
Hi folks,

First things first: Happy Martin Luther King Jr. Day!

Our mini summit / meetup for the Icehouse cycle will take place in one
week's time. To ensure we are all ready and know what to expect, I have
started a wiki page tracking the event details and a tentative schedule.
Please have a look if you plan to attend.

https://wiki.openstack.org/wiki/Glance/IcehouseCycleMeetup

I have taken the liberty of scheduling several of the topics we have
already discussed. Let me know if anything in the existing schedule creates
a conflict for you. There are also presently 4 unclaimed slots in the
schedule. If your topic is not yet scheduled, please tell me the time you
want and I will update accordingly.

EXTRA IMPORTANT: If you plan to attend the meetup but have not spoken with
me, please respond as soon as possible to let me know your plans. We have a
limited number of seats remaining.

Cheers,
markwash


Our only hope today lies in our ability to recapture the revolutionary
spirit and go out into a sometimes hostile world declaring eternal
hostility to poverty, racism, and militarism.

I knew that I could never again raise my voice against the violence of the
oppressed in the ghettos without having first spoken clearly to the
greatest purveyor of violence in the world today, my own government.

 - Martin Luther King, Jr.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Meetup Schedule Posted!

2014-01-20 Thread Mark Washenberger
On Mon, Jan 20, 2014 at 7:44 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi Mark,

 Happy Martin Luther King Jr. Day!

 Will a Google Hangout or Skype meeting be available for remote
 participants? I know a few engineers who will not be able to attend this
 mini-summit in person but would be happy to join remotely.


We're going to try to do our best. The discussions will definitely be
recorded and published. In addition, I've been trying to figure out a way to
broadcast the video live in a way that an international audience can
access. I'm not sure if Google Hangouts fits that bill, but perhaps the
Hangouts on Air feature would be a good way to go. Are there some folks
out there who can help me test this out? Or has anyone had good experiences
with some alternative means? I've also been considering justin.tv.
If we do manage to get the broadcasting setup, I think remote participants
are going to have to provide their feedback through text-based means (i.e.
etherpad chat or IRC).



 Thanks,
 Georgy


 On Mon, Jan 20, 2014 at 1:22 AM, Mark Washenberger 
 mark.washenber...@markwash.net wrote:

 Hi folks,

 First things first: Happy Martin Luther King Jr. Day!

 Our mini summit / meetup for the Icehouse cycle will take place in one
 week's time. To ensure we are all ready and know what to expect, I have
 started a wiki page tracking the event details and a tentative schedule.
 Please have a look if you plan to attend.

 https://wiki.openstack.org/wiki/Glance/IcehouseCycleMeetup

 I have taken the liberty of scheduling several of the topics we have
 already discussed. Let me know if anything in the existing schedule creates
 a conflict for you. There are also presently 4 unclaimed slots in the
 schedule. If your topic is not yet scheduled, please tell me the time you
 want and I will update accordingly.

 EXTRA IMPORTANT: If you plan to attend the meetup but have not spoken
 with me, please respond as soon as possible to let me know your plans. We
 have a limited number of seats remaining.

 Cheers,
 markwash
 

 Our only hope today lies in our ability to recapture the revolutionary
 spirit and go out into a sometimes hostile world declaring eternal
 hostility to poverty, racism, and militarism.

 I knew that I could never again raise my voice against the violence of
 the oppressed in the ghettos without having first spoken clearly to the
 greatest purveyor of violence in the world today, my own government.

  - Martin Luther King, Jr.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Mark Washenberger
On Thu, Jan 16, 2014 at 8:06 AM, Dean Troyer dtro...@gmail.com wrote:

 On Thu, Jan 16, 2014 at 9:37 AM, Jesse Noller 
 jesse.nol...@rackspace.comwrote:

 On Jan 16, 2014, at 9:26 AM, Justin Hammond justin.hamm...@rackspace.com
 wrote:

 I'm not sure if it was said, but which httplib is being used (urllib3
 maybe?). Also I noticed many people were talking about supporting auth
 properly, but are there any intentions to properly support 'noauth'
 (python-neutronclient, for instance, doesn't support it properly as of
 this writing)?

 Can you detail out noauth for me? I would say the de facto httplib in
 python today is python-requests - urllib3 is also good, but I would say from
 a *consumer* standpoint requests offers more in terms of usability /
 extensibility


 requests is built on top of urllib3 so there's that...

 The biggest reason I favor using Jamie Lennox's new session layer stuff in
 keystoneclient is that it better reflects the requests API instead of it
 being stuffed in after the fact.  And as the one responsible for that
 stuffing, it was pretty blunt and really needs to be cleaned up more than
 Alessio did.

 only a few libs (maybe just glance and swift?) don't use requests at this
 point and I think the resistance there is the chunked transfers they both
 do.


There are a few more items needed here for glance to be able to work
with requests (which we really, really want).
1) Support for 100-expect-continue is probably going to be required in
glance as well as swift
2) Support for turning off tls/ssl compression (our streams are already
compressed)

I feel like we *must* have somebody here who is able and willing to add
these features to requests, which seems like the right approach.
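
To make (2) concrete, here is a rough sketch of how a requests session
might refuse TLS compression (illustrative only: it assumes a Python
ssl module with OP_NO_COMPRESSION and a urllib3 whose PoolManager
accepts a custom ssl_context; none of these names are Glance code):

import ssl

import requests
from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager


class NoTLSCompressionAdapter(HTTPAdapter):
    # Transport adapter whose connections refuse TLS-level compression,
    # since the image streams Glance moves are already compressed.
    def init_poolmanager(self, connections, maxsize, block=False, **kwargs):
        ctx = ssl.create_default_context()
        ctx.options |= ssl.OP_NO_COMPRESSION
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize, block=block,
                                       ssl_context=ctx, **kwargs)


session = requests.Session()
session.mount('https://', NoTLSCompressionAdapter())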



 I'm really curious what 'noauth' means against APIs that have few, if any,
 calls that operate without a valid token.

 dt

 --

 Dean Troyer
 dtro...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Mark Washenberger
On Wed, Jan 15, 2014 at 7:53 PM, Alexei Kornienko 
alexei.kornie...@gmail.com wrote:

 I did notice, however, that neutronclient is
 conspicuously absent from the Work Items in the blueprint's Whiteboard.
 It will surely be added later. We are already working on several things in
 parallel and we will add neutronclient soon.


 I would love to see a bit more detail on the structure of the lib(s), the
 blueprint really doesn't discuss the design/organization/intended API of
 the libs.  For example, I would hope the distinction between the various
 layers of a client stack don't get lost, i.e. not mixing the low-level REST
 API bits with the higher-level CLI parsers and decorators.
 Does the long-term goals include a common caching layer?

 Distinction between client layers won't get lost and would only be
 improved. My basic idea is the following:
 1) Transport layer would handle all transport related stuff - HTTP, JSON
 encoding, auth, caching, etc.
 2) Model layer (Resource classes, BaseManager, etc.) will handle data
 representation, validation
 3) API layer will handle all project specific stuff - url mapping, etc.
 (This will be imported to use client in other applications)
 4) Cli level will handle all stuff related to cli mapping - argparse,
 argcomplete, etc.


I'm really excited about this. I think consolidating layers 1 and 4 will be
a huge benefit for deployers and users.

I'm hoping we can structure layers 2 and 3 a bit flexibly to allow for
existing project differences and proper ownership. For example, in Glance
we use jsonschema somewhat so our validation is a bit different. Also, I
consider the definition of resources and url mappings for images to be
something that should be owned by the Images program. I'm confident,
however, that we can figure out how to structure the libraries,
deliverables, and process to reflect that ownership.
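
To sketch what I have in mind when I picture those layers staying
separate (all names here are invented for illustration, not the
blueprint's actual code):

import requests


class Transport(object):
    # Layer 1: HTTP, auth token handling, JSON decoding.
    def __init__(self, endpoint, token):
        self.endpoint = endpoint
        self.token = token

    def get(self, path):
        resp = requests.get(self.endpoint + path,
                            headers={'X-Auth-Token': self.token})
        resp.raise_for_status()
        return resp.json()


class Image(object):
    # Layer 2: data representation and (eventually) validation.
    def __init__(self, data):
        self.id = data['id']
        self.name = data.get('name')


class ImagesAPI(object):
    # Layer 3: project-specific url mapping, importable by other apps.
    def __init__(self, transport):
        self.transport = transport

    def list(self):
        return [Image(i) for i in self.transport.get('/v2/images')['images']]

# Layer 4 (the CLI) would wrap ImagesAPI with argparse/argcomplete and
# live in a separate entry point, so applications never import it.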



 I believe the current effort referenced by the blueprint is focusing on
 moving existing code into the incubator for reuse, to make it easier to
 restructure later. Alexei, do I have that correct?
 You are right. The first thing we do is try to make all clients look/work
 in a similar way. After that we'll continue our work on improving the
 overall structure.




 2014/1/16 Noorul Islam K M noo...@noorul.com

 Doug Hellmann doug.hellm...@dreamhost.com writes:

  Several people have mentioned to me that they are interested in, or
  actively working on, code related to a common client library --
 something
  meant to be reused directly as a basis for creating a common library for
  all of the openstack clients to use. There's a blueprint [1] in oslo,
 and I
  believe the keystone devs and unified CLI teams are probably interested
 in
  ensuring that the resulting API ends up meeting all of our various
  requirements.
 
  If you're interested in this effort, please subscribe to the blueprint
 and
  use that to coordinate efforts so we don't produce more than one common
  library. ;-)
 

 Solum is already using it https://review.openstack.org/#/c/58067/

 I would love to watch this space.

 Regards,
 Noorul

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Mark Washenberger
On Thu, Jan 16, 2014 at 12:03 AM, Flavio Percoco fla...@redhat.com wrote:

 On 15/01/14 21:35 +, Jesse Noller wrote:


 On Jan 15, 2014, at 1:37 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:

  Several people have mentioned to me that they are interested in, or
 actively working on, code related to a common client library -- something
 meant to be reused directly as a basis for creating a common library for
 all of the openstack clients to use. There's a blueprint [1] in oslo, and I
 believe the keystone devs and unified CLI teams are probably interested in
 ensuring that the resulting API ends up meeting all of our various
 requirements.

 If you're interested in this effort, please subscribe to the blueprint
 and use that to coordinate efforts so we don't produce more than one common
 library. ;-)

 Thanks,
 Doug


 [1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2


 *raises hand*

 Me me!

 I’ve been talking to many contributors about the Developer Experience
 stuff I emailed out prior to the holidays and I was starting blueprint
 work, but this is a great pointer. I’m going to have to sync up with Alexei.

 I think solving this for openstack developers and maintainers as the
 blueprint says is a big win in terms of code reuse / maintenance and
 consistency, but even more so for *end-user developers* consuming openstack
 clouds.

 Some background - there’s some terminology mismatch but the rough idea is
 the same:

 * A centralized “SDK” (Software Development Kit) would be built
 condensing the common code and logic and operations into a single namespace.

 * This SDK would be able to be used by “downstream” CLIs - essentially
 the CLIs become a specialized front end - and in some cases, only an
 argparse or cliff front-end to the SDK methods located in the (for example)
 openstack.client.api.compute

 * The SDK would handle Auth, re-auth (expired tokens, etc) for long-lived
 clients - all of the openstack.client.api.** classes would accept an Auth
 object to delegate management / mocking of the Auth / service catalog stuff
 to. This means developers building applications (say for example, horizon)
 don’t need to worry about token/expired authentication/etc.

 * Simplify the dependency graph  code for the existing tools to enable
 single binary installs (py2exe, py2app, etc) for end users of the command
 line tools.

 Short version: if a developer wants to consume an openstack cloud, they
 would have a single SDK with minimal dependencies and import from a single
 namespace. An example application might look like:

 from openstack.api import AuthV2
 from openstack.api import ComputeV2

 myauth = AuthV2(…., connect=True)
 compute = ComputeV2(myauth)

 compute.list_flavors()


 I know this is an example, but could we leave the version out of the
 class name? Having something like:

 from openstack.api.v2 import Compute

or

 from openstack.compute.v2 import Instance

 (just made that up)

 for marconi we're using the latter.


Just throwing this out there because it seems relevant to client design.

As we've been looking at porting clients to using v2 of the Images API, it
seems more and more to me that including the *server* version in the main
import path is a real obstacle.

IMO any future client libs should write library interfaces based on the
peculiarities of user needs, not based on the vagaries of the server
version. So as a user of this library I would do something like:

  1 from openstack.api import images
  2 client = images.make_me_a_client(auth_url, etcetera) # all version
negotiation is happening here
  3 client.list_images()  # works more or less same no matter who I'm
talking to

Now, there would still likely be hidden implementation code that is
different per server version and which is instantiated in line 2 above, and
maybe that's the library path stuff you are talking about.
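
To sketch the negotiation I imagine happening in line 2 (every name
here is hypothetical, and a real client would negotiate through the
service catalog and version discovery documents, not this toy
endpoint):

import requests


class _ImagesV1(object):
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def list_images(self):
        return requests.get(self.endpoint + '/v1/images').json()['images']


class _ImagesV2(object):
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def list_images(self):
        return requests.get(self.endpoint + '/v2/images').json()['images']


def make_me_a_client(endpoint):
    # Pick the newest server version we have an implementation for.
    versions = requests.get(endpoint + '/versions').json()['versions']
    available = set(v['id'] for v in versions)
    return _ImagesV2(endpoint) if 'v2.0' in available else _ImagesV1(endpoint)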




  This greatly improves the developer experience both internal to openstack
 and externally. Currently OpenStack has 22+ (counting stackforge) potential
 libraries a developer may need to install to use a full deployment of
 OpenStack:

  * python-keystoneclient (identity)
  * python-glanceclient (image)
  * python-novaclient (compute)
  * python-troveclient (database)
  * python-neutronclient (network)
  * python-ironicclient (bare metal)
  * python-heatclient (orchestration)
  * python-cinderclient (block storage)
  * python-ceilometerclient (telemetry, metrics  billing)
  * python-swiftclient (object storage)
  * python-savannaclient (big data)
  * python-openstackclient (meta client package)
  * python-marconiclient (queueing)
  * python-tuskarclient (tripleo / management)
  * python-melangeclient (dead)
  * python-barbicanclient (secrets)
  * python-solumclient (ALM)
  * python-muranoclient (application catalog)
  * python-manilaclient (shared filesystems)
  * python-libraclient (load balancers)
  * python-climateclient (reservations)
  * python-designateclient 

Re: [openstack-dev] [Glance][All] Pecan migration strategies

2014-01-14 Thread Mark Washenberger
On Fri, Jan 10, 2014 at 4:51 AM, Flavio Percoco fla...@redhat.com wrote:

 Greetings,

 More discussions around the adoption of Pecan.

 I'd like to know what is the feeling of other folks about migrating
 existing APIs to Pecan as opposed to waiting for a new API version as
 an excuse to migrate the API implementation to Pecan?

 We discussed this in one of the sessions at the summit, I'd like to
 get a final consensus on what the desired migration path is for the
 overall community.

 IIRC, Cinder has a working version of the API with Pecan but there's
 not a real motivation to release a new version of it that will use
 the new implementation. Am I right?

 Nova, instead, will start migrating some parts but not all of them and
 it'll happen as part of the API v3. AFAIU.

 Recently a new patch was proposed in glance[0] and it contains a base
 implementation for the existing API v2. I love that patch and the fact
 that Oleh Anufriiev is working on it. What worries me, is that the
 patch re-implements an existing API and I don't think we should just
 swap them.

 Yes, we have tests (unit and functional) and that should be enough to
 make sure the new implementation works as the old one - Should it? - but...

 This most likely has to be evaluated on a per-project basis. But:

- What are the thoughts of other folks on this matter?

 Cheers,
 FF

 [0] https://review.openstack.org/#/c/62911/

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-08 Thread Mark Washenberger
On Mon, Jan 6, 2014 at 3:50 PM, Jon Bernard jbern...@tuxion.com wrote:

 Hello all,

 I would like to propose instance-level snapshots as a feature for
 inclusion in Nova.  An initial draft of the more official proposal is
 here [1], blueprint is here [2].

 In a nutshell, this feature will take the existing create-image
 functionality a few steps further by providing the ability to take
 a snapshot of a running instance that includes all of its attached
 volumes.  A coordinated snapshot of multiple volumes for backup
 purposes.  The snapshot operation should occur while the instance is in
 a paused and quiesced state so that each volume snapshot is both
 consistent within itself and with respect to its sibling snapshots.

 I still have some open questions on a few topics:

 * API changes, two different approaches come to mind:

   1. Nova already has a command `createImage` for creating an image of an
  existing instance.  This command could be extended to take an
  additional parameter `all-volumes` that signals the underlying code
  to capture all attached volumes in addition to the root volume.  The
  semantic here is important, `createImage` is used to create
  a template image stored in Glance for later reuse.  If the primary
  intent of this new feature is for backup only, then it may not be
  wise to overlap the two operations in this way.  On the other hand,
  this approach would introduce the least amount of change to the
  existing API, requiring only modification of an existing command
  instead of the addition of an entirely new one.

   2. If the feature's primary use is for backup purposes, then a new API
  call may be a better approach, and leave `createImage` untouched.
  This new call could be called `createBackup` and take as a parameter
  the name of the instance.  Although it introduces a new member to the
  API reference, it would allow this feature to evolve without
  introducing regressions in any existing calls.  These two calls could
  share code at some point in the future.

 * Existing libvirt support:

 To initially support consistent-across-multiple-volumes snapshots,
 we must be able to ask libvirt for a snapshot of an already paused
 guest.  I don't believe such a call is currently supported, so
 changes to libvirt may be a prerequisite for this feature.
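
 To make the two options concrete, the request bodies might look roughly
 like this (a sketch only; the all-volumes flag and the createBackup
 action are the new parts, and the exact names are certainly up for
 debate):

 # option 1: extend the existing createImage server action
 create_image_body = {
     "createImage": {"name": "nightly-backup", "all-volumes": True}
 }

 # option 2: introduce a new createBackup server action
 create_backup_body = {
     "createBackup": {"name": "nightly-backup"}
 }

 # either body would be POSTed to /servers/{server_id}/action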

 Any contribution, comments, and pieces of advice are much appreciated.

 [1]: https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
 [2]: https://blueprints.launchpad.net/nova/+spec/instance-level-snapshots


Hi Jon,

In your specification in the Snapshot Storage section you say it might be
nice to combine all of the snapshot images into a single OVF file that
contains all volumes attached to the instance at the time of snapshot. I'd
love it if, by the time you get to the point of implementing this storage
part, we have an option available to you in Glance for storing something
akin to an Instance template. An instance template would be an entity
stored in Glance with references to each volume or image that was uploaded
as part of the snapshot. As an example, it could be something like

instance_template: {
    "/dev/sda": "/v2/images/some-imageid",
    "/dev/sdb": "<some url for a cinder volume-like entity>"
}

Essentially, this kind of storage would bring the OVF metadata up into
Glance rather than burying it down in an image byte stream where it is
harder to search or access.

This is an idea that has been discussed several times before, generally
favorably, and if we move ahead with instance-level snapshots in Nova I'd
love to move quickly to support it in Glance. Part of the reason for the
delay of this feature was my worry that if Glance jumps out ahead, we'll
end up with some instance template format that Nova doesn't really want, so
this opportunity for collaboration on use cases would be fantastic.

If after a bit more discussion in this thread, folks think these templates
in Glance would be a good idea, we can try to draw up a proposal for how to
implement the first cut of this feature in Glance.

Thanks




 --
 Jon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Glance Mini Summit Details!

2013-12-29 Thread Mark Washenberger
Hi folks,

Late January, we will be having a mini summit focused on Glance and the
Images Program. All OpenStack ATCs and associated technical product folks
are welcome. Here are the details:

Where:
Hilton Washington Dulles Airport
13869 Park Center Road
Herndon, Virginia 20171
(I do not yet know if there is an associated hotel block/discount code)

When:
January 27-28 2014, 8:30 AM - 5:00 PM

What will we talk about:

The agenda is still being formed and needs your input. See the outline of
it at https://etherpad.openstack.org/p/glance-mini-summit-agenda and please
suggest new items, volunteer to lead existing items, or indicate your
interests. I have prefilled the agenda with some things that I know we
might want to talk about but the situation is still very flexible.


I hope to see you there!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Mark Washenberger
On Mon, Dec 23, 2013 at 12:11 AM, Flavio Percoco fla...@redhat.com wrote:

 On 21/12/13 00:41 -0500, Jay Pipes wrote:

 On 12/20/2013 10:42 AM, Flavio Percoco wrote:

 Greetings,

 In the last Glance meeting, it was proposed to pull out glance's
 stores[0] code into its own package. There are a couple of other
 scenarios where using this code is necessary and it could also be
 useful for other consumers outside OpenStack itself.

 That being said, it's not clear where this new library should live in:

1) Oslo: it's the place for common code, incubation, although this
code has been pretty stable in the last release.

2) glance.stores under Image program: As said in #1, the API has
been pretty stable - and it falls perfectly into what Glance's
program covers.


 What about:

 3) Cinder

 Cinder is for block storage. Images are just a bunch of blocks, and all
 the store drivers do is take a chunked stream of input blocks and store
 them to disk/swift/s3/rbd/toaster and stream those blocks back out again.

 So, perhaps the most appropriate place for this is in Cinder-land.


 This is an interesting suggestion.

 I wouldn't mind putting it there, although I still prefer it to be
 under glance for historical reasons and because Glance team knows that
 code.

 How would it work if this lib falls under Block Storage program?

 Should the glance team be added as core contributors of this project?
 or Just some of them interested in contributing / reviewing those
 patches?

 Thanks for the suggestion. I'd like John and Mark to weigh in too.


I think Jay's suggestion makes a lot of sense. I don't know if the Cinder
folks want to take it on, however. I think it's going to be easier in a
process sense to just keep it in the Glance/Images program. Oslo doesn't
seem like the right fit to me, just because this already has a clear owner,
and as you said, it doesn't really need an unstable api cleanup phase (I
know you were not proposing it start out in copy-around mode.)




 Cheers,
 FF

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Mark Washenberger
On Mon, Dec 23, 2013 at 4:57 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/23/2013 05:42 AM, Thierry Carrez wrote:

 Flavio Percoco wrote:

 On 21/12/13 00:41 -0500, Jay Pipes wrote:

 Cinder is for block storage. Images are just a bunch of blocks, and
 all the store drivers do is take a chunked stream of input blocks and
 store them to disk/swift/s3/rbd/toaster and stream those blocks back
 out again.

 So, perhaps the most appropriate place for this is in Cinder-land.


 This is an interesting suggestion.

 I wouldn't mind putting it there, although I still prefer it to be
 under glance for historical reasons and because Glance team knows that
 code.

 How would it work if this lib falls under Block Storage program?

 Should the glance team be added as core contributors of this project?
 or Just some of them interested in contributing / reviewing those
 patches?

 Thanks for the suggestion. I'd like John and Mark to weigh in too.


 Programs are a team of people on a specific mission. If the stores code
 is maintained by a completely separate group (glance devs), then it
 doesn't belong in the Block Storage program... unless the Cinder devs
 intend to adopt it over the long run (and therefore the contributors of
 the Block Storage program form a happy family rather than two separate
 groups).


 Understood. The reason I offered this up as a suggestion is that currently
 Cinder uses the Glance REST API to store and retrieve volume snapshots, and
 it would be more efficient to just give Cinder the ability to directly
 retrieve the blocks from one of the underlying store drivers (same goes for
 Nova's use of Glance). ...and, since the glance.store drivers are dealing
 with blocks, I thought it made more sense in Cinder.


True, Cinder and Nova should be talking more directly to the underlying
stores--however their direct interface should probably be through
glanceclient. (Glanceclient could evolve to use the glance.store code I
imagine.)
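
As a sketch of that consumer-side view (the endpoint, token, and image
id values are illustrative; the Client constructor and images.data call
are python-glanceclient's v2 interface as I understand it):

import glanceclient

endpoint = 'http://glance.example.com:9292'
token = 'an-auth-token'
image_id = 'some-image-id'

client = glanceclient.Client('2', endpoint, token=token)

# Today this streams the image bytes over Glance's REST API; a future
# glanceclient could satisfy the same call by going straight through
# the glance.store drivers.
with open('/tmp/image.bin', 'wb') as sink:
    for chunk in client.images.data(image_id):
        sink.write(chunk)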



  Depending on the exact nature of the couple of other scenarios where
 using this code is necessary, I think it would either belong in Glance
 or in Oslo.


 Perhaps something in oslo then. oslo.blockstream? oslo.blockstore?


 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Please stop +Aing glance changes until your doc job is working

2013-12-11 Thread Mark Washenberger
On Wed, Dec 11, 2013 at 3:05 PM, Sean Dague s...@dague.net wrote:

 Dear Glance core,

 Until this review is sorted - https://review.openstack.org/#/c/60971/2


Or this one https://review.openstack.org/#/c/61600/ rather




 You won't be able to merge any changes, because of the docs issue with
 sphinx.

 http://lists.openstack.org/pipermail/openstack-dev/2013-December/021863.html

 Which means right now every glance patch that goes into the gate will
 100% fail, and will cause 45-60 minute delay to every other project in
 the gate as your change has to fail out of the queue.

 Thanks,

 -Sean


Thanks for the alert.



 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Mark Washenberger
On Thu, Dec 5, 2013 at 9:32 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/05/2013 04:25 PM, Clint Byrum wrote:

 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:

 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:

 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
   wrote:

  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:

 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.


 Fitting is one thing, optimizations around particular assumptions
 about the size of data and the frequency of reads/writes might be an 
 issue,
 but I admit to ignorance about those details in Glance.


 Optimizations can be improved for various use cases. The design,
 however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.


 I think we are getting out into the weeds a little bit here. It is
 important to think about these apis in terms of what they actually do,
 before the decision of combining them or not can be made.

 I think of HeatR as a template storage service, it provides extra data
 and operations on templates. HeatR should not care about how those
 templates are stored.
 Glance is an image storage service, it provides extra data and
 operations on images (not blobs), and it happens to use swift as a backend.

 If HeatR and Glance were combined, it would result in taking two very
 different types of data (template metadata vs image metadata) and mashing
 them into one service. How would adding the complexity of HeatR benefit
 Glance, when they are dealing with conceptually two very different types of
 data? For instance, should a template ever care about the field minRam
 that is stored with an image? Combining them adds a huge development
 complexity with a very small operations payoff, and so Openstack is already
 so operationally complex that HeatR as a separate service would be
 knowledgeable. Only clients of Heat will ever care about data and
 operations on templates, so I move that HeatR becomes it's own service, or
 becomes part of Heat.


 I spoke at length via G+ with Randall and Tim about this earlier today.
 I think I understand the impetus for all of this a little better now.

 Basically what I'm suggesting is that Glance is only narrow in scope
 because that was the only object that OpenStack needed a catalog for
 before now.

 However, the overlap between a catalog of images and a catalog of
 templates is quite comprehensive. The individual fields that matter to
 images are different than the ones that matter to templates, but that
 is a really minor detail isn't it?

 I would suggest that Glance be slightly expanded in scope to be an
 object catalog. Each object type can have its own set of fields that
 matter to it.

 This doesn't have to be a minor change to glance to still have many
 advantages over writing something from scratch and asking people to
 deploy another service that is 99% the same as Glance.


 My suggestion for long-term architecture would be to use Murano for
 catalog/metadata information (for images/templates/whatever) and move the
 block-streaming drivers into Cinder, and get rid of the Glance project
 entirely. Murano would then become the catalog/registry of objects in the
 OpenStack world, Cinder would be the thing that manages and streams blocks
 of data or block devices, and Glance could go away. Imagine it... OpenStack
 actually *reducing* the number of projects instead of expanding! :)


I think it is good to mention the idea of shrinking the overall OpenStack
code base. The fact that the best code offers a lot of features without a
hugely expanded codebase often seems forgotten--perhaps because it is
somewhat incompatible with our low-barrier-to-entry model of development.

However, as a mild defense of Glance's place in the OpenStack ecosystem,
I'm not sure yet that a general catalog/metadata service would be a proper
replacement. There are two key distinctions between Glance and a
catalog/metadata service. One is that Glance *owns* the reference to the
underlying data--meaning Glance can control the consistency of its
references. I.e. you should not be able to delete the image data out from
underneath Glance while the Image entry exists, in order to avoid a
terrible user experience. Two is that Glance understands and coordinates
the meaning and relationships of Image metadata. Without these
distinctions, I'm not sure we need any OpenStack project at all--we should
probably just publish an LDAP schema for 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Mark Washenberger
On Fri, Dec 6, 2013 at 2:43 PM, Randall Burt randall.b...@rackspace.comwrote:

  I too have warmed to this idea but wonder about the actual implementation
 around it. While I like where Edmund is going with this, I wonder if it
 wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates
 to Glance (/assemblies, /applications, etc) alongside /images.  Initially,
 we could have separate endpoints and data structures for these different
 asset types, refactoring the easy bits along the way and leveraging the
 existing data storage and caching bits, but leaving more disruptive changes
 alone. That can get the functionality going, prove some concepts, and allow
 all of the interested parties to better plan a more general v3 api.


I think this trajectory makes a lot of sense as an initial plan. We should
definitely see how much overlap there is through a detailed proposal. If
there are some extremely low-hanging fruit on the side of generalization,
maybe we can revise such a proposal before we get going too far.

It also occurs to me that this is a very big shift in focus for the Glance
team, however, so perhaps it would make sense to try to discuss this at the
midcycle meetup [1]? I know some of the discussion there is going to
revolve around finding a better solution to the image sharing / image
marketplace problem.


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019230.html



  On Dec 6, 2013, at 4:23 PM, Edmund Troche edmund.tro...@us.ibm.com
  wrote:

  I agree with what seems to also be the general consensus, that Glance
 can become Heater+Glance (the service that manages images in OS today).
 Clearly, if someone looks at the Glance DB schema, APIs and service type
 (as returned by keystone service-list), all of the terminology is about
 images, so we would need to more formally define what the
 characteristics of image, template, and maybe assembly or components are,
 and find a good generalization. When looking at the attributes for
 image (image table), I can see where there are a few that would be
 generic enough to apply to image, template etc, so those could be taken
 to be the base set of attributes, and then based on the type (image,
 template, etc) we could then have attributes that are type-specific (maybe
 by leveraging what is today image_properties).

 As I read through the discussion, the one thing that came to mind is
 asset management. I can see where if someone bothers to create an image,
 or a template, then it is for a good reason, and perhaps you'd like to
 maintain it as an IT asset. Along those lines, it occurred to me that maybe
 what we need is to make Glance some sort of asset management service that
 can be leveraged by Service Catalogs, Nova, etc. Instead of storing
 images and templates we store assets of one kind or another, with
 artifacts (like files, image content, etc), and associated metadata. There
 is some work we could borrow from, conceptually at least, from OSLC's Asset
 Management specification:
 http://open-services.net/wiki/asset-management/OSLC-Asset-Management-2.0-Specification/.
 Looking at this spec, it probably has more than we need, but there's plenty
 we could borrow from it.


 Edmund Troche


 Georgy Okrokvertskhov ---12/06/2013 01:34:13 PM---As a
 Murano team we will be happy to contribute to Glance. Our Murano metadata
 repository is a stand


 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 Date: 12/06/2013 01:34 PM
 Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

 --



 As a Murano team we will be happy to contribute to Glance. Our Murano
 metadata repository is a standalone component (with its own git
 repository)which is not tightly coupled with Murano itself. We can easily
 add our functionality to Glance as a new component\subproject.

 Thanks
 Georgy


 On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:


On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
 Excerpts from Randall Burt's message of 2013-12-05 09:05:44
-0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com

  wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45
-0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can
collate the
 responses I've received below. I think enhancing glance to do
these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
  

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Mark Washenberger
On Thu, Dec 5, 2013 at 1:05 PM, Vishvananda Ishaya vishvana...@gmail.comwrote:


 On Dec 5, 2013, at 12:42 PM, Andrew Plunk andrew.pl...@rackspace.com
 wrote:

  Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
  On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
  wrote:
 
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
 
 
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
 
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not users
 
  My responses:
 
  1: Irrelevant. Smaller things will fit in it just fine.
 
  Fitting is one thing, optimizations around particular assumptions
 about the size of data and the frequency of reads/writes might be an issue,
 but I admit to ignorance about those details in Glance.
 
 
  Optimizations can be improved for various use cases. The design,
 however,
  has no assumptions that I know about that would invalidate storing blobs
  of yaml/json vs. blobs of kernel/qcow2/raw image.
 
  I think we are getting out into the weeds a little bit here. It is
 important to think about these apis in terms of what they actually do,
 before the decision of combining them or not can be made.
 
  I think of HeatR as a template storage service, it provides extra data
 and operations on templates. HeatR should not care about how those
 templates are stored.
  Glance is an image storage service, it provides extra data and
 operations on images (not blobs), and it happens to use swift as a backend.

 This is not completely correct. Glance already supports something akin to
 templates. You can create an image with metadata properties that
 specify a complex block device mapping which would allow for multiple
 volumes and images to be connected to the vm at boot time. This is
 functionally a template for a single vm.

 Glance is pretty useless if it is just an image storage service; we already
 have other places that can store bits (swift, cinder). It is much more
 valuable as a searchable repository of bootable templates. I don't see any
 reason why this idea couldn't be extended to include more complex templates
 that could include more than one vm.


FWIW I agree with all of this. I think Glance's real role in OpenStack is
as a helper and optionally as a gatekeeper for the category of stuff Nova
can boot. So any parameter that affects what Nova is going to boot should
in my view be something Glance can be aware of. This list of parameters
*could* grow to include multiple device images, attached volumes, and other
things that currently live in the realm of flavors such as extra hardware
requirements and networking aspects.

Just so things don't go too crazy, I'll add that since Nova is generally
focused on provisioning individual VMs, anything above the level of an
individual VM should be out of scope for Glance.

I think Glance should alter its approach to be less generally agnostic
about the contents of the objects it hosts. Right now, we are just starting
to do this with images, as we slowly advance on offering server side format
conversion. We could find similar use cases for single vm templates.
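
To make Vish's point concrete, a single vm template expressed as image
properties might look roughly like this (a sketch from memory; the real
block device mapping schema has more fields and stricter serialization):

single_vm_template = {
    "name": "two-volume-appliance",
    "properties": {
        # legacy-style block device mapping, heavily simplified
        "block_device_mapping": [
            {"device_name": "vda", "snapshot_id": "some-snapshot-id"},
            {"device_name": "vdb", "volume_id": "some-volume-id",
             "delete_on_termination": False},
        ],
    },
}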

It would be fantastic if we could figure out how to turn this idea into
some actionable work in late I/early J. It could be a fun thing to work on
at the midcycle meetup.



 We have discussed the future of glance a number of times, and if it is
 really just there to serve up blobs of data + metadata about images, it
 should go away. Glance should offer something more like the AWS image
 search console. And this could clearly support more than just images, you
 should be able to search for and launch more complicated templates as well.

 
  If HeatR and Glance were combined, it would result in taking two very
 different types of data (template metadata vs image metadata) and mashing
 them into one service. How would adding the complexity of HeatR benefit
 Glance, when they are dealing with conceptually two very different types of
 data? For instance, should a template ever care about the field minRam
 that is stored with an image?


 I don't see these as significantly different types of metadata. Metadata
 for heat templates might be a bit more broad (minFlavor?) I would think
 that a template would care about constraints like this, especially when you
 consider that a user might want to give a command to launch a template but
 then override certain characteristics.

 Vish

  Combining them adds a huge development complexity with a very small
 operations payoff, and so Openstack is already so operationally complex
 that HeatR as a separate service would be knowledgeable. Only clients of
 Heat will ever care about data and operations on templates, so I 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Mark Washenberger
On Thu, Dec 5, 2013 at 3:11 PM, Randall Burt randall.b...@rackspace.comwrote:

  On Dec 5, 2013, at 4:45 PM, Steve Baker sba...@redhat.com
  wrote:

  On 12/06/2013 10:46 AM, Mark Washenberger wrote:




 On Thu, Dec 5, 2013 at 1:05 PM, Vishvananda Ishaya 
 vishvana...@gmail.comwrote:


 On Dec 5, 2013, at 12:42 PM, Andrew Plunk andrew.pl...@rackspace.com
 wrote:

  Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
  On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
  wrote:
 
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
 
 
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
 
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not users
 
  My responses:
 
  1: Irrelevant. Smaller things will fit in it just fine.
 
  Fitting is one thing, optimizations around particular assumptions
 about the size of data and the frequency of reads/writes might be an issue,
 but I admit to ignorance about those details in Glance.
 
 
  Optimizations can be improved for various use cases. The design,
 however,
  has no assumptions that I know about that would invalidate storing
 blobs
  of yaml/json vs. blobs of kernel/qcow2/raw image.
 
  I think we are getting out into the weeds a little bit here. It is
 important to think about these apis in terms of what they actually do,
 before the decision of combining them or not can be made.
 
  I think of HeatR as a template storage service, it provides extra data
 and operations on templates. HeatR should not care about how those
 templates are stored.
  Glance is an image storage service, it provides extra data and
 operations on images (not blobs), and it happens to use swift as a backend.

  This is not completely correct. Glance already supports something akin
 to templates. You can create an image with metadata properties that
 specify a complex block device mapping which would allow for multiple
 volumes and images to be connected to the vm at boot time. This is
 functionally a template for a single vm.

 Glance is pretty useless if it is just an image storage service; we
 already have other places that can store bits (swift, cinder). It is much
 more valuable as a searchable repository of bootable templates. I don't see
 any reason why this idea couldn't be extended to include more complex
 templates that could include more than one vm.


  FWIW I agree with all of this. I think Glance's real role in OpenStack
 is as a helper and optionally as a gatekeeper for the category of stuff
 Nova can boot. So any parameter that affects what Nova is going to boot
 should in my view be something Glance can be aware of. This list of
 parameters *could* grow to include multiple device images, attached
 volumes, and other things that currently live in the realm of flavors such
 as extra hardware requirements and networking aspects.

  Just so things don't go too crazy, I'll add that since Nova is generally
 focused on provisioning individual VMs, anything above the level of an
 individual VM should be out of scope for Glance.

  I think Glance should alter its approach to be less generally agnostic
 about the contents of the objects it hosts. Right now, we are just starting
 to do this with images, as we slowly advance on offering server side format
 conversion. We could find similar use cases for single vm templates.


 The average heat template would provision more than one VM, plus any
 number of other cloud resources.

 An image is required to provision a single nova server;
 a template is required to provision a single heat stack.

 Hopefully the above single vm policy could be reworded to be agnostic to
 the service which consumes the object that glance is storing.


  To add to this, is it that Glance wants to be *more* integrated and
 geared towards vm or container images or that Glance wishes to have more
 intimate knowledge of the things it's cataloging *regardless of what those
 things actually might be*? The reason I ask is that Glance supporting only
 single vm templates when Heat orchestrates the entire (or almost entire)
 spectrum of core and integrated projects means that its suitability as a
 candidate for a template repository plummets quite a bit.


Yes, I missed the boat a little bit there. I agree Glance could operate as
a repo for these kinds of templates. I don't know about expanding much
further beyond the Nova / Heat stack. But within that stack, I think the
use cases are largely the same.

It seems like heat templates likely have built-in relationships with vm
templates / images that would be really nice to track closely in the Glance
data model--for example if you wanted something like a notification when
deleting

Re: [openstack-dev] [Glance] Interested in a mid-Icehouse Glance mini-summit?

2013-11-26 Thread Mark Washenberger
On Mon, Nov 25, 2013 at 10:47 PM, Boris Pavlovic bpavlo...@mirantis.comwrote:

 Mark,


 Why are we not able to combine this and the Nova meetup in the same place
 at the same time?


The two events are being planned and sponsored separately. Maybe in the
future it will make sense to merge them to ease travel difficulties for
some. But there are likely some tradeoffs to consider.

Thanks for your interest



 Best regards,
 Boris Pavlovic


 On Wed, Nov 20, 2013 at 11:23 PM, Mark Washenberger 
 mark.washenber...@markwash.net wrote:




 On Thu, Nov 14, 2013 at 1:35 PM, Mark Washenberger 
 mark.washenber...@markwash.net wrote:

 Hi folks,

 There's been some talk about copying Nova for a mid-cycle meetup to
 reorganize around our development priorities in Glance.

 So far, the plan is still tentative, but we're focusing on late January
 and the Washington, DC area. If you're interested and think you may be able
 to attend, please fill out this survey.


 https://docs.google.com/forms/d/11DjkNAzVAdtMCPrsLiyjA7ck33jnexmKqlqaCl5olO8/viewform

 3,
 markwash



 As a reminder, please fill out the form above if you are interested in a
 Glance mid-cycle meetup. Depending on interest, this meetup could serve a
 lot of purposes. For one, it will be an opportunity for Glance developers
 to meet face to face and hammer out the details for finishing the Icehouse
 release. For another, it can be a good opportunity to discover and plan
 longer term features for the product. Finally, we may also have the chance
 for developers to spend time together hacking or even learn about a few new
 techniques that may be relevant to future development. But it need not be
 restricted to current Glance developers--indeed some representation from
 other projects would be appreciated to help us improve how we serve the
 overall suite of projects.

 Thanks for your interest!

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Meeting cancelled this week

2013-11-26 Thread Mark Washenberger
Hi folks,

Our normally scheduled meeting this week will not be held, since it would
coincide with a major U.S. holiday and I'll be busy at that time stuffing
my face with turkey and a wide variety of starches.

We'll pick up where we left off next week during our normal timeslot (which
is at 1400 UTC).

Happy what-have-you!
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-25 Thread Mark Washenberger
On Fri, Nov 22, 2013 at 6:28 PM, Monty Taylor mord...@inaugust.com wrote:



 On 11/22/2013 06:55 PM, Mark Washenberger wrote:
 
 
 
  On Fri, Nov 22, 2013 at 1:13 PM, Robert Collins
  robe...@robertcollins.net mailto:robe...@robertcollins.net wrote:
 
  On 22 November 2013 22:31, Thierry Carrez thie...@openstack.org
  mailto:thie...@openstack.org wrote:
   Robert Collins wrote:
   I don't understand why branches would be needed here *if* the
  breaking
   changes don't impact any supported release of OpenStack.
  
   Right -- the trick is what does supported mean in that case.
  
   When the client libraries were first established as separate
   deliverables, they came up with a blanket statement that the latest
   version could always be used with *any* version of openstack you
 may
   have. The idea being, if some public cloud was still stuck in
  pre-diablo
   times, you could still use the same library to address both this
  public
   cloud and the one which was 2 weeks behind Havana HEAD.
 
  Huh. There are two different directions we use the client in.
 
  Client install - cloud API (of arbitrary version A)
 
  Server install (of arbitrary version B) using the Python library -
  cloud API (version B)
 
  From a gating perspective I think we want to check
  that:
   - we can use the client against some set of cloud versions A
   - that some set of version B where servers running cloud version B
  can use the client against cloud version B
 
  But today we don't test against ancient versions of A or B.
 
  If we were to add tests for such scenarios, I'd strongly argue that
 we
   only add them for case A. Where we are using the client lib in an
  installed cloud, we don't need to test that it can still be used
  against pre-diablo etc: such installed clouds can keep using the old
  client lib.
 
 
  I'm afraid that if a critical bug is found in the old client lib, the
  current path for fixing it is to ask people to update to the latest
  client version, even internally to their old cloud. So would that cause
  a branch for backporting fixes?

 The plan is that the current client libs should always be installable.
 So we would not (and never have) make a branch for backporting fixes.


Yes. I think wires are a bit crossed here, but you and I agree. It seemed
to me that Robert was suggesting that old clouds can internally keep using
old versions of client libs. Which seems wrong since we don't do backports,
so old clouds using old libs would never get security updates.



  FWIW, I don't think the changes glanceclient needs in v1 will break the
  'B' case above. But it does raise a question--if they did, would it be
  sufficient to backport a change to adapt old supported stable B versions
  of, say, Nova, to work with the v1 client? Honestly asking, a big ol' NO
  is okay.

 I'm not sure I follow all the pronouns. Could you re-state this again, I
 think I know what you're asking, but I'd like to be sure.


Sorry for being so vague. I'll try to be specific.

Branch nova/stable/folsom depends on python-glanceclient/master. Suppose we
find that nova/stable/folsom testing is broken when we stage (hopefully
before merging) the breaking changes that are part of the
python-glanceclient v1.0.0 release. Would it be acceptable in this case to
have a compatibility patch to nova/stable/folsom? Or will the only option
be to modify the python-glanceclient patch to maintain compatibility?


Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-22 Thread Mark Washenberger
On Fri, Nov 22, 2013 at 1:13 PM, Robert Collins
robe...@robertcollins.netwrote:

 On 22 November 2013 22:31, Thierry Carrez thie...@openstack.org wrote:
  Robert Collins wrote:
  I don't understand why branches would be needed here *if* the breaking
  changes don't impact any supported release of OpenStack.
 
  Right -- the trick is what does supported mean in that case.
 
  When the client libraries were first established as separate
  deliverables, they came up with a blanket statement that the latest
  version could always be used with *any* version of openstack you may
  have. The idea being, if some public cloud was still stuck in pre-diablo
  times, you could still use the same library to address both this public
  cloud and the one which was 2 weeks behind Havana HEAD.

 Huh. There are two different directions we use the client in.

 Client install - cloud API (of arbitrary version A)

 Server install (of arbitrary version B) using the Python library -
 cloud API (version B)

 From a gating perspective I think we want to check
 that:
  - we can use the client against some set of cloud versions A
  - that some set of version B where servers running cloud version B
 can use the client against cloud version B

 But today we don't test against ancient versions of A or B.

 If we were to add tests for such scenarios, I'd strongly argue that we
 only add them for case A. Where we are using the client lib in an
 installed cloud, we don't need to test that it can still be used
 against pre-diablo etc: such installed clouds can keep using the old
 client lib.


I'm afraid that if a critical bug is found in the old client lib, the
current path for fixing it is to ask people to update to the latest client
version, even internally to their old cloud. So would that cause a branch
for backporting fixes?

FWIW, I don't think the changes glanceclient needs in v1 will break the 'B'
case above. But it does raise a question--if they did, would it be
sufficient to backport a change to adapt old supported stable B versions
of, say, Nova, to work with the v1 client? Honestly asking, a big ol' NO is
okay.



 So assuming you agree with that assertion, where do we need a branch here?

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-22 Thread Mark Washenberger
On Fri, Nov 22, 2013 at 12:03 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2013-11-22 10:34:40 +0100 (+0100), Thierry Carrez wrote:
  It can be created on request by release management members (or
  infra-core team). I /think/ that by default it would get tested against
  master in other branches.

 More details at...

 URL: https://wiki.openstack.org/wiki/GerritJenkinsGithub#Merge_Commits 


Cool. Is this documentation essentially explaining how to keep a feature
branch up to date with master? (spoiler warning: carefully use merge
commits?)
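
As a sketch of how I imagine the workflow, assuming infra creates a
hypothetical feature/v1 branch (commands from memory, not verified):

git fetch origin
git checkout -b feature/v1 origin/feature/v1
# ...commit the breaking changes, one reviewable patch at a time...
git review feature/v1          # submit against the feature branch
# periodically bring the branch up to date with a merge commit:
git merge master
git review feature/v1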



 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-21 Thread Mark Washenberger
On Thu, Nov 21, 2013 at 1:58 AM, Thierry Carrez thie...@openstack.orgwrote:

 Mark Washenberger wrote:
  [...]
  In order to mitigate that risk, I think it would make a lot of sense to
  have a place to stage and carefully consider all the breaking changes we
  want to make. I also would like to have that place be somewhere in
  Gerrit so that it fits in with our current submission and review
  process. But if that place is the 'master' branch and we take a long
  time, then we can't really release any bug fixes to the v0 series in the
  meantime.
 
  I can think of a few workarounds, but they all seem kinda bad. For
  example, we could put all the breaking changes together in one commit,
  or we could do all this prep in github.
 
  My question is, is there a correct way to stage breaking changes in
  Gerrit? Has some other team already dealt with this problem?
  [...]

 It sounds like a case where we could use a feature branch. There have
 been a number of them in the past when people wanted to incrementally
 work on new features without affecting master, and at first glance
 (haha) it sounds appropriate here.


As a quick sidebar, what does a feature branch entail in today's parlance?


 Infra team, thoughts ?

 --
 Thierry Carrez (ttx)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to stage client major releases in Gerrit?

2013-11-20 Thread Mark Washenberger
Hi folks,

The project python-glanceclient is getting close to needing a major release
in order to finally remove some long-deprecated features, and to make some
minor adjustments that are technically backwards-incompatible.

Normally, our release process works great. When we cut a release (say
1.0.0), if we realize it doesn't contain a feature we need, we can just add
the feature and release a new minor version (say 1.1.0). However, when it
comes to cutting out the fat for a major release, if we find a feature that
we failed to remove before releasing 1.0.0, we're basically screwed. We
have to keep that feature around until we feel like releasing 2.0.0.

In order to mitigate that risk, I think it would make a lot of sense to
have a place to stage and carefully consider all the breaking changes we
want to make. I also would like to have that place be somewhere in Gerrit
so that it fits in with our current submission and review process. But if
that place is the 'master' branch and we take a long time, then we can't
really release any bug fixes to the v0 series in the meantime.

I can think of a few workarounds, but they all seem kinda bad. For example,
we could put all the breaking changes together in one commit, or we could
do all this prep in github.

My question is, is there a correct way to stage breaking changes in Gerrit?
Has some other team already dealt with this problem?

DISCLAIMER:
For the purposes of this discussion, it will be utterly unproductive to
discuss the relative merits of backwards-breaking changes. Rather let's
assume that all breaking changes that would eventually land in the next
major release are necessary and have been properly communicated well in
advance. If a given breaking change is *not* proper, well that's the kind
of thing I want to catch in gerrit reviews in the staging area!

Respectfully,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Interested in a mid-Icehouse Glance mini-summit?

2013-11-20 Thread Mark Washenberger
On Thu, Nov 14, 2013 at 1:35 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:

 Hi folks,

 There's been some talk about copying Nova for a mid-cycle meetup to
 reorganize around our development priorities in Glance.

 So far, the plan is still tentative, but we're focusing on late January
 and the Washington, DC area. If you're interested and think you may be able
 to attend, please fill out this survey.


 https://docs.google.com/forms/d/11DjkNAzVAdtMCPrsLiyjA7ck33jnexmKqlqaCl5olO8/viewform

 <3,
 markwash



As a reminder, please fill out the form above if you are interested in a
Glance mid-cycle meetup. Depending on interest, this meetup could serve a
lot of purposes. For one, it will be an opportunity for Glance developers
to meet face to face and hammer out the details for finishing the Icehouse
release. For another, it can be a good opportunity to discover and plan
longer term features for the product. Finally, we may also have the chance
for developers to spend time together hacking or even learn about a few new
techniques that may be relevant to future development. But it need not be
restricted to current Glance developers--indeed some representation from
other projects would be appreciated to help us improve how we serve the
overall suite of projects.

Thanks for your interest!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-20 Thread Mark Washenberger
On Wed, Nov 20, 2013 at 3:17 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Mark Washenberger's message of 2013-11-20 10:14:42 -0800:
  Hi folks,
 
  The project python-glanceclient is getting close to needing a major
 release
  in order to finally remove some long-deprecated features, and to make
 some
  minor adjustments that are technically backwards-incompatible.
 
  Normally, our release process works great. When we cut a release (say
  1.0.0), if we realize it doesn't contain a feature we need, we can just
 add
  the feature and release a new minor version (say 1.1.0). However, when it
  comes to cutting out the fat for a major release, if we find a feature
 that
  we failed to remove before releasing 1.0.0, we're basically screwed. We
  have to keep that feature around until we feel like releasing 2.0.0.
 
  In order to mitigate that risk, I think it would make a lot of sense to
  have a place to stage and carefully consider all the breaking changes we
  want to make. I also would like to have that place be somewhere in Gerrit
  so that it fits in with our current submission and review process. But if
  that place is the 'master' branch and we take a long time, then we can't
  really release any bug fixes to the v0 series in the meantime.
 
  I can think of a few workarounds, but they all seem kinda bad. For
 example,
  we could put all the breaking changes together in one commit, or we could
  do all this prep in github.
 
  My question is, is there a correct way to stage breaking changes in
 Gerrit?
  Has some other team already dealt with this problem?
 
  DISCLAIMER:
  For the purposes of this discussion, it will be utterly unproductive to
  discuss the relative merits of backwards-breaking changes. Rather let's
  assume that all breaking changes that would eventually land in the next
  major release are necessary and have been properly communicated well in
  advance. If a given breaking change is *not* proper, well that's the kind
  of thing I want to catch in gerrit reviews in the staging area!

 I understand what you're trying to do with this disclaimer. The message
 above just _screams_ for this discussion, so why not cut it off at the
 pass? However, glanceclient being a library, not discussing the fact
 that you're breaking an established API is like not discussing ice at
 the north pole.


It is both a CLI and a library. I think different considerations might be
more relevant for those different areas. But this point leads to a much
larger discussion of what standards we should enforce for major revisions,
which we should probably defer for just a moment. I hope the outcome of
discussing my main point will be a space where we can flesh out such
standards with real world examples.



 If you want to be able to change interfaces without sending a missile
 up the tail pipe of every project who depends on your code, call it
 glanceclient2. That solves all of your stated problems from above. You can
 still deprecate glanceclient and stop maintaining it after some overlap
 time. And if users hate glanceclient2, then they can keep glanceclient
 alive with all of its warts.


While I think glanceclient2 is an interesting suggestion for really large,
sweeping changes, I don't think we are anywhere near that point.

Overall, I think you are unintentionally mischaracterizing the nature of
the breaking changes that are being considered--making them seem several
orders of magnitude greater and more disruptive than they actually are. For
folks who believe that under very well-defined and conservative
circumstances it is okay to make a breaking change in a major release, this
mischaracterization is going to be really confusing.

I guess if we still want to talk about when, if ever, to release a major
version of a client, maybe we could take it to another thread? The proposal
to disallow major revisions of OpenStack clients under all normal
circumstances seems like great TC fodder in any case, especially now that
I'm only a spectator.

<3,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Summit Session Summaries

2013-11-15 Thread Mark Washenberger
Hi folks,

My summary notes from the OpenStack Design Summit Glance sessions follow.
Enjoy, and please help correct any misunderstandings.



Image State Consistency:


https://etherpad.openstack.org/p/icehouse-summit-image-state-consistency

In this session, we focused on the problem that snapshots which fail
after the image is created but before the image data is uploaded
result in a pending image that will never become active, and the
only operation nova can do is to delete the image. Thus there is
not a very good way to communicate the failure to users without
just leaving a useless image record around.

A solution was proposed to allow Nova to directly set the status
of the image, say to killed or some other state.

A problem with the proposed solution is that we generally have
kept the status field internally controlled by glance, which
means there are some modeling and authorization concerns.
However, it is actually something Nova could do today through
the hacky mechanism of initiating a PUT with data, but then
terminating the connection without sending a complete body. So
the authorization aspects are not really a fundamental concern.
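
To make that hack concrete, a minimal sketch with raw httplib
(host, port, and image id are placeholders): promise more bytes
than are actually sent, then drop the connection:

import httplib

image_id = 'some-image-id'  # placeholder

conn = httplib.HTTPConnection('glance.example.com', 9292)
conn.putrequest('PUT', '/v1/images/%s' % image_id)
conn.putheader('Content-Type', 'application/octet-stream')
conn.putheader('Content-Length', '1048576')  # promise 1 MiB of data...
conn.endheaders()
conn.send('a few bytes')                     # ...but send almost nothing
conn.close()                                 # glance sees the failed upload
                                             # and marks the image killed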

It was suggested that the solution to this problem
is to make Nova responsible for reporting these failures rather
than Glance. In the short term, we could do the following
 - have nova delete the image when snapshot fails (already merged)
 - merge nova patch to report the failure as part of instance
   error reporting

In the longer term, it was seen as desirable for nova to treat
snapshots as asynchronous tasks and reflect those tasks in the
api, including the failure/success of those tasks.

Another long term option that was viewed mostly favorably was
to add another asynchronous task to glance for vanilla uploads
so that nova snapshots can avoid creating the image until it
is fully active.

Fei Long Wang is going to follow up on what approach makes the
most sense for Nova and report back for our next steps.



What to do about v1?


https://etherpad.openstack.org/p/icehouse-summit-images-v1-api

In this discussion, we hammered out the details for how to drop
the v1 api and in what timetable.

Leaning heavily on cinder's experience dropping v1, we came
up with the following schedule.

Icehouse:
- Announce plan to deprecate the V1 API and registry in J and remove it
in K
- Announce feature freeze for v1 API immediately
- Make sure everything in OpenStack is using v2 (cinder, nova, ?)
- Ensure v2 is being fully covered in tempest tests
- Ensure there are no gaps in the migration strategy from v1 to v2
- after the fact, it seems to me we need to produce a migration
guide as a way to evaluate the presence of such gaps
- Make v2 the default in glanceclient
- Turn v2 on by default in glance API

J:
- Mark v1 as deprecated
- Turn v1 off by default in config

K:
- Delete v1 api and v1 registry


A few gotchas were identified. In particular, a concern was raised
about breaking stable branch testing when we switch the default in
glanceclient to v2--since the latest glanceclient will be used to test
glance in, say, Folsom or Grizzly, where the v2 api didn't really
work at all.

In addition, it was suggested that we should be very aggressive
in using deprecation warnings for config options to communicate
this change as loudly as possible.




Image Sharing
-

https://etherpad.openstack.org/p/icehouse-summit-enhance-v2-image-sharing

This session focused on the gaps between the current image sharing
functionality and what is needed to establish an image marketplace.

One issue was the lack of verification of project ids when sharing an image.

A few other issues were identified:
- there is no way to share an image with a large number of projects in a
single api operation
- membership lists are not currently paged
- there is no way to share an image with everyone; you must know each
other project's id

We identified a potential issue with bulk operations and
verification--namely there is no way to do bulk verification of project ids
in keystone that we know of, so probably keystone work would be needed to
have both of these features in place without implying super slow api calls.

In addition, we spent some time toying with the idea of image catalogs. If
publishers put images in catalogs, rather than having shared images show up
directly in other users' image lists, things would be a lot safer and we
could relax some of our restrictions. However, there are some issues with
this approach as well,
- How do you find the catalog of a trusted image publisher?
- Are we just pushing the issue of sensible world-listings to another
resource?
- This would be a big change.



Enhancing Image Locations:
--

https://etherpad.openstack.org/p/icehouse-summit-enhance-image-location-property

This session proposed adding several attributes to image locations

1. Add 'status' to 

Re: [openstack-dev] Glance Tasks

2013-11-14 Thread Mark Washenberger
Responses to both Jay and George inline.


On Wed, Nov 13, 2013 at 5:08 PM, Jay Pipes jaypi...@gmail.com wrote:

 Sorry for top-posting, but in summary, I entirely agree with George here.
 His logic is virtually identical to the concerns I raised with the initial
 proposal for Glance Tasks here:

 http://lists.openstack.org/pipermail/openstack-dev/2013-May/009400.html
 and
 http://lists.openstack.org/pipermail/openstack-dev/2013-May/009527.html


In my understanding, your viewpoints are subtly different.

George seems to agree with representing ongoing asynchronous tasks through
a separate 'tasks' resource. I believe where he differs with the current
design is how those tasks are created. He seems to prefer creating tasks
with POST requests to the affected resources. To distinguish between
uploading an image and importing an image, he suggests we require a
different content type in the request.
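
A rough sketch of that dispatch, assuming webob and an invented media
type name for the import content (none of this is settled design):

import json
import webob

def handle_upload(req):
    # synchronous: store the body, reply 201 Created
    return webob.Response(status=201, body=json.dumps({'status': 'active'}))

def handle_import(req):
    # asynchronous: create a task, reply 202 Accepted
    return webob.Response(status=202, body=json.dumps({'task': 'import',
                                                       'status': 'pending'}))

HANDLERS = {
    'application/octet-stream': handle_upload,
    'application/openstack-images-import+json': handle_import,  # invented
}

def post_images(req):
    handler = HANDLERS.get(req.content_type)
    if handler is None:
        return webob.Response(status=415)  # Unsupported Media Type
    return handler(req)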

However, your main point in the links above seemed to be to reuse POST
/v2/images, but to capture the asynchronous nature of image verification
and conversion by adding more nodes to the image state machine.



 Best,
 -jay


 On 11/13/2013 05:36 PM, George Reese wrote:

 Let’s preface this with Glance being the part of OpenStack I am least
 familiar with. Keep in mind my commentary is related to the idea that
 the asynchronous tasks as designed are being considered beyond Glance.
 The problems of image upload/import/cloning/export are unlike other
 OpenStack operations for the most part in that they involve binary data
 as the core piece of the payload.

 Having said that, I’d prefer a polymorphic POST to the tasks API as
 designed.


Thanks. I think we'll move forward with this design for now in Glance. But
your alternative below is compelling and we'll definitely consider as we
add future tasks. I also want to say that we could probably completely
adopt your proposal in the future as long as we also support backwards
compatibility with the current design, but I can't predict at this point
the practical concerns that will emerge.


 But I’m much more concerned with the application of the tasks
 API as designed to wider problems.


I think this concern is very reasonable. Other projects should evaluate
your proposal carefully.



 Basically, I’d stick with POST /images.

 The content type should indicate what the server should expect.
 Basically, the content can be:

 * An actual image to upload
 * Content describing a target for an import
 * Content describing a target for a clone operation

 Implementation needs dictate whether any given operation is synchronous
 or asynchronous. Practically speaking, upload would be synchronous with
 the other two being asynchronous. This would NOT impact an existing
 /images POST as it will not change (unless we suddenly made it
 asynchronous).

 The response would be CREATED (synchronous) or ACCEPTED (asynchronous).
 If ACCEPTED, the body would contain JSON/XML describing the asynchronous
 task.

 I’m not sure if export is supposed to export to a target object store or
 export to another OpenStack environment. But it would be an async
 operation either way and should work as described above. Whether the
 endpoint for the image to be exported is the target or just /images is
 something worthy of discussion based on what the actual function of the
 export is.

 -George

 On Nov 12, 2013, at 5:45 PM, John Bresnahan j...@bresnahan.me
 mailto:j...@bresnahan.me wrote:

  George,

 Thanks for the comments, they make a lot of sense.  There is a Glance
 team meeting on Thursday where we would like to push a bit further on
 this.  Would you mind sending in a few more details? Perhaps a sample
 of what your ideal layout would be?  As an example, how would you
 prefer actions are handled that do not affect a currently existing
 resource but ultimately create a new resource (for example the import
 action).

 Thanks!

 John


 On 11/11/13, 8:05 PM, George Reese wrote:

 I was asked at the OpenStack Summit to look at the Glance Tasks,
 particularly as a general pattern for other asynchronous operations.

 If I understand Glance Tasks appropriately, different asynchronous
 operations get replaced by a single general purpose API call?

 In general, a unified API for task tracking across all kinds of
 asynchronous operations is a good thing. However, assuming this
 understanding is correct, I have two comments:

 #1 A consumer of an API should not need to know a priori whether a
 given operation is “asynchronous”. The asynchronous nature of the
 operation should be determined through a response. Specifically, if
 the client gets a 202 response, then it should recognize that the
 action is asynchronous and expect a task in the response. If it gets
 something else, then the action is synchronous. This approach has the
 virtue of being proper HTTP and allowing the needs of the
 implementation to dictate the synchronous/asynchronous nature of the
 API call and not a fixed contract.
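
As a client-side sketch of what George describes (the endpoint, the media
type, and the poll_task helper are all illustrative):

import json
import requests

resp = requests.post(
    'http://glance.example.com:9292/v2/images',
    headers={'Content-Type': 'application/openstack-images-import+json'},
    data=json.dumps({'import_from': 'http://example.com/image.qcow2'}))

if resp.status_code == 202:    # ACCEPTED: asynchronous, body describes a task
    task = resp.json()
    # poll_task(task['id'])    # illustrative helper
elif resp.status_code == 201:  # CREATED: synchronous, resource is ready
    image = resp.json()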

 #2 I 

Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Mark Washenberger
On Thu, Nov 14, 2013 at 5:19 AM, Thierry Carrez thie...@openstack.org wrote:

 Thierry Carrez wrote:
  [...]
  That will not solve all issues. We should also collectively make sure
  that *usage questions are re-routed* to the openstack general
  mailing-list, where they belong. Too many people still answer off-topic
  questions here on openstack-dev, which encourages people to be off-topic
  in the future (traffic on the openstack general ML has been mostly
  stable, with only 868 posts in October). With those actions, I hope that
  traffic on openstack-dev would drop back to the 1000-1500 range, which
  would be more manageable for everyone.

 Other suggestion: we could stop posting meeting reminders to -dev (I
 know, I'm guilty of it) and only post something if the meeting time
 changes, or if the weekly meeting is canceled for whatever reason.


It seems excessive, I agree. But if your meeting time bounces on a biweekly
schedule to accommodate multiple timezones, I think it's quite necessary.



 --
 Thierry Carrez (ttx)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Interested in a mid-Icehouse Glance mini-summit?

2013-11-14 Thread Mark Washenberger
Hi folks,

There's been some talk about copying Nova for a mid-cycle meetup to
reorganize around our development priorities in Glance.

So far, the plan is still tentative, but we're focusing on late January and
the Washington, DC area. If you're interested and think you may be able to
attend, please fill out this survey.

https://docs.google.com/forms/d/11DjkNAzVAdtMCPrsLiyjA7ck33jnexmKqlqaCl5olO8/viewform

<3,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Mark Washenberger
On Wed, Nov 13, 2013 at 8:02 AM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Nov 13 2013, John Griffith wrote:

  Trivial or not, people use it and frankly I don't see any value at all
  in removing it.  As far as the some projects want a different format
  of UUID that doesn't make a lot of sense to me but if that's what
  somebody wants they should write their own method.  I strongly agree
  with others with respect to the comments around code-churn.  I see
  little value in this.

 The thing is that code in oslo-incubator is supposed to be graduated to
 standalone Python library.

 We see little value in a library providing a library for a helper doing
 str(uuid.uuid4()).


For the currently remaining function in uuidutils, is_uuid_like, could we
potentially just add this functionality to the standard library?
Something like:

>>> uuid.UUID('12345678-1234-5678-1234-567812345678')
UUID('12345678-1234-5678-1234-567812345678')
>>> uuid.UUID('12345678-1234-5678-1234-567812345678'.replace('-', ''))
UUID('12345678-1234-5678-1234-567812345678')
>>> uuid.UUID('12345678-1234-5678-1234-567812345678'.replace('-', ''),
...           strict=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/uuid.py", line 134, in __init__
    raise ValueError('badly formed hexadecimal UUID string')
ValueError: badly formed hexadecimal UUID string

I've had a few situations where UUID's liberal treatment of what it
consumes has seemed a bit excessive, anyway. Not sure if this approach is a
bit too naive, however.
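
For reference, the helper in question is implemented in oslo-incubator
roughly as:

import uuid

def is_uuid_like(val):
    # canonical round-trip: only properly formed UUID strings survive
    try:
        return str(uuid.UUID(val)) == val
    except (TypeError, ValueError, AttributeError):
        return False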




 --
 Julien Danjou
 /* Free Software hacker * independent consultant
http://julien.danjou.info */



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Case sensitivity backend databases

2013-11-13 Thread Mark Washenberger
Resurrecting this thread. . .


I think I'm misunderstanding where we landed on this issue. On the one
hand, it seems like there are tests to assert that uniqueness of names is
case-sensitive. On the other, some folks have identified reasons why they
would want case-insensitivity on uniqueness checks for creating new users.
Still others I think have wisely pointed out that we should probably get
out of the business of creating users.

Trying to incorporate all of these perspectives, I propose the following:

1) We add a configuration option to just the keystone sql identity driver
to force case-sensitivity on uniqueness checks. I'm pretty sure there is a
way to do this in sqlalchemy, basically whatever is equivalent to 'SELECT *
FROM user WHERE BINARY name = %s'. This config option would only affect
create_user and update_user.
2) We always force case-sensitive comparison for get_user_by_name, using a
similar mechanism as above.

By focusing on changes to queries we needn't bother with a migration and
can make the behavior a deployer choice.
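
For illustration, a minimal sqlalchemy sketch of the case-sensitive lookup
on MySQL (the User model, the session handling, and the utf8_bin collation
name are assumptions that would need checking against keystone's actual
driver):

from sqlalchemy import collate

def get_user_by_name(session, name):
    # comparing under a binary collation makes 'Admin' != 'admin' on MySQL
    return (session.query(User)
                   .filter(collate(User.name, 'utf8_bin') == name)
                   .first())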

Is this a bad goal or approach?

IANADBA,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Meeting Reminder Thursday at 2000 UTC

2013-11-12 Thread Mark Washenberger
Hi folks,

We'll have a Glance team meeting this Thursday at 2000 UTC (don't forget
that UTC applies to both the time and the date!). In your timezone that is
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20131114T20&ah=1.

As usual the meeting room is #openstack-meeting-alt on freenode.

The agenda can be found at
https://etherpad.openstack.org/p/glance-team-meeting-agenda so please feel
free to suggest items.

This week, I hope we can spend a good chunk of time figuring out how to
improve our review responsiveness.

Thanks, see you there.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] HK Summit - Image Creation and Customization unconference?

2013-10-31 Thread Mark Washenberger
On Thu, Oct 31, 2013 at 12:20 PM, Ian McLeod imcl...@redhat.com wrote:

 On Thu, 2013-10-31 at 11:56 -0700, Clint Byrum wrote:
  Excerpts from Ian McLeod's message of 2013-10-31 11:27:39 -0700:
   Hello,
  
   Would any of you attending the summit be interested in snagging an
   unconference session to discuss the state of play with image creation,
   customization and import?
  
   I can contribute an overview and demonstration of our Nova-native image
   building tool.
  
   I'd be interested in exploring integration points with Disk Image
   Builder and Glance.
  
   Any takers?


Sounds like a good opportunity!


 
  I am certain that one of the tripleo-core devs will want to be there,
  if not many of us. We don't have that many sessions scheduled so I'd
  suggest just trying to aim at when we're not having Deployment sessions.

 Those are exclusively on Tuesday, yes?

 http://icehousedesignsummit.sched.org/overview/type/tripleo+%28deployment%29#.UnKs1x8u2al

 So any time Wednesday onward?


That broad schedule constraint is good for Glance as well, as all of our
sessions are on Tuesday.




 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-10-30 Thread Mark Washenberger
On Wed, Oct 30, 2013 at 9:04 AM, Eddie Sheffield 
eddie.sheffi...@rackspace.com wrote:



 Mark Washenberger mark.washenber...@markwash.net said:

   I am not a fan of all the specific talk to glance code we have in
   nova, moving more of that into glanceclient can only be a good thing.
   For the XenServer integration, for efficiency reasons, we need glance
   to talk from dom0, so it has dom0 making the final HTTP call. So we
   would need a way of extracting that info from the glance client. But
   that seems better than having that code in nova.
 
  I know in Glance we've largely taken the view that the client should be
 as
  thin and lightweight as possible so users of the client can make use of
 it
  however they best see fit. There was an earlier patch that would have
 moved
  the whole image service layer into glanceclient that was rejected. So I
  think there is a division in philosophies here as well.
 
 
 
  Indeed, I think I was being a bit of a stinker on this issue. Mea culpa.
 
  I've had some time to think and I realized that there is a bit of
  complexity here that needs to be untangled. Historically, the glance
 client
  (and I think *most* openstack clients) have had versioned directories
 that
  attempt to be as faithful a representation of the given version of an API
  as possible. That was a history I was trying to maintain for continuity's
  sake in the past.
 
  However, with some more thought, this historical objective seems
 literally
  insane to me. In fact, it makes it basically impossible to publish a
 useful
  client library because such a library has no control to smooth over
  backwards incompatibility from one major version to the next.
 
  At this point I'm a lot more interested in Ghe's patch (
  https://review.openstack.org/#/c/33327/)
 
  I'm a bit concerned that we might need to make the image client interface
  even more stripped down in order to focus support on the intersection of
 v1
  and v2 of the image api. In particular, I'm not sure how well the old
 nova
  image service api will deal with invalid property values (v2 has property
  schemas). And there's no support AFAICT for image sharing, and image
  sharing should not be used in v1 for security reasons.
 
  On the other hand, maybe we don't really want to move forward based on
 how
  nova viewed the image repository in the past. There might be a better
 image
  client api waiting to be discovered by some intrepid openstacker. This
  could make sense as well if there is some traction for eventually
  deprecating the v1 api. But in any case, it does sound like we need an
  image client with its own proper api that can be ported from version to
  version.
 
  /ramble

 Hmmm, pretty big turnaround but one I mostly agree with. I would like to
 see more discussion on what this unified interface would look like rather
 than just pulling in what's in Nova (tho we might converge on that anyway.)
 I do worry about what to do about unique functionality in the various API
 versions. It might be that the most common functionality is exposed in the
 service interface, and if you need some of the more API specific
 functionality you can use the lower-level client interfaces. Alternatively
 the interface might contain everything possible; and where it can, smooth
 over the differences and where it can't, raise NotImplemented exceptions.

 Mark, can we get some discussion of this in our Glance meeting tomorrow
 (10/31)?


Definitely, adding it to the agenda now.




 Eddie




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting Reminder

2013-10-30 Thread Mark Washenberger
Hi folks,

There will be a team meeting this week on Thursday at 20:00 UTC in
#openstack-meeting-alt. That is, in your timezone:
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20131031T20&ah=1

The agenda is posted here
https://etherpad.openstack.org/p/glance-team-meeting-agenda . Since its an
etherpad, feel free to add items you'd like to discuss.

I'll preemptively mention that we will definitely *not* be having a meeting
next week, as we will still be mid-summit at the time.

Thanks
-markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-10-29 Thread Mark Washenberger
  I am not a fan of all the specific talk to glance code we have in
  nova, moving more of that into glanceclient can only be a good thing.
  For the XenServer integration, for efficiency reasons, we need glance
  to talk from dom0, so it has dom0 making the final HTTP call. So we
  would need a way of extracting that info from the glance client. But
  that seems better than having that code in nova.

 I know in Glance we've largely taken the view that the client should be as
 thin and lightweight as possible so users of the client can make use of it
 however they best see fit. There was an earlier patch that would have moved
 the whole image service layer into glanceclient that was rejected. So I
 think there is a division in philosophies here as well.



Indeed, I think I was being a bit of a stinker on this issue. Mea culpa.

I've had some time to think and I realized that there is a bit of
complexity here that needs to be untangled. Historically, the glance client
(and I think *most* openstack clients) have had versioned directories that
attempt to be as faithful a representation of the given version of an API
as possible. That was a history I was trying to maintain for continuity's
sake in the past.

However, with some more thought, this historical objective seems literally
insane to me. In fact, it makes it basically impossible to publish a useful
client library because such a library has no control to smooth over
backwards incompatibility from one major version to the next.

At this point I'm a lot more interested in Ghe's patch (
https://review.openstack.org/#/c/33327/)

I'm a bit concerned that we might need to make the image client interface
even more stripped down in order to focus support on the intersection of v1
and v2 of the image api. In particular, I'm not sure how well the old nova
image service api will deal with invalid property values (v2 has property
schemas). And there's no support AFAICT for image sharing, and image
sharing should not be used in v1 for security reasons.

On the other hand, maybe we don't really want to move forward based on how
nova viewed the image repository in the past. There might be a better image
client api waiting to be discovered by some intrepid openstacker. This
could make sense as well if there is some traction for eventually
deprecating the v1 api. But in any case, it does sound like we need an
image client with its own proper api that can be ported from version to
version.

/ramble
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Does DB schema hygiene warrant long migrations?

2013-10-24 Thread Mark Washenberger
On Thu, Oct 24, 2013 at 3:06 PM, Robert Collins
robe...@robertcollins.net wrote:

 On 25 October 2013 10:04, Chris Behrens cbehr...@codestud.com wrote:
 
  On Oct 24, 2013, at 1:33 PM, Robert Collins robe...@robertcollins.net
 wrote:
 
  -2 to 10 minute downtimes.
 
  +1 to doing the evolution gracefully. There is a spec for doing that
  from the H summit; someone just needs to implement it.
 
  +1.  IMO, we need to move to a model where code can understand multiple
 schemas and migrate to newer schema on the fly.  The object code in nova
 will be able to help us do this.  Combine this with some sort of background
 task if you need to speed up the conversion.  Any migrations that need to
 run through all of the data in a table during downtime is just not going to
 scale.
 
  I am personally tired of having to deal with DB migrations having to run
 for 1 hour during upgrades that happened numerous times throughout the
 Havana development cycle.

 We had a clear design at the H summit, and folk committed to working
 on it (Johannes and Mark W); not sure what happened...


/me runs from room crying



 https://etherpad.openstack.org/p/HavanaNoDowntimeDBMigrations

 -Rob
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-10-23 Thread Mark Washenberger
Hi folks!

In the images api, we depend on iso8601 to parse some dates and times.
Recently, since version 0.1.4, python-iso8601 added support for a few more
formats, and we finally got some other issues nailed down by 0.1.8. Maybe
the fact that these formats weren't supported before was a bug. I don't
really know.

This puts us in an awkward place, however. With the help of our unit tests,
we noticed that, if you switch from 0.1.8 back to 0.1.4 in your deployment,
your image api will lose support for certain datetime formats like
YYYY-MM-DD (where the time part is assumed to be all zeros). This obviously
creates a (perhaps small) compatibility concern.
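
Concretely, the difference looks like this, using iso8601's parse_date
entry point:

import iso8601

iso8601.parse_date('2013-10-23T16:20:00Z')  # accepted by 0.1.4 and 0.1.8
iso8601.parse_date('2013-10-23')            # 0.1.8 parses it; 0.1.4 raises
                                            # iso8601.ParseError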

Here are our alternatives:

1) Adopt 0.1.8 as the minimum version in openstack-requirements.
2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way, and
just fix the tests so they don't care about these extra formats)
3) Make Glance work with the added formats even if 0.1.4 is installed.

As of yesterday we were resolved to do #3, trying to be good citizens. But
it appears that to do so requires essentially reimplementing a large swath
of iso8601 0.1.8 in glance itself. Gross!

So, I'd like to suggest that we instead adopt option #1, or alternatively
agree that option #2 is no big deal, we can all go back to sleep. Thoughts?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting Reminder

2013-10-22 Thread Mark Washenberger
Hi folks,

Just reminding you that we'll have our meeting during the early timeslot
this week, at 14:00 UTC on October 24th. In your locale, that's
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20131022T14&ah=1.
The channel as always is #openstack-meeting-alt and all are welcome to
attend.

The agenda is currently forming at
https://etherpad.openstack.org/p/glance-team-meeting-agenda so feel free to
add any items you want to discuss.

Cheers,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Summit proposal deadline

2013-10-22 Thread Mark Washenberger
Hi folks,

Please submit any design summit session proposals in the next 24 hours.

markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting Reminder

2013-10-15 Thread Mark Washenberger
Hi Glance folks,

There will be a team meeting this week on Thursday at 20:00 UTC in
#openstack-meeting-alt. That is, in your timezone:
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20131017T20&ah=1

The agenda is posted here
https://etherpad.openstack.org/p/glance-team-meeting-agenda . Since its an
etherpad, feel free to add items you'd like to discuss.

Cheers, and thanks for all your great work on getting out RC2!

-markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting reminder

2013-10-08 Thread Mark Washenberger
Hi Glance folks,

We will have a team meeting this Thursday October 10th at 14:00 UTC in
#openstack-meeting-alt. All are welcome to attend.

For time information in your timezone, see
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20131010T14&ah=1

The agenda can be found at
https://etherpad.openstack.org/glance-team-meeting-agenda. Feel free to
suggest agenda items.

Thanks,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Glance PTL candidacy

2013-09-24 Thread Mark Washenberger
Hi all,

I'd like to submit myself as a candidate to continue in the role of Glance
PTL for the Icehouse release cycle.

A bit of history about me: I joined Rackspace's Team Titan back in February
2011, where we were initially focused on filling out the OpenStack 1.1 api for
nova. I've been working with Glance apis and core functionality for about 4
cycles now. Since November 2012 I've been an employee of Nebula, working
alongside some of the original NASA OpenStack folks and the former Glance
PTL Brian Waldon.

As Icehouse PTL, I expect to proceed much in the same way as during Havana,
hopefully with lots of realized opportunities for incremental improvement.
That's about the most honest endorsement I can give of myself. I'm very
open to any advice, suggestions, or other contributors who want to take on
more responsibility in the near future. One thing I learned during Havana
was that I was in a better position before becoming PTL to work on the
particular items I care most about, so any sensible responsibility sharing
that would free me up to continue working on refining code organization and
test performance is very welcome.

Thank you,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting Reminder

2013-09-24 Thread Mark Washenberger
Hi folks,

Just to remind you, we'll be having a team meeting this Thursday, September
26th at 14:00 UTC. For your local time, please see
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20130926T14&ah=1
.

In particular, we'll want to make sure we've signed off on RC1 or any last
bugfixes during this week's meeting.

Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Review request for adding ordereddict

2013-09-19 Thread Mark Washenberger
I respect the desires of packagers to have a stable environment, but I'm
also very sad about having to copy the OrderedDict code directly into
Glance. Can we actually verify that this is a problem for packagers? (I.e.
not already in their repos?)

It also may be possible that packagers who do not support python2.6 could
completely avoid this problem if we change how the code is written. Does it
seem possible to only depend on ordereddict if collections.OrderedDict does
not exist?
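
Something like the following sketch is what I have in mind ('ordereddict'
being the py2.6 backport package on PyPI):

try:
    from collections import OrderedDict  # py2.7+
except ImportError:
    from ordereddict import OrderedDict  # py2.6 backport package

import ConfigParser

# sections are then kept in read order on both python versions
parser = ConfigParser.RawConfigParser(dict_type=OrderedDict)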


On Mon, Sep 16, 2013 at 11:27 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Mon, Sep 16, 2013 at 11:34 AM, Paul Bourke pauldbou...@gmail.com wrote:

 Hi all,

 I've submitted https://review.openstack.org/#/c/46474/ to add
 ordereddict to openstack/requirements.


 Related thread:
 http://lists.openstack.org/pipermail/openstack-dev/2013-September/015121.html


 The reasoning behind the change is that we want ConfigParser to store
 sections in the order they're read, which is the default behavior in
 py2.7[1], but it must be specified in py2.6.

 The following two Glance features depend on this:

 https://review.openstack.org/#/c/46268/
 https://review.openstack.org/#/c/46283/

 Can someone take a look at this change?

 Thanks,
 -Paul

 [1]
 http://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser





 --

 -Dolph



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] No meeting this week

2013-09-10 Thread Mark Washenberger
Hi folks,

I will be out this week after 23:00 UTC Wednesday, so I can't hold the
glance meeting this Thursday. I'll be back next Wednesday the 19th.

Let's get those RC1 bugs finished and reviewed!

Thanks!
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] FFE Request: Add swift_store_ssl_compression param

2013-09-09 Thread Mark Washenberger
I buy an exception here if the configuration defaults to off (no change).

Thanks!


On Mon, Sep 9, 2013 at 8:29 AM, Thierry Carrez thie...@openstack.org wrote:

 stuart.mcla...@hp.com wrote:
  This change:
 
  https://review.openstack.org/#/c/32919/
 
  is the final piece of a puzzle that allows a (potentially significant)
  performance improvement for image uploads (snapshots)/downloads when
  using ssl.

 I guess that's why I like puzzles being targeted to the second milestone
 in the cycle... deferring those is painful.

  Without this patch swift will be a bottleneck running at ~17 MB/s while
  the other parts can potentially reach ~100 MB/s.
 
  Risk: Currently the patch sets compression to be disabled by default
  (giving better performance), but the old behaviour can be reverted
  by setting the relevant config parameter. (We could even potentially
  consider defaulting to the old behaviour.)

 At this stage of the cycle, I would definitely consider defaulting to
 the old behavior for Havana, and then this exception would be a
 no-brainer. With the proposed default, I have to ask how much mileage
 the SSL without compression swift mode has seen, and the risk is
 substantially higher.

  The patch was originally uploaded on Jun 13.

 Yes, but it was WIP-ed for two months waiting for the necessary support
 to land in Swift...

 --
 Thierry Carrez (ttx)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo.db] Configuration options

2013-08-21 Thread Mark Washenberger
Josh, thanks for highlighting this. This example is a good reason why oslo
libraries should decouple their useful bits from any configuration
assumptions. Much of the code already allows use without requiring you to
adopt configuration code. But we should make all of it like that.
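
For example, one decoupled shape would let each consumer register the
options on its own ConfigOpts object rather than on the global cfg.CONF
(a sketch, with the option list abbreviated):

from oslo.config import cfg

db_opts = [
    cfg.StrOpt('backend', default='sqlalchemy',
               help='The backend to use for db'),
]

# taskflow and the consuming service each get an independent namespace
taskflow_conf = cfg.ConfigOpts()
taskflow_conf.register_opts(db_opts, group='taskflow_db')

service_conf = cfg.ConfigOpts()
service_conf.register_opts(db_opts, group='database')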


On Wed, Aug 21, 2013 at 3:42 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

  Another question related to making oslo.db a pypi library and relevant
 to how taskflow is used.

  Currently taskflow has a persistence layer, its using a copy of
 oslo-incubator db module to do this.

  That copied code (soon to be library I hope) has the following:

  db_opts = [
 cfg.StrOpt('backend',
default='sqlalchemy',
deprecated_name='db_backend',
deprecated_group='DEFAULT',
help='The backend to use for db'),
 cfg.BoolOpt('use_tpool',
 default=False,
 deprecated_name='dbapi_use_tpool',
 deprecated_group='DEFAULT',
 help='Enable the experimental use of thread pooling for '
  'all DB API calls')
 ]

  Now if oslo.db is a library, and taskflow and the integrated project
 want to use a database backend (potentially a different one) how would that
 be possible with a single library configuration?

  It would seem like the configuration done like this would not allow for
 that, and I could see taskflow having local sqlite as its backend
 (different DB config in this case, same backend), while the integrated
 project using mysql (for whatever its storing).

  Would something like that be possible?

  Thoughts??

  -josh





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-20 Thread Mark Washenberger
On Tue, Aug 20, 2013 at 3:20 AM, Flavio Percoco fla...@redhat.com wrote:

 On 20/08/13 00:15 -0700, Mark Washenberger wrote:


2) I highly caution folks who think a No-SQL store is a good
 storage
solution for any of the data currently used by Nova, Glance
 (registry),
Cinder (registry), Ceilometer, and Quantum. All of the data stored
 and
manipulated in those projects is HIGHLY relational data, and not
objects/documents. Switching to use a KVS for highly relational
 data is
a terrible decision. You will just end up implementing joins in
 your
code...



+1

FWIW, I'm a huge fan of NoSQL technologies but I couldn't agree more
here.



 I have to say I'm kind of baffled by this sentiment (expressed here and
 elsewhere in the thread.) I'm not a NoSQL expert, but I hang out with a
 few and
 I'm pretty confident Glance at least is not that relational. We do two
 types of
 joins in glance. The first, like image properties, is basically just an
 implementation detail of the sql driver. Its not core to the application.
 Any
 NoSQL implementation will simply completely denormalize those properties
 into
 the image record. (And honestly, so might an optimized SQL
 implementation. . .)
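
To illustrate the denormalization point: what the SQL driver stores as an
image row plus an image_properties join table, a document store would keep
as a single record (field names abbreviated for illustration):

image_doc = {
    'id': '71c675ab-d94f-49cd-a114-e12490b328d9',
    'name': 'ubuntu-12.04-server',
    'status': 'active',
    'is_public': True,
    'properties': {            # formerly rows in the image_properties table
        'os_distro': 'ubuntu',
        'os_version': '12.04',
    },
}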

 The second type of join, image_members, is basically just a hack to solve
 the
 problem created because the glance api offers several simultaneous
 implicit
 views of images. Specifically, when you list images in glance, you are
 seeing
 a union of three views: public images, images you own, and images shared
 with
 you. IMO its actually a more scalable and sensible solution to make these
 views
 more explicit and independent in the API and code, taking a lesson from
 filesystems which have to scale to a lot of metadata (notice how
 visibility is
 generally an attribute of a directory, not of regular files in your
 typical
 Unix FS?). And to solve this problem in SQL now we still have to do a
 server-side union, which is a bit sad. But even before we can refactor
 the API
 (v3 anyone?) I don't see it as unworkably slow for a NoSQL driver to track
 these kinds of views.


 You make really good points here but I don't fully agree.


Thanks for your measured response. I wrote my previous response a bit late
at night for me and I hope I wasn't rude :-/


 I don't think the issue is actually translating Glance's models to
 NoSQL or NoSQL db's performance, I'm pretty sure we could benefit in some
 areas but not all of them. To me, and that's what my comment was referring
 to, this is more related to  what kind of data we're actually
 treating, the guarantees we should provide and how they are
 implemented now.

 There are a couple of things that would worry me about an hypothetic
 support for NoSQL but I guess one that I'd consider very critical is
 migrations. Some could argue asking whether we'd really need them or
 not  - when talking about NoSQL databases - but we do. Using a
 schemaless database wouldn't mean we don't have a schema. Migrations
 are not trivial for some NoSQL databases, plus, this would mean
 drivers, most probably, would have to have their own implementation.


I definitely think different drivers will need their own migrations. When
I've been playing around with this refactoring, I created a Migrator
interface and made it part of the driver interface to instantiate an
appropriate migrator object. But I was definitely concerned about a number
of things here. First off, is it just too confusing to have multiple
migrations? The migration sequences will definitely need to be different
per driver. How do we support cross-driver migrations?
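
The arrangement I have been playing with looks roughly like this (details
invented for the sketch):

import abc

class Migrator(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def upgrade(self, to_version=None):
        # bring this driver's backing schema up to to_version
        pass

class Driver(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def get_migrator(self):
        # return a Migrator appropriate for this driver's store
        pass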




  The bigger concern to me is that Glance seems a bit trigger-happy with
 indexes.
 But I'm confident we're in a similar boat there: performance in NoSQL
 won't be
 that terrible for the most important use cases, and a later refactoring
 can put
 us on a more sustainable track in the long run.


 I'm not worried about this, though.


Okay, that is reassuring.



  All I'm saying is that we should be careful not to swap one set of
 problems for another.


  My 2 cents: I am in agreement with Jay.  I am leery of NoSQL being a
 direct sub in and I fear that this effort can be adding a large workload
 for little benefit.


 The goal isn't really to replace sqlalchemy completely. I'm hoping I can
 create
 a space where multiple drivers can operate efficiently without
 introducing bugs
 (i.e. pull all that business logic out of the driver!) I'll be very
 interested
 to see if people can, after such a refactoring, try out some more storage
 approaches, such as dropping the sqlalchemy orm in favor of its generic
 engine
 support or direct sql execution, as well as NoSQL what-have-you. We don't
 have
 to make all of the drivers live in the project, so it really can be a good
 place for interested parties to experiment.


 And this is exactly what I'm concerned about. There's a lot of
 business logic implemented

Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Mark Washenberger
I would prefer to pick and choose which parts of oslo common db code to
reuse in glance. Most parts there look great and very useful. However, some
parts seem like they would conflict with several goals we have.

1) To improve code sanity, we need to break away from the idea of having
one giant db api interface
2) We need to improve our position with respect to new, non SQL drivers
- mostly, we need to focus first on removing business logic (especially
authz) from database driver code
- we also need to break away from the strict functional interface,
because it limits our ability to express query filters and tends to lump
all filter handling for a given function into a single code block (which
ends up being defect-rich and confusing as hell to reimplement; see the
sketch after this list)
3) It is unfortunate, but I must admit that Glance's code in general is
pretty heavily coupled to the database code and in particular the schema.
Basically the only tool we have to manage that problem until we can fix it
is to try to be as careful as possible about how we change the db code and
schema. By importing another project, we lose some of that control. Also,
even with the copy-paste model for oslo incubator, code in oslo does have
some of its own reasons to change, so we could potentially end up in a
conflict where glance db migrations (which are operationally costly) have
to happen for reasons that don't really matter to glance.
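
As a sketch of the second point above (the shapes here are entirely
hypothetical):

class Filter(object):
    def __init__(self, attr, op, value):
        self.attr = attr
        self.op = op
        self.value = value

def list_images(driver, filters):
    # each driver translates Filter objects into its own query language,
    # instead of one function body interpreting a grab-bag of kwargs
    return driver.query('images', filters)

# e.g. active images owned by one tenant:
# list_images(driver, [Filter('owner', 'eq', 'some-tenant-id'),
#                      Filter('status', 'eq', 'active')])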

So rather than framing this as glance needs to use oslo common db code, I
would appreciate framing it as glance database code should have features
X, Y, and Z, some of which it can get by using oslo code. Indeed, I
believe in IRC we discussed the idea of writing up a wiki listing these
feature improvements, which would allow a finer granularity for evaluation.
I really prefer that format because it feels more like planning and less
like debate :-)

 I have a few responses inline below.

On Fri, Aug 16, 2013 at 6:31 AM, Victor Sergeyev vserge...@mirantis.com wrote:

 Hello All.

 Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
 questions about Oslo DB code, and why is it so important to use it instead
 of custom implementation and so on. As there were a lot of questions it was
 really hard to answer on all this questions in IRC. So we decided that
 mailing list is better place for such things.

 List of main questions:

 1. What includes oslo DB code?
 2. Why is it safe to replace custom implementation by Oslo DB code?
 3. Why oslo DB code is better than custom implementation?
 4. Why oslo DB code won’t slow up project development progress?
 5. What we are going actually to do in Glance?
 6. What is the current status?

 Answers:

 1. What includes oslo DB code?

 Currently Oslo code improves different aspects around DB:
 -- Work with SQLAlchemy models, engine and session
 -- Lots of tools for working with SQLAlchemy
 -- Handling of unique keys
 -- A base test case for working with the database
 -- Testing migrations against different backends
 -- Syncing DB models with the actual schemas in the DB (a test that they
 are equivalent)


 2. Why is it safe to replace custom implementation by Oslo DB code?

 Oslo, as a base openstack module, takes care of code quality.
 Usually, common code is more readable (most flake8 checks are enabled in
 Oslo) and has better test coverage. It has also been exercised in different
 use-cases (production included) in other projects, so bugs in the Oslo code
 have already been fixed. So we can be sure that we are using high-quality
 code.


Alas, while testing and static style analysis are important, they are not
the only relevant aspects of code quality. Architectural choices are also
relevant. The best reusable code places few requirements on the code that
reuses it architecturally--in some cases it may make sense to refactor oslo
db code so that glance can reuse the correct parts.




 3. Why oslo DB code is better than custom implementation?

 There are some arguments pro Oslo database code

 -- common code collects useful features from different projects
 Various utils for working with the database, a common test class, a module
 for database migrations, and other features are already in the Oslo db
 code. A patch for automatically retrying db.api queries when the db
 connection is lost is on review at the moment. If we use the Oslo db code
 we need not worry about how to port these (and future) features to Glance -
 they will come to all projects automatically once they land in Oslo.

 -- unified project work with database
 As it was already said,  It can help developers work with database in a
 same way in different projects. It’s useful if developer work with db in a
 few projects - he use same base things and got no surprises from them.


I'm not very motivated by this argument. I rarely find novelty that
challenging to understand when working with a project, personally. Usually
I'm much more stumped when code is heavily coupled to other modules or too
many responsibilities are lumped together in one module. In general
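
As an illustration of the "automatic retry" feature Victor mentions above,
such a decorator might look roughly like this (names and the exact
exception handling are assumptions, not the actual patch under review):

    import functools
    import time

    from sqlalchemy import exc

    def retry_on_disconnect(retries=3, delay=0.5):
        """Retry a db.api call when the database connection is lost."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(retries + 1):
                    try:
                        return func(*args, **kwargs)
                    except exc.OperationalError:
                        # Connection-level failure: back off and retry,
                        # re-raising once the retry budget is exhausted.
                        if attempt == retries:
                            raise
                        time.sleep(delay * (attempt + 1))
            return wrapper
        return decorator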

Re: [openstack-dev] [Glance] Blueprint proposal - Import / Export images with user properties

2013-08-14 Thread Mark Washenberger
Let's dig into this a bit more so that I can understand it.

Given that we have properties that we want to export with an image, where
would those properties be stored? Somewhere in the image data itself? I
believe some image formats support metadata, but I can't imagine all of
them would. Is there a specific format you're thinking of using?
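
For the sake of discussion, one conceivable shape for this -- a sidecar
manifest exported next to the image data, purely a sketch and not
something either blueprint specifies -- would be:

    import json

    def export_image_manifest(image, manifest_path):
        """Write an image's properties to a JSON manifest alongside the
        exported image data, instead of embedding them in the data itself.
        'image' is assumed to expose name, disk_format and a properties
        dict."""
        manifest = {
            'name': image.name,
            'disk_format': image.disk_format,
            'properties': dict(image.properties),
        }
        with open(manifest_path, 'w') as f:
            json.dump(manifest, f, indent=2)

That sidesteps the per-format metadata question, at the cost of keeping
two artifacts together.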


On Wed, Aug 14, 2013 at 8:36 AM, Emilien Macchi emilien.mac...@enovance.com
 wrote:

  Hi,


 I would like to discuss two blueprint proposals here (maybe I could
 merge them into one if you prefer):

 https://blueprints.launchpad.net/glance/+spec/api-v2-export-properties
 https://blueprints.launchpad.net/glance/+spec/api-v2-import-properties

 *Use case*:
 I would like to set specific properties on an image which could represent
 a signature, useful for licensing requirements for example.
 To do that, I should be able to export an image with its user properties
 included.

 Then, a user could reuse the exported image in the public cloud, and
 Glance would be aware of its properties.
 Obviously, we need the import / export feature.

 The idea here is to be able to identify an image after cloning or whatever
 via a property field. Of course, the user could break it by editing the
 image manually, but I assume he / she won't.


 Let me know if you have any thoughts and if the blueprint is valuable.

  Regards,

 --
 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
 // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris




Re: [openstack-dev] Blueprint for Nova native image building

2013-08-08 Thread Mark Washenberger
There is a current proposal in glance that is receiving development
attention towards importing images asynchronously from a variety of
sources. The import feature is plugin-based, so it would be easy to add on
the ability and a plugin to do something like importing base os installs.

The blueprint is
https://blueprints.launchpad.net/glance/+spec/new-upload-workflow. It is
currently targeted for Havana-3 (but is probably the first blueprint on the
chopping block due to other dependencies that have not yet landed).
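
To give a feel for what "plugin-based" means here, a base-install plugin
might look something like the following (hypothetical interface and names;
the real one is whatever the new-upload-workflow code settles on):

    import abc

    class ImportPlugin(abc.ABC):
        """Hook invoked while an asynchronous image import task runs."""

        @abc.abstractmethod
        def fetch(self, task_input):
            """Produce image bytes from the task's input description."""

    class BaseOsInstallPlugin(ImportPlugin):
        def fetch(self, task_input):
            # Run a native OS installer against the install tree named in
            # the task input and return the resulting disk image bytes.
            return run_native_installer(task_input['install_tree_url'])

    def run_native_installer(install_tree_url):
        """Stand-in for the heavy lifting (booting an installer, etc.)."""
        raise NotImplementedError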

I think this approach probably makes more sense than putting the code
directly into Nova. But overall, I'm somewhat in favor of keeping this
feature out of core OpenStack projects for now. It feels niche enough that
it could live as its own project without burdening its users--most folks
who build base images are probably operators anyway and can deploy extra
tooling. And I think there are a number of other tools that make it easier
for the smallest shops to build base images (?)

I don't buy the argument about it being a lot more work to implement this
feature outside of OpenStack. Not that the argument is false, but the
concern seems minor compared to the cost of weighing down core with yet
another feature. From where I'm sitting, OpenStack is still in the "too
many features coming too fast" regime and architecture hasn't caught up. So
putting on the brakes wherever possible seems like the wisest course.


On Thu, Aug 8, 2013 at 7:33 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Thu, Aug 08, 2013 at 09:28:51AM -0500, Ian McLeod wrote:
  On Wed, 2013-08-07 at 12:05 -0400, Russell Bryant wrote:
   On 08/07/2013 10:46 AM, Daniel P. Berrange wrote:
    On Tue, Aug 06, 2013 at 12:20:00PM -0400, Russell Bryant wrote:
     On 08/06/2013 11:53 AM, Ian Mcleod wrote:
      Hello,

      A blueprint has been registered regarding API additions to Nova
      to enable the creation of base images from external OS install
      sources. This provides a way to build images from scratch via
      native OS installer tools using only the resources provided
      through Nova. These images can then be further customized by
      other tools that expect an existing image as an input, such as
      disk image builder.

      Blueprint -
      https://blueprints.launchpad.net/nova/+spec/base-image-creation

      Specification -
      https://wiki.openstack.org/wiki/NovaImageCreationAPI

      If this is a topic that interests you, please have a look (the
      spec is not very long) and join the conversation.

      Please note that this blueprint follows on from proof of concept
      work for native image building discussed on this list in April:
      http://lists.openstack.org/pipermail/openstack-dev/2013-April/007157.html

     Thanks for the update on this work.

     I see that your proof of concept shows how this can work as a
     tool outside of Nova:

     https://github.com/redhat-openstack/image-building-poc

     So, my biggest question is whether or not it makes sense for this
     to be a Nova feature or not. If something can be implemented as a
     consumer of Nova, my default answer is that it should stay
     outside of nova until I am convinced otherwise. :-)

     It sounds like this is mostly an extension to nova that
     implements a series of operations that can be done just as well
     outside of Nova. Are there enhancements you are making or
     scenarios that won't work at all unless it lives inside of Nova?

     If it doesn't end up on the server side, it could potentially be
     implemented as an extension to novaclient.

    I think the key thing is that we want to make sure we don't have
    all client apps having to re-invent the wheel. The way the proof
    of concept was done as a standalone tool would entail such wheel
    re-invention by any frontend to Nova like the 'nova' cli and the
    Horizon dashboard. Possibly you could mitigate that if you could
    actually put all the smarts in the python-novaclient library API
    so it was shared by all frontend apps.

   Yeah, I was thinking python-novaclient. The 'nova' command line
   tool is just a wrapper around the library portion.
  
    IIUC, though, there is some state information that it is desirable
    to maintain while the images are being built. You'd probably want
    such state visible to all clients talking to the same nova instance,
    not hidden away on the client side where it's only visible to that
    single frontend instance.
  
   That's an interesting point.  Do we care about having an image build
   executed from the command line show up in the UI as an image build?
   Right now it's just going to be another nova instance.  We have to do
   it on the server side to do any better.  I'm not even sure it could
   integrate well with Horizon doing it in python-novaclient.  I don't
   think you could start another session and see the operation in
   progress.
 
  Perhaps it's worth revisiting the basic 

Re: [openstack-dev] Validating Flavor IDs

2013-08-06 Thread Mark Washenberger
It seems like this is a bug in python-novaclient. I believe the recent
change to enforce that flavor ids are either int-like or uuid-like may have
been made in error. At minimum, I believe it is backwards-incompatible,
despite being part of a minor point release (changed from 2.13 to 2.14).

See https://review.openstack.org/#/c/29086/ for the review where this
behavior was changed.

See also https://bugs.launchpad.net/python-novaclient/+bug/1209060
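
The check at issue is roughly the following (a paraphrase, not the exact
novaclient code):

    import uuid

    def is_integer_like(val):
        try:
            int(val)
            return True
        except (TypeError, ValueError):
            return False

    def is_uuid_like(val):
        try:
            uuid.UUID(val)
            return True
        except (TypeError, ValueError, AttributeError):
            return False

    def validate_flavor_id(flavor_id):
        # Plain strings such as 'performance-1' fail here even though
        # Nova's own API accepts them -- hence the incompatibility.
        if not (is_integer_like(flavor_id) or is_uuid_like(flavor_id)):
            raise ValueError("flavor id must be int-like or uuid-like")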



On Thu, Jul 25, 2013 at 10:36 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 The data type is string.

 Vish
 On Jul 24, 2013, at 1:41 AM, Karajgi, Rohit rohit.kara...@nttdata.com
 wrote:

  Hi,
 
  Referring to https://bugs.launchpad.net/nova/+bug/1202136, it seems that
  the novaclient validates the flavor ID to be either an integer or a UUID
  string. This check does not exist in Nova itself, so currently arbitrary
  strings are also accepted as flavor IDs by Nova when direct RESTful API
  calls are made.
 
  What should the data type of a flavor's ID be?
 
  -Rohit
 


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-05 Thread Mark Washenberger
On Mon, Aug 5, 2013 at 7:26 AM, John Garbutt j...@johngarbutt.com wrote:

 On 3 August 2013 03:07, Christopher Yeoh cbky...@gmail.com wrote:
  Some people had concerns about exposing the glance api publicly and so
  wanted to retain the images support in Nova.
  So the consensus seemed to be to leave the images support in, but to
  demote it from core. So people who don't want it can exclude the
  os-images extension.

 I think a lot of the concern was around RBAC, but seems most of that
 will be fixed by the end of Havana:
 https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection


I don't think this is a big issue. The RBAC approach to properties is just
an attempt to formalize what large public clouds are already doing in their
forks to manage info about image billing. It's not really a critical blocker
for public adoption.




 Given v3 will not be finished till Icehouse, maybe we should look
 at removing the os-images extension for now, and putting it back in for
 Icehouse if it causes people real headaches?

  Just as I write this I've realised that the servers api currently
  returns links to the image used for the instance. And that won't be
  valid if the images extension is not loaded. So we probably have some
  work to do there to support that properly.

 Have we decided a good strategy for this in v3? Referring to image in
 glance, and networks and ports in neutron.

 The pragmatic part of me says:
 * just use the uuid, it's what the users will input when booting servers

 But I wonder if a REST purist would say:
 * an image is a REST resource, so we should have a URL pointing to the
 exposed glance service?


 What do you think? I just want to make sure we make a deliberate choice.

 John
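
Concretely, the two options above would surface in a server representation
roughly like this (illustrative values only):

    # Option 1 (pragmatic): the bare uuid users already type at boot time.
    server_image_as_id = {
        "image": "c54256c4-00f3-46bd-9dca-d30a764e0b4c",
    }

    # Option 2 (REST purist): a link pointing at the glance service itself.
    server_image_as_link = {
        "image": {
            "href": "http://glance.example.com/v2/images/"
                    "c54256c4-00f3-46bd-9dca-d30a764e0b4c",
        }
    }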



[openstack-dev] [Glance] Reminder: This Week's Meeting

2013-07-31 Thread Mark Washenberger
Hi folks,

We will be having a team meeting for Glance in #openstack-meeting-alt at
14:00 UTC Thursday August 1st--that's our early time slot. All are welcome
to attend.

markwash


Re: [openstack-dev] [Glance] property protections -- final call for comments

2013-07-26 Thread Mark Washenberger
On Fri, Jul 26, 2013 at 9:56 AM, stuart.mcla...@hp.com wrote:

 Hi Brian,

 Firstly, thanks for all your great work here!

 Some feedback:

 1) Is there a clash with existing user properties?

 For currently deployed systems a user may have an existing property 'foo:
 bar'.
 If we restrict property access (by virtue of allowing only owner_xxx)
 can the user update this previously existing property?


No, a user would not be able to update the previously existing property.
However, I do not view requiring owner_ as a prefix for generic metadata
properties to be the typical use case, so I am not concerned about this
conflict. Those who wish to take on the extra responsibility of completely
isolating owner metadata into a prefix may also take on the responsibility
of migrating existing general properties to that prefix.



 2) A nice feature of this scheme is that the cloud provider can pick an
 arbitrary
 informal namespace for this purpose and educate users appropriately.

 How about having the user properties area be always the same?
 It would be more consistent/predictable -- is there a down side?


I'm not sure that the need is great enough--the downside is that this user
properties area may not be appropriate for a majority of deployers.


 3) We could potentially link roles to the regex,
 e.g. this could allow role1_xxx to be writable only if you have 'role1'.
 By assigning appropriate roles (com.provider/com.partner/nova?) you
 could provide the ability to write to that prefix without config file
 changes.
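
A rough sketch of that idea (hypothetical rule table and names, not a
proposed config format for Glance):

    import re

    # Map a property-name regex to the roles allowed to write matches.
    WRITE_RULES = [
        (re.compile(r'^role1_'), {'role1'}),
        (re.compile(r'^com\.provider/'), {'provider'}),
    ]

    def can_write(prop_name, user_roles):
        for pattern, allowed_roles in WRITE_RULES:
            if pattern.match(prop_name):
                return bool(allowed_roles & set(user_roles))
        return True  # properties matching no rule stay writable

    assert can_write('role1_billing_code', ['role1'])
    assert not can_write('role1_billing_code', ['member'])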

 Thanks,

 -Stuart

  After lots of discussion, I think we've come to a consensus on what
 property protections should look like in Glance.  Please reply with
 comments!

 The blueprint:
 https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection

 The full specification:
 https://wiki.openstack.org/wiki/Glance-property-protections
 (it's got a Prior Discussion section with links to the discussion
 etherpads)

 A product approach to describing the feature:
 https://wiki.openstack.org/wiki/Glance-property-protections-product

 cheers,
 brian




Re: [openstack-dev] [glance] need to pin jsonschema version for glance?

2013-07-17 Thread Mark Washenberger
On Wed, Jul 17, 2013 at 7:16 AM, Matt Riedemann mrie...@us.ibm.com wrote:

 I recently synched up on the latest glance and ran tempest on my RHEL 6.3
 box and the image v2 tests all started failing due to json schema
 validation errors:

 http://paste.openstack.org/show/40684/

 I found that the version of jsonschema on the system is 0.7, probably
 because of the dependency from warlock in python-glanceclient:

 https://github.com/openstack/python-glanceclient/blob/master/requirements.txt#L8

 I started looking at what recent changes in glance might be causing the
 issue and I found this one:

 https://review.openstack.org/#/c/35134/

 As pointed out in the test output from that patch, since there is no
 version constraint on jsonschema in glance or tempest, it's getting the
 latest version from pypi (2.0.0 in this case).

 When I updated my test box to jsonschema 1.3.0, I got past the schema
 validation error.

 So this leads me to believe that we need to pin the jsonschema version
 in glance and tempest to >= 1.3.0.

 Thoughts?


This sounds correct. Another alternative would be to switch back to the
old syntax and pin jsonschema < 1.3.0, which sounds like it's not really
forward progress, but might be easier.
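
For reference, the syntax difference in question is roughly the draft-3
vs. draft-4 handling of "required" (my recollection of the jsonschema
changelog -- worth double-checking):

    # Draft 3 ("old syntax"): required is a boolean on each property.
    draft3_schema = {
        "type": "object",
        "properties": {"name": {"type": "string", "required": True}},
    }

    # Draft 4: required is a list on the object itself; jsonschema grew
    # Draft4Validator support in its 1.x series, hence the >= 1.3.0 pin.
    draft4_schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
    }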





 Thanks,

 *MATT RIEDEMANN*
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development
 --
  *Phone:* 1-507-253-7622 | *Mobile:* 1-507-990-1889
  *E-mail:* mrie...@us.ibm.com

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States





