Re: [openstack-dev] [nova][glance] Format of 'locations' data in image metadata ?

2015-05-20 Thread Zhi Yan Liu
On Wed, May 20, 2015 at 5:06 PM, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, May 20, 2015 at 12:01:37AM +0200, Flavio Percoco wrote:
 On 19/05/15 17:19 +0100, Daniel P. Berrange wrote:
 In Nova we are attempting to model[1] the glance image metadata and
 properties using the Nova object model (now oslo.versionedobjects).
 
 The one item I'm stuck on understanding is the 'locations' field
 and more specifically the 'metadata' element in each location
 entry
 
 
 In the file glance/api/v2/images.py I can see this description
 of the data format:
 
'locations': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'url': {
'type': 'string',
'maxLength': 255,
},
'metadata': {
'type': 'object',
},
},
'required': ['url', 'metadata'],
},
'description': _('A set of URLs to access the image file kept in '
                  'external store'),
 
 
 As you can see here, 'metadata' is just said to be of type 'object'.
 
 Is there somewhere that actually describes what is valid contents
 for this field ? Is it sufficient to assume the metadata will only
 ever be a dict of strings, or can the metadata be a complex type
 with arbitrarily nested data structures ?

 It's just arbitrary metadata for now, we don't have a specific format.
 I'm curious to know if there are folks using this field. We do (did)
 have a use case for it.

 Yep, I'd be curious to understand just what it is used for in practice ?
 Is the data to be stored in there determined by python code, or by the
 local administrator or both ?


 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Yes, it is determined by Python code in Nova, as part of the image
download plugin, and the administrator needs to prepare it based on the
particular deployment environment as well. One current usage is to
accelerate image download from an NFS store to the Nova compute node;
at the moment there is only one such plugin in the Nova upstream tree
[0]. (From the logic in _file_system_lookup(), I think a predefined
'id' is needed in the metadata of the location entry.)

[0] 
https://github.com/openstack/nova/blob/master/nova/image/download/file.py#L150
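To make that concrete, here is a hedged sketch of what a 'locations' entry with such metadata might look like. The metadata keys ('id', 'mountpoint') are assumptions drawn from the filesystem store driver, not a documented contract:

```python
# Illustrative only: a possible 'locations' entry as exposed by the
# Glance v2 API. The metadata field names are assumptions based on the
# filesystem store; there is no authoritative format.
location = {
    'url': 'file:///var/lib/glance/images/3b8f6a80',
    'metadata': {
        'id': 'shared_fs',                       # looked up by the download plugin
        'mountpoint': '/var/lib/glance/images',  # where the store is mounted locally
    },
}

# Minimal checks mirroring the JSON schema quoted above.
assert isinstance(location['url'], str) and len(location['url']) <= 255
assert isinstance(location['metadata'], dict)
```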

zhiyan



Re: [openstack-dev] [nova][glance] Format of 'locations' data in image metadata ?

2015-05-20 Thread Zhi Yan Liu

Btw, for your question:

 Is there somewhere that actually describes what is valid contents
 for this field ? Is it sufficient to assume the metadata will only
 ever be a dict of strings, or can the metadata be a complex type
 with arbitrarily nested data structures ?

For the current Nova in-tree image download plugin ([0] above), the
schema of the location metadata should be this:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/filesystem.py#L72
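As a rough illustration of that schema (the required 'id' and 'mountpoint' string fields are my reading of the linked source, so treat them as assumptions), a loose validator might look like:

```python
def looks_like_fs_metadata(metadata):
    """Loosely check a location 'metadata' dict against the shape the
    filesystem driver appears to expect: string 'id' and 'mountpoint'."""
    if not isinstance(metadata, dict):
        return False
    return all(isinstance(metadata.get(key), str)
               for key in ('id', 'mountpoint'))

assert looks_like_fs_metadata({'id': 'store_1', 'mountpoint': '/mnt/nfs'})
assert not looks_like_fs_metadata({'id': 'store_1'})  # missing mountpoint
```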

zhiyan



Re: [openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-07 Thread Zhi Yan Liu
I really like this idea; async calls will definitely improve the
overall performance of the cloud control system. In Nova (and other
components) there are slow tasks that handle resources over long
running times, which makes new tasks see a huge delay before getting
served, especially under highly concurrent requests (e.g. provisioning
hundreds of VMs) combined with high-latency resource-handling
operations.

To really achieve the goal of improving overall system performance, I
think the biggest challenge is that all cross-component operations in
the handling pipeline must be asynchronous. A single synchronous
operation in the call path keeps the whole workflow synchronous: the
system still has to wait for that operation to finish, so the
delay/waiting remains. And this kind of synchronous operation is very
common around resource handling today.

thanks,
zhiyan

On Thu, May 7, 2015 at 6:05 PM, ozamiatin ozamia...@mirantis.com wrote:
 Hi,

 I generally like the idea of async CALL. Is there a place in Nova (or
 other services) where the new CALL may be applied to see advantage?

 Thanks,
 Oleksii Zamiatin

 07.05.15 12:34, Sahid Orentino Ferdjaoui wrote:

 Hi,

 The primary point of this expected discussion around asynchronous
 communication is to optimize performance by reducing latency.

 For instance, the design used in Nova (and probably other projects)
 allows asynchronous operations in two ways:

 1. When communicating between services
 2. When communicating with the database

 1 and 2 are close since they use the same API, but I prefer to keep a
 distinction here since the high-level layer is not the same.

  From the Oslo Messaging point of view we currently have two methods
 to invoke an RPC:

    Cast and Call: the first is non-blocking and will invoke an RPC
    without waiting for any response, while the second will block the
    process and wait for the response.

 The aim is to add a new method which returns, without blocking the
 process, an object, let's call it Future, which will provide some
 basic methods to wait for and get the response at any time.

 The benefit for Nova will come at a higher level:

 1. When communicating between services it will not be necessary to
 block the process, and that free time can be used to execute some
 other computations.

future = rpcapi.invoke_long_process()
   ... do something else here ...
result = future.get_response()

 2. We can build on all of the work previously done with the
 Conductor, and so by updating the framework Objects and Indirection
 API we should be able to take advantage of async operations to the
 database.

 MyObject = MyClassObject.get_async()
   ... do something else here ...
 MyObject.wait()

 MyObject.foo = bar
 MyObject.save_async()
   ... do something else here ...
 MyObject.wait()

 All of this is meant to illustrate and has to be discussed.
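The blocking-vs-future distinction sketched above can be demonstrated with the standard library's concurrent.futures. This is purely illustrative; oslo.messaging's eventual API would differ, and invoke_long_process is a made-up stand-in:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def invoke_long_process():
    # Stand-in for a slow RPC round trip.
    time.sleep(0.05)
    return 'rpc-result'

executor = ThreadPoolExecutor(max_workers=1)

# Today's 'call': blocks until the response arrives.
blocking_result = invoke_long_process()

# Proposed async call: returns a future immediately...
future = executor.submit(invoke_long_process)
# ... do something else here ...
result = future.result()  # block only when the response is needed
executor.shutdown()

assert blocking_result == result == 'rpc-result'
```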

 I guess the first job needs to come from Oslo Messaging so the
 question is to know the feeling here and then from Nova since it will
 be the primary consumer of this feature.

 https://blueprints.launchpad.net/nova/+spec/asynchronous-communication

 Thanks,
 s.



Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-12 Thread Zhi Yan Liu
I'd prefer 1400UTC.

zhiyan

On Mon, Mar 9, 2015 at 4:07 AM, Nikhil Komawar
nikhil.koma...@rackspace.com wrote:

 Hi all,


 Currently, we have alternating times for Glance meetings. Now, with
 Daylight Saving Time being implemented in some parts of the world,
 we're thinking of moving the meeting to just one slot, i.e. earlier in
 the day (night). This solves the original conflicting-times issue that
 a subset of individuals had; in addition, the schedule is less
 confusing and unified.


 So, the new proposal is:

 Glance meetings [1] to be conducted weekly on Thursdays at 1400UTC [2] on
 #openstack-meeting-4


 This would be implemented on Mar 19th, given there are no major objections.


 Please vote with +1/-1 here.


 [1] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting

 [2] http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0


 Thanks,
 -Nikhil



Re: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled'

2014-12-02 Thread Zhi Yan Liu
I totally agree with making it consistent across all projects, so I
propose changing the other projects instead.

But I think keeping it as-is is clear enough for both developers and
operators/configuration, for example:

[profiler]
enabled = True

instead of:

[profiler]
profiler_enabled = True

Tbh, the "profiler" prefix is still redundant to me from the
perspective of the operator/configuration.
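To see the two spellings side by side from the operator's seat, here is a tiny stdlib-only sketch; configparser stands in for oslo.config, and the option names are simply the ones debated in this thread:

```python
import configparser

glance_style = "[profiler]\nenabled = True\n"
proposed_style = "[profiler]\nprofiler_enabled = True\n"

cfg = configparser.ConfigParser()
cfg.read_string(glance_style)
assert cfg.getboolean('profiler', 'enabled') is True

cfg = configparser.ConfigParser()
cfg.read_string(proposed_style)
# The section name already says "profiler", hence the redundancy argument.
assert cfg.getboolean('profiler', 'profiler_enabled') is True
```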

zhiyan


On Tue, Dec 2, 2014 at 7:44 PM, Louis Taylor krag...@gmail.com wrote:
 On Tue, Dec 02, 2014 at 12:16:44PM +0800, Zhi Yan Liu wrote:
 Why not change the other services instead of glance? I see one
 reason: glance is the only service using this option name. But to me,
 one reason to keep it as-is in glance is that the original name makes
 more sense, since the option is already under the "profiler" group;
 adding a "profiler" prefix to it is really redundant, IMO, and no
 other existing config group follows that naming approach. Then in the
 code we can just use a clear form:

 CONF.profiler.enabled

 instead of:

 CONF.profiler.profiler_enabled

 thanks,
 zhiyan

 I agree this looks nicer in the code. However, the primary consumer of this
 option is someone editing it in the configuration files. In this case, I
 believe having something more verbose and consistent is better than the Glance
 code being slightly more elegant.

 One name or the other doesn't make all that much difference, but
 consistency in how we turn osprofiler on and off across projects would
 be best.

 - Louis

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread Zhi Yan Liu
Greetings,

On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
 Greetings,

 Back in Havana, a partially-implemented [0][1] Cinder driver was merged
 in Glance to provide an easier and hopefully more consistent interaction
 between glance, cinder and nova when it comes to manage volume images
 and booting from volumes.

With my idea, it is not only for the VM provisioning and consumption
feature but also for implementing a consistent and unified block
storage backend for the image store. For historical reasons, we have
implemented a lot of duplicated block storage drivers between glance
and cinder. IMO, cinder can be regarded as a full-functional block
storage backend from OpenStack's perspective (I mean it contains both
the data and control planes), so glance could just leverage cinder as a
unified block storage backend. Essentially, Glance has two kinds of
drivers: block storage drivers and object storage drivers (e.g. the
swift and s3 drivers). From that viewpoint, I consider giving glance a
cinder driver very sensible: it could provide a unified and consistent
way to access different kinds of block backends instead of implementing
duplicated drivers in both projects.

I know some people don't like to see similar drivers implemented in
different projects again and again, but at least I think this one is a
harmless and beneficial feature/driver.


 While I still don't fully understand the need of this driver, I think
 there's a bigger problem we need to solve now. We have a partially
 implemented driver that is almost useless and it's creating lots of
 confusion in users that are willing to use it but keep hitting 500
 errors because there's nothing they can do with it except for creating
 an image that points to an existing volume.

 I'd like us to discuss what the exact plan for this driver moving
 forward is, what is missing and whether it'll actually be completed
 during Kilo.

I'd like to enhance the cinder driver, of course, but currently it is
blocked on one thing: it needs a proper, community-accepted way [0] to
access volumes from Glance (for both the data and control planes, e.g.
creating an image and uploading bits). During the H cycle I was told
cinder would soon release a separate lib, called Brick [1], which could
be used by other projects to allow them to access volumes directly from
cinder, but it seems it still isn't ready to use even now. But anyway,
we can talk with the cinder team about getting Brick moving forward.

[0] https://review.openstack.org/#/c/20593/
[1] https://wiki.openstack.org/wiki/CinderBrick

I'd really appreciate it if somebody could show me a clear plan/status
for CinderBrick; I still think it's a good way to go for the glance
cinder driver.


 If there's a slight chance it won't be completed in Kilo, I'd like to
 propose getting rid of it - with a deprecation period, I guess - and
 giving it another chance in the future when it can be fully implemented.

 [0] https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
 [1] https://review.openstack.org/#/c/32864/


It obviously depends, according to my above information, but I'd like to try.

zhiyan

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread Zhi Yan Liu
Replied inline.

On Wed, Oct 22, 2014 at 9:33 PM, Flavio Percoco fla...@redhat.com wrote:
 On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
 Greetings,

 On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
 Greetings,

 Back in Havana a, partially-implemented[0][1], Cinder driver was merged
 in Glance to provide an easier and hopefully more consistent interaction
 between glance, cinder and nova when it comes to manage volume images
 and booting from volumes.

 With my idea, it is not only for the VM provisioning and consumption
 feature but also for implementing a consistent and unified block
 storage backend for the image store. For historical reasons, we have
 implemented a lot of duplicated block storage drivers between glance
 and cinder. IMO, cinder can be regarded as a full-functional block
 storage backend from OpenStack's perspective (I mean it contains both
 the data and control planes), so glance could just leverage cinder as
 a unified block storage backend. Essentially, Glance has two kinds of
 drivers: block storage drivers and object storage drivers (e.g. the
 swift and s3 drivers). From that viewpoint, I consider giving glance a
 cinder driver very sensible: it could provide a unified and consistent
 way to access different kinds of block backends instead of
 implementing duplicated drivers in both projects.

 Let me see if I got this right. You're suggesting having a cinder
 driver in Glance so we can basically remove the
 'create-volume-from-image' functionality from Cinder. Is this right?


I don't think we need to remove any feature that is an existing,
reasonable use case from the end user's perspective;
'create-volume-from-image' is a useful function and needs to stay as-is
to me. But I think we could change some of the internal implementation
if we have a cinder driver for glance. E.g. for this use case, if
glance already stores the image as a volume, then cinder can create the
volume efficiently by leveraging that capability of the backend
storage. I think this case is just like what ceph currently does in
this situation (so, a duplication example again).

 I know some people don't like to see similar drivers implemented in
 different projects again and again, but at least I think this one is
 a harmless and beneficial feature/driver.

 It's not as harmless as it seems. There are many users confused as to
 what the use case of this driver is. For example, should users create
 volumes from images? Or should they create images that are then stored
 in a volume? What's the difference?

I'm not sure I understood all the concerns from those folks, but for
your examples, one key reason I see is that they are still thinking
about it in too technical a way. I mean, create-image-from-volume and
create-volume-from-image are useful and reasonable _use cases_ from the
end user's perspective, because volume and image are totally different
concepts for the end user in a cloud context (at least, in the
OpenStack context). The benefit/purpose of leveraging a cinder
store/driver in glance is not to change those concepts and existing use
cases for the end user/operator, but to help us implement those
features efficiently inside glance and cinder, including, IMO, lowering
the duplication as much as possible, as I mentioned before. So, in
short, I see the impact of this idea at the _implementation_ level,
not at the exposed _use case_ level.


 Technically, the answer is probably none, but from a deployment and
 usability perspective, there's a huge difference that needs to be
 considered.

As per my explanations above, IMO, this driver/idea couldn't (and
shouldn't) break existing concepts and use cases for the end
user/operator, but if I'm still missing something please let me know.

zhiyan


 I'm not saying it's a bad idea, I'm just saying we need to get this
 story straight and probably just pick one (? /me *shrugs*)

 While I still don't fully understand the need of this driver, I think
 there's a bigger problem we need to solve now. We have a partially
 implemented driver that is almost useless and it's creating lots of
 confusion in users that are willing to use it but keep hitting 500
 errors because there's nothing they can do with it except for creating
 an image that points to an existing volume.

 I'd like us to discuss what the exact plan for this driver moving
 forward is, what is missing and whether it'll actually be completed
 during Kilo.

 I'd like to enhance the cinder driver, of course, but currently it is
 blocked on one thing: it needs a proper, community-accepted way [0] to
 access volumes from Glance (for both the data and control planes,
 e.g. creating an image and uploading bits). During the H cycle I was
 told cinder would soon release a separate lib, called Brick [1],
 which could be used by other projects to allow them to access volumes
 directly from cinder, but it seems it still isn't ready to use even
 now. But anyway, we can talk with the cinder team about getting Brick
 moving forward.

 [0] https://review.openstack.org/#/c/20593/
 [1] https://wiki.openstack.org/wiki/CinderBrick

 I really appreciated

Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules

2014-10-16 Thread Zhi Yan Liu
Thanks Doug for your reminder/message!

Filed https://bugs.launchpad.net/glance/+bug/1381870 for the glance stuff.

btw, I have prepared three patches to fix this defect; any input is
welcome:

https://review.openstack.org/#/c/127487/
https://review.openstack.org/#/c/127923/
https://review.openstack.org/#/c/128837/

zhiyan

On Tue, Oct 14, 2014 at 4:54 AM, Nikhil Manchanda nik...@manchanda.me wrote:

 Thanks for putting this together Doug!

 I've opened https://bugs.launchpad.net/trove/+bug/1380789 to track the
 changes that are needed here for Trove.

 Cheers,
 Nikhil


 Doug Hellmann writes:

 I’ve put together a little script to generate a report of the projects using 
 modules that used to be in the oslo-incubator but that have moved to 
 libraries [1]. These modules have been deleted, and now only exist in the 
 stable/juno branch of the incubator. We do not anticipate back-porting fixes 
 except for serious security concerns, so it is important to update all 
 projects to use the libraries where the modules now live.

 Liaisons, please look through the list below and file bugs against your 
 project for any changes needed to move to the new libraries and start 
 working on the updates. We need to prioritize this work for early in Kilo to 
 ensure that your projects do not fall further out of step. K-1 is the ideal 
 target, with K-2 as an absolute latest date. I anticipate having several 
 more libraries by the time the K-2 milestone arrives.

 Most of the porting work involves adding dependencies and updating import 
 statements, but check the documentation for each library for any special 
 guidance. Also, because the incubator is updated to use our released 
 libraries, you may end up having to port to several libraries *and* sync a 
 copy of any remaining incubator dependencies that have not graduated all in 
 a single patch in order to have a working copy. I suggest giving your review 
 teams a heads-up about what to expect to avoid -2 for the scope of the patch.
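Mechanically, much of the porting is import rewriting. Here is a toy sketch of that step; the incubator-to-library mapping entries are illustrative examples only, not a complete or guaranteed-accurate list:

```python
import re

# Example mappings only; verify each module's real new home in the
# library docs before porting.
MAPPING = {
    'glance.openstack.common.jsonutils': 'oslo.serialization.jsonutils',
    'glance.openstack.common.timeutils': 'oslo.utils.timeutils',
}

def rewrite_import(line):
    """Rewrite 'from <pkg> import <mod>' if <pkg>.<mod> has graduated."""
    m = re.match(r'from (\S+) import (\w+)', line)
    if not m:
        return line
    dotted = '%s.%s' % (m.group(1), m.group(2))
    new = MAPPING.get(dotted)
    if new is None:
        return line
    pkg, _, mod = new.rpartition('.')
    return 'from %s import %s' % (pkg, mod)

assert (rewrite_import('from glance.openstack.common import jsonutils')
        == 'from oslo.serialization import jsonutils')
assert rewrite_import('import os') == 'import os'  # untouched
```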

 Doug


 [1] https://review.openstack.org/#/c/127039/


 openstack-dev/heat-cfnclient: exception
 openstack-dev/heat-cfnclient: gettextutils
 openstack-dev/heat-cfnclient: importutils
 openstack-dev/heat-cfnclient: jsonutils
 openstack-dev/heat-cfnclient: timeutils

 openstack/ceilometer: gettextutils
 openstack/ceilometer: log_handler

 openstack/python-troveclient: strutils

 openstack/melange: exception
 openstack/melange: extensions
 openstack/melange: utils
 openstack/melange: wsgi
 openstack/melange: setup

 openstack/tuskar: config.generator
 openstack/tuskar: db
 openstack/tuskar: db.sqlalchemy
 openstack/tuskar: excutils
 openstack/tuskar: gettextutils
 openstack/tuskar: importutils
 openstack/tuskar: jsonutils
 openstack/tuskar: strutils
 openstack/tuskar: timeutils

 openstack/sahara-dashboard: importutils

 openstack/barbican: gettextutils
 openstack/barbican: jsonutils
 openstack/barbican: timeutils
 openstack/barbican: importutils

 openstack/kite: db
 openstack/kite: db.sqlalchemy
 openstack/kite: jsonutils
 openstack/kite: timeutils

 openstack/python-ironicclient: gettextutils
 openstack/python-ironicclient: importutils
 openstack/python-ironicclient: strutils

 openstack/python-melangeclient: setup

 openstack/neutron: excutils
 openstack/neutron: gettextutils
 openstack/neutron: importutils
 openstack/neutron: jsonutils
 openstack/neutron: middleware.base
 openstack/neutron: middleware.catch_errors
 openstack/neutron: middleware.correlation_id
 openstack/neutron: middleware.debug
 openstack/neutron: middleware.request_id
 openstack/neutron: middleware.sizelimit
 openstack/neutron: network_utils
 openstack/neutron: strutils
 openstack/neutron: timeutils

 openstack/tempest: importlib

 openstack/manila: excutils
 openstack/manila: gettextutils
 openstack/manila: importutils
 openstack/manila: jsonutils
 openstack/manila: network_utils
 openstack/manila: strutils
 openstack/manila: timeutils

 openstack/keystone: gettextutils

 openstack/python-glanceclient: importutils
 openstack/python-glanceclient: network_utils
 openstack/python-glanceclient: strutils

 openstack/python-keystoneclient: jsonutils
 openstack/python-keystoneclient: strutils
 openstack/python-keystoneclient: timeutils

 openstack/zaqar: config.generator
 openstack/zaqar: excutils
 openstack/zaqar: gettextutils
 openstack/zaqar: importutils
 openstack/zaqar: jsonutils
 openstack/zaqar: setup
 openstack/zaqar: strutils
 openstack/zaqar: timeutils
 openstack/zaqar: version

 openstack/python-novaclient: gettextutils

 openstack/ironic: config.generator
 openstack/ironic: gettextutils

 openstack/cinder: config.generator
 openstack/cinder: excutils
 openstack/cinder: gettextutils
 openstack/cinder: importutils
 openstack/cinder: jsonutils
 openstack/cinder: log_handler
 openstack/cinder: network_utils
 openstack/cinder: strutils
 openstack/cinder: timeutils
 openstack/cinder: units

 openstack/python-manilaclient: gettextutils
 

Re: [openstack-dev] [Glance] PTL Non-Candidacy

2014-09-24 Thread Zhi Yan Liu
Hi Mark,

Many thanks for your great work and leadership! Personally, I have to
say thank you for mentoring me. Let's keep in touch in
Glance/OpenStack.

zhiyan

On Tue, Sep 23, 2014 at 1:22 AM, Mark Washenberger
mark.washenber...@markwash.net wrote:
 Greetings,

 I will not be running for PTL for Glance for the Kilo release.

 I want to thank all of the nice folks I've worked with--especially the
 attendees and sponsors of the mid-cycle meetups, which I think were a major
 success and one of the highlights of the project for me.

 Cheers,
 markwash



Re: [openstack-dev] Glance Store Future

2014-08-03 Thread Zhi Yan Liu
Two inline replies below; throwing out my quick thoughts.

On Sat, Aug 2, 2014 at 4:41 AM, Jay Pipes jaypi...@gmail.com wrote:
 cc'ing ML since it's an important discussion, IMO...


 On 07/31/2014 11:54 AM, Arnaud Legendre wrote:

 Hi Jay,

 I would be interested if you could share your point of view on this
 item: we want to make the glance stores a standalone library
 (glance.stores) which would be consumed directly by Nova and Cinder.


 Yes, I have been enthusiastic about this effort for a long time now :) In
 fact, I have been pushing a series of patches (most merged at this point) in
 Nova to clean up the (very) messy nova.image.glance module and standardize
 the image API in Nova.

 The messiest part of the current image API in Nova, by far, is the
 nova.image.glance.GlanceImageService.download() method, which you highlight
 below. The reason it is so messy is that the method does different things
 (and returns different things!) depending on how you call it and what
 arguments you provide. :(


 I think it would be nice to get your pov since you worked a lot on
 the Nova image interface recently. To give you an example:

 Here
 https://github.com/openstack/nova/blob/master/nova/image/glance.py#L333,
  we would do:

 1. location = get_image_location(image_id),
 2. get(location) from the
 glance.stores library like for example rbd
 (https://github.com/openstack/glance/blob/master/glance/store/rbd.py#L206)


 Yup. Though I'd love for this code to live in oslo, not glance...

 Plus, I'd almost prefer to see an interface that hides the location URIs
 entirely and makes the discovery of those location URIs entirely
 encapsulated within glance.store. So, for instance, instead of getting the
 image location using a call to glanceclient.show(), parsing the locations
 collection from the v2 API response, and passing that URI to the
 glance.store.get() function, I'd prefer to see an interface more like this:

 ```python
 # This code would go in a new nova.image.API.copy() method:
 import io

 from oslo.image import move
 from oslo.image.move import exception as mexc

 from nova import exception as exc

 ...
 def copy(image_id_or_uri, stream_writer):
     try:
         config = {
             # Some Nova CONF options...
         }
         mover = move.Mover(image_id_or_uri, config)
         success, bytes_written = mover.copy(stream_writer)
         if success:
             if bytes_written == 0:
                 LOG.info("Copied image %s using zero-copy "
                          "transfer.", image_id_or_uri)
             else:
                 LOG.info("Copied image %s using standard "
                          "filesystem copy. Copied %d bytes.",
                          image_id_or_uri, bytes_written)
         return success
     except mexc.ImageNotFound:
         raise exc.NotFound(...)
     except mexc.ImageInvalidApi:
         # Fall back to pull image from Glance
         # API server via HTTP and write to disk
         # via the stream_writer argument's write()
         # interface... and return True or False
         # depending on whether write()s succeeded
 ```


This idea looks neater, but I'm a little worried about the
implementation, since most CoW-based zero-copy and smart full-copy
approaches need to leverage capabilities of the particular storage
(e.g. ceph) and/or hypervisor (e.g. vsphere). So IMHO we almost can't
build a consistent zero-copy or smart full-copy/transfer logic and
encapsulate it in glance.store (or oslo.image) separated from the
specific hypervisor and storage context, unless we implement the
necessary functions the hypervisor and storage need inside the lib
itself.
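The concern above can be sketched as a capability check: a generic mover could only take the zero-copy path when the specific backend supports it, and must otherwise fall back to byte streaming. The backend names and the capability set here are hypothetical, not drawn from any real driver:

```python
# Hypothetical sketch: zero-copy (e.g. a CoW clone) is only possible
# when source and destination share a backend that supports it.
COW_CAPABLE = {'rbd', 'vsphere'}

def copy_image(src_backend, dst_backend, stream_bytes):
    if src_backend == dst_backend and src_backend in COW_CAPABLE:
        return ('zero-copy', 0)            # no bytes moved by the caller
    return ('full-copy', stream_bytes())   # generic streaming fallback

assert copy_image('rbd', 'rbd', lambda: 1024) == ('zero-copy', 0)
assert copy_image('rbd', 'file', lambda: 1024) == ('full-copy', 1024)
```

This is the dispatch a library like glance.store would have to hide, which is exactly where the backend-specific context leaks in.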

 And then, the caller of such an nova.image.API.copy() function would be in
 the existing various virt utils and imagebackends, and would call the API
 function like so:

 ```python
 # This code would go in something like nova.virt.libvirt.utils:

 import io

 from nova import image

 IMAGE_API = image.API()

 write_file = io.FileIO(dst_path, mode='wb')
 writer = io.BufferedWriter(write_file)

 image_id_or_uri = "https://images.example.com/images/123"

 result = IMAGE_API.copy(image_id_or_uri, writer)
 # Test result if needed...
 ```

 Notice that the caller never needs to know about the locations collection of
 the image -- and thus we correct the leaked implementation details that
 currently ooze out of the download() method in
 nova.image.glance.GlanceImageService.download.

 Also note that we no longer pass a variety of file descriptors, file
 writers, file destination paths to the download method. Instead, we always
 just pass the image ID or URI and a writeable bytestream iterator. And we
 always return either True or False, instead of None or a file iterator
 depending on the supplied arguments to download().
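The writer contract described above is minimal: anything exposing a write()
method works. A small self-contained sketch (the copy_stream helper is
illustrative, not an actual Nova API):

```python
import io


def copy_stream(chunks, writer):
    """Write an iterable of byte chunks to writer; return (True, bytes written)."""
    total = 0
    for chunk in chunks:
        writer.write(chunk)
        total += len(chunk)
    return True, total


# An in-memory writer standing in for io.BufferedWriter(io.FileIO(dst, 'wb'))
buf = io.BytesIO()
ok, written = copy_stream([b"kernel", b"initrd"], buf)
# ok is True, written == 12, buf.getvalue() == b"kernelinitrd"
```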


  The same kind of logic could be added in Cinder.


 Sure.


 We see that as a benefit for Nova, which would be able to directly

Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Zhi Yan Liu
Frankly, I don't like the idea of using a single configuration file for
all services either. I think it would be better if we could
automatically generate a separate configuration template file for each
Glance service. So besides https://review.openstack.org/#/c/83327/ , I'm
actually working on that idea as well, to allow deployers to generate
separate configuration files on demand; then we could probably move
those templates out of the code repo.

But I like your idea for paste.ini template part.

zhiyan

On Tue, Jun 17, 2014 at 10:29 PM, Kuvaja, Erno kuv...@hp.com wrote:
 I do not like this idea. We are now at 5 different config files (+ policy 
 and schema). One for each service (API and Registry) would still be OK, but 
 putting them all together would just become messy.

 If the *-paste.ini files are migrated into the .conf files, that would bring 
 the count down, but please do not try to mix the registry and API configs together.

 - Erno (jokke) Kuvaja

 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 17 June 2014 15:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Unifying configuration file

 On 17/06/14 15:59 +0200, Julien Danjou wrote:
 Hi guys,
 
 So I've started to look at the configuration file used by Glance and I
 want to switch to one configuration file only.
 I stumbled upon this blueprint:
 
   https://blueprints.launchpad.net/glance/+spec/use-oslo-config
 

 W.r.t. using the config generator: https://review.openstack.org/#/c/83327/
 fits here.
 
 Does not look like I can assign myself to it, but if someone can do so,
 go ahead.
 
 So I've started to work on that, and I got it working. My only problem
  right now concerns the [paste_deploy] section that is provided by
  Glance. I'd like to remove this section altogether, as it's not
  possible to keep it and have the same configuration file read by both
  glance-api and glance-registry.
 My idea is also to unify glance-api-paste.ini and
  glance-registry-paste.ini into glance-paste.ini and then have each
  server read its own default pipeline (e.g. pipeline:glance-api).
 
  Does that sound reasonable to everyone?

 +1, it sounds like a good idea. I don't think we need to maintain 2
 separate config files, especially now that the registry service is optional.
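As a hedged illustration of Julien's proposal, a unified glance-paste.ini
might look roughly like this; the filter and app factory names are
illustrative, not the exact Glance pipelines:

```ini
# Hypothetical unified glance-paste.ini; each server reads its own pipeline.
[pipeline:glance-api]
pipeline = versionnegotiation unauthenticated-context rootapp

[pipeline:glance-registry]
pipeline = unauthenticated-context registryapp

[app:rootapp]
paste.app_factory = glance.api.v2.router:API.factory

[app:registryapp]
paste.app_factory = glance.registry.api:API.factory
```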

 Thanks for working on this.
 Flavio

 --
 @flaper87
 Flavio Percoco
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-05-30 Thread Zhi Yan Liu
Hello Hemanth,

Thanks for your summary, and for raising it on the ML.

All of them are sensible to me; there is just one implementation-level
concern I'd like to bring to folks' notice.

If we follow the 'POST /images/{image_id}/actions/{action_type}'
approach, I think it will be hard to write common code at the wsgi
handling level (as the current Nova approach does [0]) and to keep the
router code clean (e.g. [1]) - we would need to add a different rule to
the mapper for each function/action, case by case. This way is
straightforward, and I agree those three reasons are understandable,
but TBH we probably need to think about it from the implementation
perspective a bit: following this way, we would write more duplicated
code for each function/action in different places/layers, e.g. [2] for
the route layer. As for the rate-limiting requirement, if we go the
'POST /servers/{server_id}/action {"type": "action_type", ...}' way, we
could probably do the limiting at the wsgi/common layer easily as well;
of course, we could design that later based on the selection.

[0] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L1053
[1] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/server_start_stop.py#L43
[2] https://github.com/openstack/glance/blob/master/glance/api/v2/router.py#L81
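The duplication concern above can be sketched in miniature by contrasting
one-route-per-action dispatch with Nova-style body dispatch; the handler
names here are hypothetical:

```python
# Style 1 (Glance proposal): the action type lives in the URL, so the router
# grows one rule per action, case by case.
url_routes = {
    ("POST", "/images/{image_id}/actions/deactivate"): "deactivate_handler",
    ("POST", "/images/{image_id}/actions/reactivate"): "reactivate_handler",
}


# Style 2 (Nova): one route; the action type is in the body and is dispatched
# by a single piece of common wsgi-level code.
def dispatch_action(body, handlers):
    action_type = body["type"]
    return handlers[action_type](body)


handlers = {"deactivate": lambda b: "deactivated",
            "reactivate": lambda b: "reactivated"}
result = dispatch_action({"type": "deactivate"}, handlers)
# result == "deactivated"
```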

zhiyan

On Sat, May 31, 2014 at 2:22 AM, Hemanth Makkapati
hemanth.makkap...@rackspace.com wrote:
 Hello All,
 I'm writing to notify you of the approach the Glance community has decided
 to take for doing functional API.  Also, I'm writing to solicit your
 feedback on this approach in the light of cross-project API consistency.

 At the Atlanta Summit, the Glance team has discussed introducing functional
 API in Glance so as to be able to expose operations/actions that do not
 naturally fit into the CRUD-style. A few approaches are proposed and
 discussed here. We have all converged on the approach to include 'action'
 and action type in the URL. For instance, 'POST
 /images/{image_id}/actions/{action_type}'.

 However, this is different from the way Nova does actions. Nova includes
 action type in the payload. For instance, 'POST /servers/{server_id}/action
 {"type": "action_type", ...}'. At this point, we hit a cross-project API
 consistency issue mentioned here (under the heading 'How to act on resource
 - cloud perform on resources'). Though we are differing from the way Nova
 does actions, and hence adding another source of cross-project API inconsistency,
 we have a few reasons to believe that Glance's way is helpful in certain
 ways.

 The reasons are as following:
 1. Discoverability of operations. It'll be easier to expose permitted
 actions through schemas or a JSON home document living at
 /images/{image_id}/actions/.
 2. More conducive for rate-limiting. It'll be easier to rate-limit actions
 in different ways if the action type is available in the URL.
 3. Makes more sense for functional actions that don't require a request body
 (e.g., image deactivation).
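As a hedged sketch of reason 2: with the action type in the URL, a
rate-limiting middleware can pick a per-action limit from the path alone,
before parsing any body. The limits and helper below are illustrative, not
Glance code:

```python
# Hypothetical per-action limits (requests per minute).
ACTION_LIMITS = {"deactivate": 10, "reactivate": 10, "export": 2}


def limit_for(path):
    """Return the rate limit for an action URL, or None for non-action URLs."""
    # e.g. POST /v2/images/123/actions/export
    parts = path.rstrip("/").split("/")
    if len(parts) >= 2 and parts[-2] == "actions":
        return ACTION_LIMITS.get(parts[-1])
    return None


limit = limit_for("/v2/images/123/actions/export")
# limit == 2
```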

 At this point we are curious to see if the API conventions group believes
 this is a valid and reasonable approach.
 Any feedback is much appreciated. Thank you!

 Regards,
 Hemanth Makkapati

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-24 Thread Zhi Yan Liu
First, after the discussion between Vincent and me, I'm sure that what
we called zero-copying in the mail threads above is different from
full-copying/transferring. Transferring/full-copying means image-bit
duplication; it focuses on using some method to accelerate the
replication and transfer of image bits between Glance/backend storage
and the Nova compute host, e.g. P2P, FTP, HTTP. Zero-copying means NO
bits are duplicated or transferred between those two sides while the
image is being prepared for VM provisioning, so the latter focuses on
a) attaching the remote image volume/disk within the Glance-managed
backend storage directly from the Nova compute host, b) creating the
VM's root disk from the remote template image based on the hypervisor
and the particular storage technology, and, btw, c) avoiding the upload
of image bits in the VM snapshot/capture case. They are totally
different to me. (refer: review comments in
https://review.openstack.org/#/c/84671/ )

Second, on the implementation level, putting all image-handling code
into the nova.image namespace sounds neat, but I think it cannot work
(the leak I mentioned in my last mail). IMO, the
transferring/full-copying logic is a better fit for the nova.image
namespace; such transferring approaches can be implemented on top of
the existing download-module plugin structure, e.g. P2P, FTP. But for
zero-copying, given my points above, I think implementing it in
nova.virt + the hypervisor drivers makes more sense, since it is tied
to the particular hypervisor and/or storage technology. (refer: inline
comments in
https://review.openstack.org/#/c/86583/6/specs/juno/image-multiple-location.rst
)
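A minimal sketch of the split argued above - full-copy via download plugins
in nova.image, zero-copy dispatched to the virt/storage layer - where all
module and function names are hypothetical:

```python
from urllib.parse import urlparse

# Backends a hypervisor could clone/attach from without copying bits.
ZERO_COPY_SCHEMES = {"rbd", "cinder"}


def prepare_root_disk(location_url, virt_driver, downloader):
    """Dispatch to zero-copy (virt/storage layer) or full-copy (nova.image)."""
    scheme = urlparse(location_url).scheme
    if scheme in ZERO_COPY_SCHEMES:
        # e.g. an RBD clone or volume attach + CoW snapshot on the compute host
        return virt_driver(location_url)
    # Generic full-copy path (HTTP/FTP/P2P download plugins).
    return downloader(location_url)


disk = prepare_root_disk("rbd://pool/image-123",
                         virt_driver=lambda u: ("cloned", u),
                         downloader=lambda u: ("downloaded", u))
# disk == ("cloned", "rbd://pool/image-123")
```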

zhiyan

On Wed, Apr 23, 2014 at 11:02 PM, Jay Pipes jaypi...@gmail.com wrote:
 On Wed, 2014-04-23 at 13:56 +0800, lihuiba wrote:
 For live migration, we use shared storage so I don't think it's quite
 the same as getting/putting image bits from/to arbitrary locations.
 With a good zero-copy transfer lib, live migration support could be
 extended to non-shared storage, or cross-datacenter. That would be
 valuable.

 Hmm, I totally see the value of doing this. Not sure that there could be
 the same kinds of liveness guarantees with non-shared-storage, but I
 am certainly happy to see a proof of concept in this area! :)

 task = image_api.copy(from_path_or_uri, to_path_or_uri)
 # do some other work
 copy_task_result = task.wait()
 +1  looks cool!
 how about zero-copying?

 It would be an implementation detail within nova.image.api.copy()
 function (and the aforementioned image bits mover library) :)
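A hedged sketch of that task-style copy(), using a thread pool to stand in
for whatever async machinery Nova would actually use; the ImageAPI class
here is illustrative, not real Nova code:

```python
from concurrent.futures import ThreadPoolExecutor


class ImageAPI:
    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=4)

    def copy(self, from_uri, to_uri):
        # Returns a future-like task; .result() stands in for .wait()
        return self._pool.submit(self._do_copy, from_uri, to_uri)

    def _do_copy(self, from_uri, to_uri):
        # Real code would stream bits or do a zero-copy clone here.
        return ("copied", from_uri, to_uri)


image_api = ImageAPI()
task = image_api.copy("glance://image-1", "file:///var/lib/nova/base/1")
# do some other work
copy_task_result = task.result()
# copy_task_result == ("copied", "glance://image-1", "file:///var/lib/nova/base/1")
```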

  The key here is to leak as little implementation detail out of the
  nova.image.api module as possible.

 Best,
 -jay

 At 2014-04-23 07:21:27,Jay Pipes jaypi...@gmail.com wrote:
 Hi Vincent, Zhi, Huiba, sorry for delayed response. See comments inline.
 
 On Tue, 2014-04-22 at 10:59 +0800, Sheng Bo Hou wrote:
  I actually support the idea Huiba has proposed, and I am thinking of
  how to optimize the large data transfer(for example, 100G in a short
  time) as well.
  I registered two blueprints in nova-specs, one is for an image upload
  plug-in to upload the image to
  glance(https://review.openstack.org/#/c/84671/), the other is a data
  transfer plug-in(https://review.openstack.org/#/c/87207/) for data
  migration among nova nodes. I would like to see other transfer
  protocols, like FTP, bitTorrent, p2p, etc, implemented for data
  transfer in OpenStack besides HTTP.
 
  Data transfer may have many use cases. I summarize them into two
  catalogs. Please feel free to comment on it.
  1. The machines are located in one network, e.g. one domain, one
  cluster, etc. The characteristic is the machines can access each other
  directly via the IP addresses(VPN is beyond consideration). In this
  case, data can be transferred via iSCSI, NFS, and definitive zero-copy
  as Zhiyan mentioned.
  2. The machines are located in different networks, e.g. two data
  centers, two firewalls, etc. The characteristic is the machines can
  not access each other directly via the IP addresses(VPN is beyond
  consideration). The machines are isolated, so they can not be
  connected with iSCSI, NFS, etc. In this case, data have to go via the
  protocols, like HTTP, FTP, p2p, etc. I am not sure whether zero-copy
  can work for this case. Zhiyan, please help me with this doubt.
 
  I guess for data transfer, including image downloading, image
  uploading, live migration, etc, OpenStack needs to taken into account
  the above two catalogs for data transfer.
 
 For live migration, we use shared storage so I don't think it's quite
 the same as getting/putting image bits from/to arbitrary locations.
 
   It is hard to say that one protocol is better than another, and that one
  approach prevails over another (BitTorrent is very cool, but if there is
  only one source and only one target, it would not be much faster than
  a direct FTP). The key is the use
  

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-18 Thread Zhi Yan Liu
On Fri, Apr 18, 2014 at 10:52 PM, lihuiba magazine.lihu...@163.com wrote:
btw, I see but at the moment we had fixed it by network interface
device driver instead of workaround - to limit network traffic slow
down.
 Which kind of driver, in host kernel, in guest kernel or in openstack?


In the compute host kernel; it's not related to OpenStack.



There is some work done in Glance
(https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver ),
but I'm sure some work still needs to be done. Some things are being
drafted, and some dependencies need to be resolved as well.
 I read the blueprints carefully, but still have some doubts.
 Will it store an image as a single volume in cinder? Or store all image
 files in one shared volume (with a file system on the volume, of course)?

Yes.

 Openstack already has support to convert an image to a volume, and to boot
 from a volume. Are these features similar to this blueprint?

Not similar, but they could be leveraged for this case.



I'd prefer to discuss these details on IRC. (And I read through all the
VMThunder code earlier today (my timezone); I have some questions as
well.)

zhiyan


 Huiba Li

 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-18 12:14:25,Zhi Yan Liu lzy@gmail.com wrote:
On Fri, Apr 18, 2014 at 10:53 AM, lihuiba magazine.lihu...@163.com wrote:
It's not 100% true, in my case at last. We fixed this problem by
network interface driver, it causes kernel panic and readonly issues
under heavy networking workload actually.

 Network traffic control could help. The point is to ensure no instance
 is starved to death. Traffic control can be done with tc.
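As a hedged illustration of the tc suggestion above: cap a NIC's egress with
a token-bucket qdisc so no single transfer starves other instances. The
device name, rate, and units are assumptions, and the helper only builds the
command unless asked to run it:

```python
import subprocess


def tbf_limit_cmd(dev="eth0", rate="500mbit", burst="32kbit",
                  latency="400ms", run=False):
    """Build (and optionally run) a tc token-bucket-filter rate limit."""
    cmd = ["tc", "qdisc", "add", "dev", dev, "root", "tbf",
           "rate", rate, "burst", burst, "latency", latency]
    if run:
        subprocess.check_call(cmd)  # requires root privileges
    return cmd


cmd = tbf_limit_cmd()
# cmd starts with ["tc", "qdisc", "add", ...]
```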


btw, I see but at the moment we had fixed it by network interface
device driver instead of workaround - to limit network traffic slow
down.



btw, we are doing some work to make Glance integrate Cinder as a
unified block storage backend.

 That sounds interesting. Is there some more material?


There are few works done in Glance
(https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver ),
but some work still need to be taken I'm sure. There are something on
drafting, and some dependencies need to be resolved as well.



 At 2014-04-18 06:05:23,Zhi Yan Liu lzy@gmail.com wrote:
Replied as inline comments.

On Thu, Apr 17, 2014 at 9:33 PM, lihuiba magazine.lihu...@163.com
 wrote:
IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.

 Yes, in this situation, the problem lies in the backend storage, so no
 other

 protocol will perform better. However, P2P transferring will greatly
 reduce

 workload on the backend storage, so as to increase responsiveness.


It's not 100% true, in my case at last. We fixed this problem by
network interface driver, it causes kernel panic and readonly issues
under heavy networking workload actually.



As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

 Nova's image caching is file level, while VMThunder's is block-level.
 And

 VMThunder is for working in conjunction with Cinder, not Glance.
 VMThunder

 currently uses facebook's flashcache to realize caching, and dm-cache,

 bcache are also options in the future.


Hmm, if you mention bcache, dm-cache and flashcache, I'm just wondering
whether they could be leveraged at the operations/best-practice level.

btw, we are doing some works to make Glance to integrate Cinder as a
unified block storage backend.


I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

 Yes, on-demand transferring is what you mean by zero-copy, and
 caching
 is something close to CoR. In fact, we are working on a kernel module
 called
 foolcache that realizes a true CoR. See
 https://github.com/lihuiba/dm-foolcache.


Yup. And it's really interesting to me, will take a look, thanks for
 sharing.




 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com
 wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your
 zero-copy

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your zero-copy
 approach.
 VMThunder uses iSCSI as the transferring protocol, which is option #b of
 yours.


IMO we'd better use a backend-storage-optimized approach to access
remote images from the compute node instead of using iSCSI only. And
from my experience, I'm sure iSCSI falls short on stability under heavy
I/O workload in production environments; it can cause either the VM
filesystem to be marked readonly or a VM kernel panic.


Under the #b approach, my former experience from our previous similar
cloud deployment (not OpenStack) was: with 2 PC-server storage nodes
(general *local SAS disks*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we could provision 500
VMs in a minute.
 suppose booting one instance requires reading 300MB of data, so 500 ones
 require 150GB.  Each of the storage server needs to send data at a rate of
 150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even
 for high-end storage appliances. In production  systems, this request
 (booting
 500 VMs in one shot) will significantly disturb  other running instances
 accessing the same storage nodes.

 VMThunder eliminates this problem by P2P transferring and on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true pc server!) can
 boot
 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
 provisioning of VMs practical for production cloud systems. This is the
 essential
 value of VMThunder.


As I said, Nova already has an image caching mechanism, so in this
case P2P is just an approach that could be used for downloading or
preheating images for the cache.

I think P2P transferring/pre-caching sounds like a good way to go, as I
mentioned as well, but for this area I'd actually like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
downloading image bits on demand via the zero-copy approach; on the
other hand we can avoid reading data from the remote image every time
via CoR.

zhiyan




 ===
 From: Zhi Yan Liu lzy@gmail.com
 Date: 2014-04-17 0:02 GMT+08:00
 Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting
 process of a number of vms via VMThunder
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org



 Hello Yongquan Fu,

 My thoughts:

 1. Currently Nova already supports an image caching mechanism. It
 can cache the image on the compute host that a VM was provisioned from
 before, so the next provisioning (booting the same image) doesn't need
 to transfer it again unless the cache manager cleans it up.
 2. P2P transferring and prefetching are still based on a copy
 mechanism. IMHO, a zero-copy approach is better; even
 transferring/prefetching could be optimized by such an approach. (I
 have not checked VMThunder's on-demand transferring, but it is a kind
 of transferring as well, at least from its literal meaning.)
 And btw, IMO, there are two ways we can go to follow the zero-copy idea:
 a. When Nova and Glance use the same backend storage, we could use the
 storage's special CoW/snapshot approach to prepare the VM disk instead
 of copying/transferring image bits (through HTTP/network or local copy).
 b. Without unified storage, we could attach a volume/LUN to the compute
 node from the backend storage as a base image, then do such CoW/snapshot
 on it to prepare the root/ephemeral disk of the VM. This way is just
 like boot-from-volume, but the difference is that we do the CoW/snapshot
 on the Nova side instead of the Cinder/storage side.

 For option #a, we have already got some progress:
 https://blueprints.launchpad.net/nova/+spec/image-multiple-location
 https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler
 https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler

 Under #b approach, my former experience from our previous similar
 Cloud deployment (not OpenStack) was that: under 2 PC server storage
 nodes (general *local SAS disk*, without any storage backend) +
 2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
 VMs in a minute.

 For the vmThunder topic, I think it sounds like a good idea. IMO, P2P
 transferring/prefetching is one valuable optimization approach for
 image transferring.

 zhiyan

 On Wed, Apr 16, 2014 at 9:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the vm-booting functionality of
 Nova when a number of homogeneous vms need to be launched at the same
 time.



 The motivation for our work is to increase the speed of provisioning vms
 for
 large-scale scientific computing and big data processing. In that case, we
 often need to boot tens and hundreds virtual machine instances at the same
 time.


 Currently, under the Openstack, we found that creating a large number
 of
 virtual machine instances is very time

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
Replied as inline comments.

On Thu, Apr 17, 2014 at 9:33 PM, lihuiba magazine.lihu...@163.com wrote:
IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.

 Yes, in this situation, the problem lies in the backend storage, so no other

 protocol will perform better. However, P2P transferring will greatly reduce

 workload on the backend storage, so as to increase responsiveness.


It's not 100% true, in my case at last. We fixed this problem by
network interface driver, it causes kernel panic and readonly issues
under heavy networking workload actually.



As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

 Nova's image caching is file level, while VMThunder's is block-level. And

 VMThunder is for working in conjunction with Cinder, not Glance. VMThunder

 currently uses facebook's flashcache to realize caching, and dm-cache,

 bcache are also options in the future.


Hmm, if you mention bcache, dm-cache and flashcache, I'm just wondering
whether they could be leveraged at the operations/best-practice level.

btw, we are doing some works to make Glance to integrate Cinder as a
unified block storage backend.


I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

 Yes, on-demand transferring is what you mean by zero-copy, and caching
 is something close to CoR. In fact, we are working on a kernel module called
 foolcache that realize a true CoR. See
 https://github.com/lihuiba/dm-foolcache.


Yup. And it's really interesting to me, will take a look, thanks for sharing.




 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your zero-copy
 approach.
 VMThunder is uses iSCSI as the transferring protocol, which is option #b
 of
 yours.


IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.


Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.
 suppose booting one instance requires reading 300MB of data, so 500 ones
 require 150GB.  Each of the storage server needs to send data at a rate
 of
 150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even
 for high-end storage appliances. In production  systems, this request
 (booting
 500 VMs in one shot) will significantly disturb  other running instances
 accessing the same storage nodes.


btw, I believe the case/numbers are not accurate either, since the
remote image bits could be loaded on demand instead of being loaded
entirely at boot stage.

zhiyan

 VMThunder eliminates this problem by P2P transferring and on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true pc server!)
 can
 boot
 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
 provisioning of VMs practical for production cloud systems. This is the
 essential
 value of VMThunder.


As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

zhiyan




 ===
 From: Zhi Yan Liu lzy@gmail.com
 Date: 2014-04-17 0:02 GMT+08:00
 Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting
 process of a number of vms via VMThunder
 To: OpenStack Development

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
On Fri, Apr 18, 2014 at 5:19 AM, Michael Still mi...@stillhq.com wrote:
 If you'd like to have a go at implementing this in nova's Juno
 release, then you need to create a new-style blueprint in the
 nova-specs repository. You can find more details about that process at
 https://wiki.openstack.org/wiki/Blueprints#Nova

 Some initial thoughts though, some of which have already been brought up:

  - _some_ libvirt drivers already have image caching. I am unsure if
 all of them do, I'd have to check.


Thanks for the clarification.

  - we already have blueprints for better support of glance multiple
 image locations, it might be better to extend that work than to do
 something completely separate.


Totally agreed. And I think there currently seem to be two places (at
least) that could be leveraged:

1. Making this an image download plug-in for Nova, either built-in
or independent. I prefer to go this way, but we need to make sure its
context is enough for your case.
2. Making this a built-in or independent image handler plug-in, as
part of the support for multiple-image-locations (ongoing) that Michael
mentions here.

zhiyan

  - the xen driver already does bittorrent image delivery IIRC, you
 could take a look at how they do that.

  - pre-caching images has been proposed for libvirt for a long time,
 but never implemented. I think that's definitely something of interest
 to deployers.

 Cheers,
 Michael

 On Wed, Apr 16, 2014 at 11:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the vm-booting functionality of
 Nova when a number of homogeneous vms need to be launched at the same time.



 The motivation for our work is to increase the speed of provisioning vms for
 large-scale scientific computing and big data processing. In that case, we
 often need to boot tens and hundreds virtual machine instances at the same
 time.


 Currently, under OpenStack, we found that creating a large number of
 virtual machine instances is very time-consuming. The reason is that the
 booting procedure is a centralized operation that involves performance
 bottlenecks. Before a virtual machine can actually be started, OpenStack
 either copies the image file (swift) or attaches the image volume (cinder)
 from the storage server to the compute node via the network. Booting a
 single VM needs to read a large amount of image data from the image storage
 server, so creating a large number of virtual machine instances causes a
 significant workload on the servers. The servers become quite busy, even
 unavailable, during the deployment phase, and it takes a very long time
 before the whole virtual machine cluster is usable.



   Our extension is based on our work on VMThunder, a novel mechanism for
 accelerating the deployment of large numbers of virtual machine instances.
 It is written in Python and can be integrated with OpenStack easily.
 VMThunder addresses the problem described above with the following
 improvements: on-demand transferring (network attached storage), compute
 node caching, P2P transferring, and prefetching. VMThunder is a scalable
 and cost-effective accelerator for bulk provisioning of virtual machines.



   We hope to receive your feedback. Any comments are extremely welcome.
 Thanks in advance.



 PS:



 VMThunder enhanced nova blueprint:
 https://blueprints.launchpad.net/nova/+spec/thunderboost

 VMThunder standalone project: https://launchpad.net/vmthunder

  VMThunder prototype: https://github.com/lihuiba/VMThunder

  VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder

  VMThunder portal: http://www.vmthunder.org/

 VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf



   Regards



   vmThunder development group

   PDL

   National University of Defense Technology


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
On Fri, Apr 18, 2014 at 10:53 AM, lihuiba magazine.lihu...@163.com wrote:
That's not 100% true, at least in my case. We fixed this problem in the
network interface driver; it was actually the driver that caused kernel
panics and read-only issues under heavy networking workload.

 Network traffic control could help. The point is to ensure no instance
 is starved to death. Traffic control can be done with tc.


btw, I see, but in our case we fixed it in the network interface
device driver itself rather than working around it by throttling
network traffic.



btw, we are doing some work to make Glance integrate Cinder as a
unified block storage
 backend.
 That sounds interesting. Is there any more material?


Some work has been done in Glance
(https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver ),
but I'm sure more still needs to be done. Some of it is in draft form,
and some dependencies need to be resolved as well.



 At 2014-04-18 06:05:23,Zhi Yan Liu lzy@gmail.com wrote:
Replied as inline comments.

On Thu, Apr 17, 2014 at 9:33 PM, lihuiba magazine.lihu...@163.com wrote:
IMO we'd better use a backend-storage-optimized approach to access
remote images from compute nodes instead of using iSCSI only. And from
my experience, iSCSI lacks stability under heavy I/O workloads in
production environments; it can cause either the VM filesystem to be
marked read-only or a VM kernel panic.

 Yes, in this situation, the problem lies in the backend storage, so no
 other protocol will perform better. However, P2P transferring will
 greatly reduce the workload on the backend storage, so as to increase
 responsiveness.


That's not 100% true, at least in my case. We fixed this problem in the
network interface driver; it was actually the driver that caused kernel
panics and read-only issues under heavy networking workload.



As I said, Nova already has an image caching mechanism, so in this
case P2P is just an approach that could be used for downloading or
preheating the image cache.

 Nova's image caching is file-level, while VMThunder's is block-level.
 And VMThunder is designed to work in conjunction with Cinder, not
 Glance. VMThunder currently uses Facebook's flashcache to realize
 caching, and dm-cache and bcache are also options in the future.


Hm, regarding bcache, dm-cache and flashcache, I'm wondering whether
they could be leveraged at the operations/best-practice level.

btw, we are doing some work to make Glance integrate Cinder as a
unified block storage backend.


I think P2P transferring/pre-caching sounds like a good way to go, as
I mentioned as well, but in this area I'd actually like to see
something like zero-copy + CoR. On the one hand, we can leverage the
capability of downloading image bits on demand via the zero-copy
approach; on the other hand, CoR keeps us from reading data from the
remote image every time.

 Yes, on-demand transferring is what you mean by zero-copy, and caching
 is something close to CoR. In fact, we are working on a kernel module
 called
 foolcache that realizes a true CoR. See
 https://github.com/lihuiba/dm-foolcache.


Yup. It's really interesting to me; I will take a look. Thanks for
 sharing.




 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com
 wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your
 zero-copy approach. VMThunder uses iSCSI as the transfer protocol,
 which is your option #b.


IMO we'd better use a backend-storage-optimized approach to access
remote images from compute nodes instead of using iSCSI only. And from
my experience, iSCSI lacks stability under heavy I/O workloads in
production environments; it can cause either the VM filesystem to be
marked read-only or a VM kernel panic.


With approach #b, my experience from a previous similar cloud
deployment (not OpenStack) was that with 2 PC-server storage nodes
(ordinary *local SAS disks*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we could provision 500
VMs in a minute.
 Suppose booting one instance requires reading 300 MB of data; 500
 instances then require 150 GB. Each storage server needs to send data
 at an average rate of 150 GB / 2 / 60 s = 1.25 GB/s. This is
 absolutely a heavy burden even for high-end storage appliances. In
 production systems, this request (booting 500 VMs in one shot) will
 significantly disturb other running instances accessing the same
 storage nodes.
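As a sanity check, the arithmetic above can be reproduced in a few lines, using the round figures from the post:

```python
# Back-of-envelope estimate of the per-server throughput needed to boot
# 500 VMs in one minute from 2 storage nodes (figures from the post).
total_data_gb = 150            # ~300 MB read per instance x 500 instances
storage_nodes = 2
boot_window_s = 60             # target: all VMs booted within one minute

per_server_gb_per_s = total_data_gb / storage_nodes / boot_window_s
print(per_server_gb_per_s)     # 1.25 GB/s on average per storage node
```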


btw, I believe the case/numbers don't hold either, since remote
image bits could be loaded on demand instead of all being loaded at
boot.

zhiyan

 VMThunder eliminates this problem by P2P transferring and
 on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-16 Thread Zhi Yan Liu
Hello Yongquan Fu,

My thoughts:

1. Nova already supports an image caching mechanism. It caches the
image on the compute host a VM was provisioned from, so the next
provisioning (booting the same image) doesn't need to transfer it
again, unless the cache manager has cleared it.
2. P2P transferring and prefetching are still based on a copy
mechanism; IMHO, a zero-copy approach is better, and even
transferring/prefetching could be optimized by such an approach. (I
have not checked VMThunder's on-demand transferring, but it is a kind
of transferring as well, at least going by its literal meaning.)
And btw, IMO, we have two ways to follow the zero-copy idea:
a. When Nova and Glance use the same backend storage, we can use the
storage's native CoW/snapshot capability to prepare the VM disk
instead of copying/transferring image bits (over HTTP/network or via
local copy).
b. Without unified storage, we can attach a volume/LUN from the
backend storage to the compute node as a base image, then do the
CoW/snapshot on it to prepare the VM's root/ephemeral disk. This is
just like boot-from-volume, except that we do the CoW/snapshot on the
Nova side instead of the Cinder/storage side.
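As a rough illustration of option (a), a CoW overlay can be prepared against a base image on shared storage without copying any image bits. The helper below only builds the `qemu-img` command line; the paths are hypothetical and this is not Nova's actual implementation:

```python
# Sketch of option (a): prepare a VM root disk as a qcow2 CoW overlay on
# top of a base image that lives on shared storage, instead of copying
# image bits. Paths are illustrative only.
def cow_overlay_cmd(base_image, vm_disk):
    """Build the qemu-img command that creates a qcow2 overlay whose
    backing file is the shared base image (no image data is copied)."""
    return [
        "qemu-img", "create",
        "-f", "qcow2",            # format of the new overlay disk
        "-b", base_image,         # backing file: the shared base image
        vm_disk,
    ]

cmd = cow_overlay_cmd("/shared/images/base.img",
                      "/var/lib/nova/instances/vm1/disk.qcow2")
```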

For option #a, we have already got some progress:
https://blueprints.launchpad.net/nova/+spec/image-multiple-location
https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler
https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler

With approach #b, my experience from a previous similar cloud
deployment (not OpenStack) was that with 2 PC-server storage nodes
(ordinary *local SAS disks*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we could provision 500
VMs in a minute.

As for the vmThunder topic, I think it sounds like a good idea; IMO
P2P prefetching is a valuable optimization for image transferring.

zhiyan

On Wed, Apr 16, 2014 at 9:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the VM-booting functionality of
 Nova for when a number of homogeneous VMs need to be launched at the same time.



 The motivation for our work is to increase the speed of provisioning VMs for
 large-scale scientific computing and big data processing. In those cases, we
 often need to boot tens or hundreds of virtual machine instances at the same
 time.


 Currently, under OpenStack, we found that creating a large number of
 virtual machine instances is very time-consuming. The reason is that the
 booting procedure is a centralized operation that involves performance
 bottlenecks. Before a virtual machine can actually be started, OpenStack
 either copies the image file (Swift) or attaches the image volume (Cinder)
 from the storage server to the compute node over the network. Booting a
 single VM needs to read a large amount of image data from the image storage
 server, so creating a large number of virtual machine instances causes a
 significant workload on the servers. The servers become quite busy, even
 unavailable, during the deployment phase, and it can take a very long time
 before the whole virtual machine cluster is usable.



   Our extension is based on our work on vmThunder, a novel mechanism for
 accelerating the deployment of a large number of virtual machine instances.
 It is written in Python and can be integrated with OpenStack easily.
 VMThunder addresses the problem described above through the following
 improvements: on-demand transferring (network attached storage),
 compute-node caching, P2P transferring, and prefetching. VMThunder is a
 scalable and cost-effective accelerator for bulk provisioning of virtual
 machines.



   We hope to receive your feedback; any comments are extremely welcome.
 Thanks in advance.



 PS:



 VMThunder enhanced nova blueprint:
 https://blueprints.launchpad.net/nova/+spec/thunderboost

  VMThunder standalone project: https://launchpad.net/vmthunder

  VMThunder prototype: https://github.com/lihuiba/VMThunder

  VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder

  VMThunder portal: http://www.vmthunder.org/

 VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf



   Regards



   vmThunder development group

   PDL

   National University of Defense Technology




Re: [openstack-dev] [glance] Switching from sql_connection to [database] connection ?

2014-03-30 Thread Zhi Yan Liu
Hi

We have no plan to update the sample config template for this in the I
release, but https://review.openstack.org/#/c/77379/ is under review, FYI.

zhiyan

Sent from my iPad

 On 30 Mar 2014, at 13:04, Tom Fifield t...@openstack.org wrote:
 
 On 27/02/14 18:47, Flavio Percoco wrote:
 On 27/02/14 12:12 +0800, Tom Fifield wrote:
 Hi,
 
 As best I can tell, all other services now use this syntax for
 configuring database connections:
 
 [database]
 connection = sqlite:///etc,omg
 
 
 whereas glance appears to still use
 
 [DEFAULT]
 ...
 sql_connection = sqlite:///etc,omg
 
 
 Is there a plan to switch to the former during Icehouse development?
 
 From a user standpoint it'd be great to finally have consistency
 amoungst all the services :)
 
 It already did. It looks like the config sample needs to be updated.
 
 To be more precise, `sql_connection` is marked as deprecated.[0]
 
 [0]
 https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/session.py#L329
 
 Just noting that the sample config has still not been updated.
 
 
 Regards,
 
 
 Tom
 
 


Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-18 Thread Zhi Yan Liu
Hi Doug,

On Wed, Mar 19, 2014 at 6:08 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:



 On Mon, Mar 10, 2014 at 4:02 PM, Ben Nemec openst...@nemebean.com wrote:

 On 2014-03-10 12:24, Chris Friesen wrote:

 Hi,

 I'm using havana and recent we ran into an issue with heat related to
 character sets.

 In heat/db/sqlalchemy/api.py in user_creds_get() we call
 _decrypt() on an encrypted password stored in the database and then
 try to convert the result to unicode.  Today we hit a case where this
 errored out with the following message:

 UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0:
 invalid continuation byte

 We're using postgres and currently all the databases are using
 SQL_ASCII as the charset.

 I see that in icehouse heat will complain if you're using mysql and
 not using UTF-8.  There doesn't seem to be any checks for other
 databases though.

 It looks like devstack creates most databases as UTF-8 but uses latin1
 for nova/nova_bm/nova_cell.  I assume this is because nova expects to
 migrate the db to UTF-8 later.  Given that those migrations specify a
 character set only for mysql, when using postgres should we explicitly
 default to UTF-8 for everything?

 Thanks,
 Chris


 We just had a discussion about this in #openstack-oslo too.  See the
 discussion starting at 2014-03-10T16:32:26
 http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log

 While it seems Heat does require utf8 (or at least matching character
 sets) across all tables, I'm not sure the current solution is good.  It
 seems like we may want a migration to help with this for anyone who might
 already have mismatched tables.  There's a lot of overlap between that
 discussion and how to handle Postgres with this, I think.

 I don't have a definite answer for any of this yet but I think it is
 something we need to figure out, so hopefully we can get some input from
 people who know more about the encoding requirements of the Heat and other
 projects' databases.

 -Ben




 Based on the discussion from the project meeting today [1], the Glance team
 is going to write a migration to fix the database as the other projects have
 (we have not seen issues with corrupted data, so we believe this to be
 safe). However, there is one snag. In a follow-up conversation with Ben in
 #openstack-oslo, he pointed out that no migrations will run until the
 encoding is correct, so we do need to make some changes to the db code in
 oslo.


This is exactly right, and that's why I proposed
https://review.openstack.org/#/c/75356/.

 Here's what I think we need to do:

 1. In oslo, db_sync() needs a boolean to control whether
 _db_schema_sanity_check() is called. This is an all-or-nothing flag (not the
 for some tables implementation that was proposed).


I'd like to use https://review.openstack.org/#/c/75356/ to handle this.
Doug, it would be great if you could remove your -2 from it, thanks.

 2. Glance needs a migration to change the encoding of their tables.


I'm going to use https://review.openstack.org/#/c/75898/ to cover this.

 3. In glance-manage, the code that calls upgrade migrations needs to look at
 the current state and figure out if the requested state is before or after
 the migration created in step 2. If it is before, it passes False to disable
 the sanity check. If it is after, it passes True to enforce the sanity
 check.


I will use https://review.openstack.org/#/c/75865/ to handle this.
Also, what do you think about exposing the sanity-check-skipping flag
to the Glance deployer instead of handling it internally in db_sync?
I think that would be more flexible in helping the deployer reach the
correct final DB migration target.
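For step 2, the table-encoding migration might look roughly like the sketch below. The table names and the statement-building helper are illustrative assumptions, not Glance's actual migration code:

```python
# Hypothetical sketch of a migration that converts MySQL tables to UTF-8.
# Table names are illustrative; Glance's real migration may differ.
TABLES = ("images", "image_properties", "image_members", "image_tags")

def utf8_conversion_statements(tables=TABLES):
    """Return the ALTER TABLE statements needed to convert each table."""
    return [
        "ALTER TABLE %s CONVERT TO CHARACTER SET utf8 "
        "COLLATE utf8_general_ci" % t
        for t in tables
    ]

def upgrade(migrate_engine):
    # Only MySQL needs the conversion in this sketch; SQLite and
    # Postgres handle UTF-8 natively or via a different mechanism.
    if migrate_engine.name != "mysql":
        return
    for stmt in utf8_conversion_statements():
        migrate_engine.execute(stmt)
```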

thanks,
zhiyan

 Ben, did I miss any details?

 Doug

 [1]
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-03-18-21.03.log.txt




Re: [openstack-dev] [Glance] Nominating Arnaud Legendre for Glance Core

2014-03-13 Thread Zhi Yan Liu
+1. Nice work Arnaud!

On Thu, Mar 13, 2014 at 5:09 PM, Flavio Percoco fla...@redhat.com wrote:
 On 12/03/14 19:19 -0700, Mark Washenberger wrote:

 Hi folks,

 I'd like to nominate Arnaud Legendre to join Glance Core. Over the past
 cycle
 his reviews have been consistently high quality and I feel confident in
 his
 ability to assess the design of new features and the overall direction for
 Glance.

 If anyone has any concerns, please share them with me. If I don't hear
 any,
 I'll make the membership change official in about a week.

 Thanks for your consideration. And thanks for all your hard work, Arnaud!


 +1

 Thanks Arnaud.


 markwash





 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Zhi Yan Liu
Jay, thanks for your correct analysis and quick fix.

zhiyan

On Wed, Mar 12, 2014 at 4:11 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Tue, 2014-03-11 at 14:18 -0500, Matt Riedemann wrote:

 On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:
  On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague s...@dague.net wrote:
  On 03/07/2014 11:16 AM, Russell Bryant wrote:
  On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
  On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
  I'd Like to request A FFE for the remaining patches in the Ephemeral
  RBD image support chain
 
  https://review.openstack.org/#/c/59148/
  https://review.openstack.org/#/c/59149/
 
  are still open after their dependency
  https://review.openstack.org/#/c/33409/ was merged.
 
  These should be low risk as:
  1. We have been testing with this code in place.
  2. It's nearly all contained within the RBD driver.
 
  This is needed as it implements an essential functionality that has
  been missing in the RBD driver and this will become the second release
  it's been attempted to be merged into.
 
  Add me as a sponsor.
 
  OK, great.  That's two.
 
  We have a hard deadline of Tuesday to get these FFEs merged (regardless
  of gate status).
 
 
  As alt release manager, FFE approved based on Russell's approval.
 
  The merge deadline for Tuesday is the release meeting, not end of day.
  If it's not merged by the release meeting, it's dead, no exceptions.
 
  Both commits were merged, thanks a lot to everyone who helped land
  this in Icehouse! Especially to Russel and Sean for approving the FFE,
  and to Daniel, Michael, and Vish for reviewing the patches!
 

 There was a bug reported today [1] that looks like a regression in this
 new code, so we need people involved in this looking at it as soon as
 possible because we have a proposed revert in case we need to yank it
 out [2].

 [1] https://bugs.launchpad.net/nova/+bug/1291014
 [2]
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z

 Note that I have identified the source of the problem and am pushing a
 patch shortly with unit tests.

 Best,
 -jay




Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Zhi Yan Liu
+1, given the low risk and the usefulness for real cloud deployments.

zhiyan

On Thu, Mar 6, 2014 at 4:20 PM, Andrew Woodward xar...@gmail.com wrote:
 I'd Like to request A FFE for the remaining patches in the Ephemeral
 RBD image support chain

 https://review.openstack.org/#/c/59148/
 https://review.openstack.org/#/c/59149/

 are still open after their dependency
 https://review.openstack.org/#/c/33409/ was merged.

 These should be low risk as:
 1. We have been testing with this code in place.
 2. It's nearly all contained within the RBD driver.

 This is needed as it implements an essential functionality that has
 been missing in the RBD driver and this will become the second release
 it's been attempted to be merged into.

 Andrew
 Mirantis
 Ceph Community



Re: [openstack-dev] [Cinder] Status of multi-attach-volume work

2014-03-05 Thread Zhi Yan Liu
Hi,

We decided the multi-attach feature must be implemented as an extension to
core functionality in Cinder, but currently Cinder has no clear
extension-support mechanism; IMO that's the biggest blocker now. The
other issues have been listed at
https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume#Comments_and_Discussion
as well. Hopefully we can get more input from Cinder cores.

thanks,
zhiyan

On Wed, Mar 5, 2014 at 8:19 PM, Niklas Widell
niklas.wid...@ericsson.com wrote:
 Hi
 What is the current status of the work on multi-attach-volume [1]? We
 have some cluster-related use cases that would benefit from being able to
 attach a volume to several instances.

 [1] https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

 Best regards
 Niklas Widell
 Ericsson AB



Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Zhi Yan Liu
I have a related BP:
https://blueprints.launchpad.net/glance/+spec/image-location-selection-strategy

IMO, as I mentioned in its description, it can be applied on both the
Glance side and the consumer (Glance client) side: within Glance it can
be used for image-download handling and direct_url selection logic; on
the consumer side, e.g. Nova, it can be used to select an efficient
image store for a particular compute node, and it actually allows
customers/ISVs to implement their own strategy plugins.

And in the near term, as flaper87 mentioned above, I believe we will
separate the glance store code *and* the image-location-selection-strategy
stuff into an independent package under the Glance project. At that
point we can change Nova to leverage it, and admins/operators can
configure the selection strategy via options in Nova rather than Glance
(I agree with Jay on this point).
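To make the idea concrete, a strategy plugin could be as simple as ranking an image's locations by URL scheme. The interface below is a hypothetical sketch, not the blueprint's actual API:

```python
# Hypothetical sketch of a location-selection strategy: prefer locations
# whose URL scheme indicates cheaper access from this compute node.
def pick_location(locations, preferred_schemes=("file", "rbd", "http")):
    """Return the location whose scheme ranks highest in
    preferred_schemes; unknown schemes sort last."""
    def rank(loc):
        scheme = loc["url"].split("://", 1)[0]
        if scheme in preferred_schemes:
            return preferred_schemes.index(scheme)
        return len(preferred_schemes)
    return min(locations, key=rank)

locations = [
    {"url": "http://glance.example.com/images/1", "metadata": {}},
    {"url": "rbd://pool/image-1", "metadata": {}},
]
best = pick_location(locations)   # the rbd location wins here
```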

zhiyan

On Tue, Feb 4, 2014 at 12:04 AM, Flavio Percoco fla...@redhat.com wrote:
 On 03/02/14 10:13 -0500, Jay Pipes wrote:

 On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:

 IMHO, the bit that should really be optimized is the selection of the
 store nodes where the image should be downloaded from. That is,
 selecting the nearest location from the image locations and this is
 something that perhaps should happen in glance-api, not nova.


 I disagree. The reason is because glance-api does not know where nova
 is. Nova does.


 Nova doesn't know where glance is either. More info is required in
 order to finally do something smart here. Not sure what the best
 approach is just yet but as mentioned in my previous email I think
 focusing on the stores for now is the thing to do. (As you pointed out
 below too).



 I continue to think that the best performance gains will come from
 getting rid of glance-api entirely, putting the block-streaming bits
 into a separate Python library, and having Nova and Cinder pull
 image/volume bits directly from backend storage instead of going through
 the glance middleman.



 This is exactly what we're doing by pulling glance.store into its own
 library. I'm working on this myself. We are not completely getting rid
 of glance-api but we're working on not depending on it to get the
 image data.

 Cheers,
 flaper



 Best,
 -jay




 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [glance]A question abount the x-image-meta-property parameter.

2014-01-26 Thread Zhi Yan Liu
One quick question: do you think there is a potential
backward-compatibility issue for end users or ISVs if we remove the
existing standardizing/normalizing logic? I'd like to know the
gains/advantages of this fix.

thanks,
zhiyan

Sent from my iPad

 On 26 Jan 2014, at 23:49, Fei Long Wang flw...@cn.ibm.com wrote:
 
 Hey Jay, thanks for calling my name correctly :) 
 
 Wang Hong, feel free to open a bug to track this. And we can get more 
 info/comments when the patch is reviewed. Thanks.
 
 
 Thanks  Best regards,
 Fei Long Wang (王飞龙)
 -
 Tech Lead of Nitrogen (SME team)
 Cloud Solutions and OpenStack Development
 Tel: 8610-82450513 | T/L: 905-0513 
 Email: flw...@cn.ibm.com
 China Systems  Technology Laboratory in Beijing
 -
 
 
 
 From: Jay Pipes jaypi...@gmail.com
 To:   Fei Long Wang/China/IBM@IBMCN, 
 Cc:   OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 01/26/2014 11:20 PM
 Subject:  Re: [openstack-dev] [glance]A question abount the 
 x-image-meta-property parameter.
 
 
 
 On Sun, 2014-01-26 at 18:48 +0800, Fei Long Wang wrote:
  Hi Wang Hong,
  
  Good catch. I think the issue is caused by line 244-246, see
  https://github.com/openstack/glance/blob/master/glance/common/utils.py#L244 
  For the case-matter issue, I think it's a bug. But as for the - to _, I 
  would like to listen Jay's opinion since who is the original author. And 
  obviously, it's intentional change.
 
 Hi Wang Hong and Fei Long,
 
 It's been a long time since I wrote that :) To be honest, I'm not sure
 why we did that, other than just standardizing/normalizing the input.
 Perhaps it had something to do with vendor-specific properties that had
 a prefix that used hyphens, but I'm really not sure... perhaps I am
 getting too old :)
 
 Best,
 -jay
 
  
  From: 王宏 w.wangho...@gmail.com
  To: openstack-dev@lists.openstack.org, 
  Date: 01/26/2014 05:14 PM
  Subject: [openstack-dev] [glance]A question abount the
  x-image-meta-property parameter.
  
  
  
  __
  
  
  
  Hi all.
  
  If I use the following command to create an image: 
  curl -i -H X-Auth-Token:268c536db05b435bb6e631158744e3f6 -H
  x-image-meta-property-IMAGE-TYPE:xxx -H x-image-meta-name:test -X
  POST http://127.0.0.1:9292/v1/images
  
  I will get the following results:
   {"image": {"status": "queued", "deleted": false, "container_format":
   null, "min_ram": 0, "updated_at": "2014-01-26T08:51:54", "owner":
   "9a38c1cda5344dd288331b988739c834", "min_disk": 0, "is_public": false,
   "deleted_at": null, "id": "696ab97d-0e6f-46f1-8570-b6db707a748b",
   "size": 0, "name": "test", "checksum": null, "created_at":
   "2014-01-26T08:51:54", "disk_format": null, "properties":
   {"image_type": "xxx"}, "protected": false}}
  
   The capital letters in the property name are converted to lowercase
   letters, and "-" is converted to "_" (IMAGE-TYPE => image_type).
  
  Is it a bug? Thanks.
  
  Best regards.
   wanghong


Re: [openstack-dev] [Glance] Is the 'killed' state ever set in v2?

2014-01-26 Thread Zhi Yan Liu
FSM +1

https://blueprints.launchpad.net/glance/+spec/image-status-global-state-machine

btw, I remember I posted this information on your change as a review
comment; it is a simple state-validation mechanism in the image domain
object and has already been merged.
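A minimal sketch of what such a global image-status state machine might look like (the transition table here is illustrative only; the real rules are defined by the blueprint above):

```python
# Illustrative image-status state machine; the transitions below are an
# assumption for demonstration, not Glance's actual rules.
ALLOWED = {
    "queued":  {"saving", "active", "deleted"},
    "saving":  {"active", "killed", "queued", "deleted"},
    "active":  {"deleted", "pending_delete"},
    "killed":  {"deleted"},
    "deleted": set(),
}

def transition(current, new):
    """Validate and apply a status change, rejecting illegal moves."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError("illegal transition: %s -> %s" % (current, new))
    return new

status = transition("queued", "saving")   # upload starts
status = transition(status, "killed")     # upload fails
```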

thanks,
zhiyan


On Mon, Jan 27, 2014 at 8:37 AM, David Koo kpublicm...@gmail.com wrote:

 Perhaps there is still a bug where an image is getting stuck in 'saving' or
 some other state when a PUT fails?

 Yes, that's precisely the problem.

 Of course, one could argue that if an upload fails the user
 should be able to keep trying until the upload succeeds! But in that
 case the image status should probably be reset to queued rather than
 stay at saving.

 But this makes me a little uneasy because our
 consistency/concurrency handling seems a little weak at the moment (am I
 right?). If we were to have a more complicated state machine then we
 would need much stronger consistency guarantees when there are
 simultaneous uploads in progress (where some fail and some succeed)!

 Is there any work on this (concurrency/consistency) front? I
 remember seeing some patches related to caching of simultaneous
 downloads of an image file where issues related to concurrent update of
 image metadata were addressed but IIRC it was -1ed because it reduced
 concurrency.

 So do we bring back the 'killed' state or should we shoot for a more
 complicated/powerful state machine?

 --
 Koo


 On Sun, Jan 26, 2014 at 06:36:36AM -0800, Mark Washenberger wrote:
 It does not seem very ReSTful--or very usable, for that matter--for a
 resource to be permanently modified when a PUT fails. So I don't think
 we need the 'killed' status. It was purposefully left out of v2 images,
 which is not just a reskin of v1.

 Perhaps there is still a bug where an image is getting stuck in 'saving' or
 some other state when a PUT fails?


 On Sun, Jan 26, 2014 at 5:10 AM, David Koo kpublicm...@gmail.com wrote:

 
  Hi Fei,
 
  Thanks for the confirmation.
 
   I think you're right. The 'killed' status should be set in method
  upload()
   if there is an upload failure, see
  
  https://github.com/openstack/glance/blob/master/glance/common/utils.py#L244
 
  I think you meant:
 
 
  https://github.com/openstack/glance/blob/master/glance/api/v1/upload_utils.py#L244
 
  (the safe_kill() call) right?
 
  --
  Koo
 
 
   -- Original --
   From:  David Kookpublicm...@gmail.com;
   Date:  Jan 26, 2014
   To:  OpenStack Development Mailing
   Listopenstack-dev@lists.openstack.org;
   Subject:  [openstack-dev] [Glance] Is the 'killed' state ever set in v2?
  
   Hi All,
  
   While trying to work on a bug I was trying to simulate some image
   download failures and found that apparently the 'killed' state is never
   set using v2 APIs.
  
   If I understand correctly, a file upload goes to
   api.v2.image_data.ImageDataController.upload and goes all the way to
   store.ImageProxy.set_data which proceeds to write to the backend store.
  
   If the backend store raises an exception it is simply propagated all the
   way up. The notifier re-encodes the exceptions (which is the bug I was
   looking at) but doesn't do anything about the image status.
  
   Nowhere does the image status seem to get set to 'killed'.
  
   Before I log a bug I just wanted to confirm with everybody whether or
   not I've missed out on something.
  
   Thanks.
  
   --
   Koo
 


Re: [openstack-dev] [glance]A question abount the x-image-meta-property parameter.

2014-01-26 Thread Zhi Yan Liu
@flwang, np at all. We can put the input in bug report as a comment or
here anyway,
as far as it can let us/me know the gains/advantages on this fix.

I'd like to know whether there is a backward-compatibility issue, but it seems
it's not a big deal? Keeping the image properties case-sensitive makes
sense; at least it is better if we keep the same property name the client
provided.

zhiyan

On Mon, Jan 27, 2014 at 11:54 AM, Fei Long Wang flw...@cn.ibm.com wrote:

 Zhi Yan,

 I think you're talking about backward-compatibility, right? I would say
 it's possible, just like v2 is not compatible with v1 on this point. That's
 why I would like to see a bug opened to track this; then we can also
 get some comments from the end user/product manager perspective to
 decide whether to mark it as won't fix or go ahead.

 Thanks & Best regards,
 Fei Long Wang (王飞龙)
 -
 Tech Lead of Nitrogen (SME team)
 Cloud Solutions and OpenStack Development
 Tel: 8610-82450513 | T/L: 905-0513
 Email: flw...@cn.ibm.com
 China Systems & Technology Laboratory in Beijing
 -



 From: Zhi Yan Liu lzy@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 01/27/2014 01:02 AM

 Subject: Re: [openstack-dev] [glance]A question abount the
 x-image-meta-property parameter.
 --



 One quick question: do you think there is a potential backward-compatibility
 breaking issue for end users or ISVs if we remove the existing
 standardizing/normalizing logic? I'd like to know the gains/advantages of
 this fix.

 thanks,
 zhiyan

 Sent from my iPad

 On Jan 26, 2014, at 23:49, Fei Long Wang flw...@cn.ibm.com
 wrote:


Hey Jay, thanks for calling my name correctly :)

Wang Hong, feel free to open a bug to track this. And we can get more
info/comments when the patch is reviewed. Thanks.


Thanks & Best regards,
Fei Long Wang (王飞龙)
-
Tech Lead of Nitrogen (SME team)
Cloud Solutions and OpenStack Development
Tel: 8610-82450513 | T/L: 905-0513
Email: flw...@cn.ibm.com
China Systems & Technology Laboratory in Beijing
-



From: Jay Pipes jaypi...@gmail.com
To: Fei Long Wang/China/IBM@IBMCN,
Cc: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org

Date: 01/26/2014 11:20 PM
Subject: Re: [openstack-dev] [glance]A question abount the
x-image-meta-property parameter.

--



On Sun, 2014-01-26 at 18:48 +0800, Fei Long Wang wrote:
 Hi Wang Hong,

 Good catch. I think the issue is caused by lines 244-246, see

 https://github.com/openstack/glance/blob/master/glance/common/utils.py#L244

For the case-matter issue, I think it's a bug. But as for the - to _, I would
like to hear Jay's opinion, since he is the original author. And it's
obviously an intentional change.

Hi Wang Hong and Fei Long,

It's been a long time since I wrote that :) To be honest, I'm not sure
why -- other than just standardizing/normalizing the input, we did
that.
Perhaps it had something to do with vendor-specific properties that had
a prefix that used hyphens, but I'm really not sure... perhaps I am
getting too old :)

Best,
-jay
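For context, the standardizing/normalizing step under discussion does something along these lines (a rough sketch only; the real utils.py code may differ in details):

```python
def normalize_property_name(name):
    # Sketch of the normalization being debated: lowercase the
    # property name and map hyphens to underscores.
    return name.strip().lower().replace("-", "_")

print(normalize_property_name("OS-Type"))     # os_type
print(normalize_property_name("Custom-Key"))  # custom_key
```

This is why a property supplied as `X-Image-Meta-Property-OS-Type` comes back with a different key than the client sent, which is the behaviour the thread is questioning.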


  From: 王宏 w.wangho...@gmail.com
  To: openstack-dev@lists.openstack.org,

 Date: 01/26/2014 05:14 PM
 Subject: [openstack-dev] [glance]A question abount the
 x-image-meta-property parameter.







 Hi all.

 If I use the following command to create an image:
 curl -i -H X-Auth-Token

Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Zhi Yan Liu
On Mon, Dec 23, 2013 at 10:26 PM, Flavio Percoco fla...@redhat.com wrote:
 On 23/12/13 09:00 -0500, Jay Pipes wrote:

 On 12/23/2013 08:48 AM, Mark Washenberger wrote:




 On Mon, Dec 23, 2013 at 4:57 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

On 12/23/2013 05:42 AM, Thierry Carrez wrote:

Flavio Percoco wrote:

On 21/12/13 00:41 -0500, Jay Pipes wrote:

Cinder is for block storage. Images are just a bunch of
blocks, and
all the store drivers do is take a chunked stream of
input blocks and
store them to disk/swift/s3/rbd/toaster and stream those
blocks back
out again.

So, perhaps the most appropriate place for this is in
Cinder-land.


This is an interesting suggestion.

I wouldn't mind putting it there, although I still prefer it
to be
under glance for historical reasons and because Glance team
knows that
code.

How would it work if this lib falls under Block Storage
 program?

Should the glance team be added as core contributors of this
project?
or Just some of them interested in contributing / reviewing
those
patches?

Thanks for the suggestion. I'd like John and Mark to weigh
in too.


Programs are a team of people on a specific mission. If the
stores code
is maintained by a completely separate group (glance devs), then
 it
doesn't belong in the Block Storage program... unless the Cinder
devs
intend to adopt it over the long run (and therefore the
contributors of
the Block Storage program form a happy family rather than two
separate
groups).


Understood. The reason I offered this up as a suggestion is that
currently Cinder uses the Glance REST API to store and retrieve
volume snapshots, and it would be more efficient to just give Cinder
the ability to directly retrieve the blocks from one of the
underlying store drivers (same goes for Nova's use of Glance).
...and, since the glance.store drivers are dealing with blocks, I
thought it made more sense in Cinder.


 True, Cinder and Nova should be talking more directly to the underlying
 stores--however their direct interface should probably be through
 glanceclient. (Glanceclient could evolve to use the glance.store code I
 imagine.)


 Hmm, that is a very interesting suggestion. glanceclient containing the
 store drivers. I like it. Will be a bit weird, though, having the
 glanceclient call the Glance API server to get the storage location details,
 which then calls the glanceclient code to store/retrieve the blocks :)


 Exactly. This is part of the original idea. Allow Glance, nova,
 glanceclient and cinder to interact with the store code.


Actually, I think this Glance store stuff can be packaged as a
dedicated common lib belonging to Glance; maybe we can put it into
glanceclient if we don't want to create a new sub-lib. IMO it would
work just like Cinder's current brick lib, in the short term.

In the long term we can move all that stuff to oslo when it is stable
enough (if we ever see that day ;) ), organized not from a project's
POV but by storage type: oslo.blockstore (or another name) for
block storage backend handling, and oslo.objectstore for object
storage; the upper-layer projects would just delegate all real storage
device operation requests to those libs, like mount/attach,
unmount/detach, read/write...

zhiyan
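The split being proposed could look roughly like the following (an illustrative sketch only; the class and method names here are invented and are not Glance's actual store API):

```python
import abc

class Store(abc.ABC):
    """A minimal store interface that block- and object-storage
    backends could both implement, with upper layers (Glance, Nova,
    Cinder) delegating raw read/write operations to it."""

    @abc.abstractmethod
    def add(self, image_id, data):
        """Write image bytes; return a location URL."""

    @abc.abstractmethod
    def get(self, location):
        """Return an iterator over the stored bytes."""

class FilesystemStore(Store):
    # In-memory toy backend standing in for a real filesystem driver.
    def __init__(self):
        self._blobs = {}

    def add(self, image_id, data):
        self._blobs[image_id] = bytes(data)
        return "file:///images/%s" % image_id

    def get(self, location):
        image_id = location.rsplit("/", 1)[-1]
        return iter([self._blobs[image_id]])

store = FilesystemStore()
loc = store.add("abc", b"\x00" * 4)
print(loc)                       # file:///images/abc
print(b"".join(store.get(loc)))  # b'\x00\x00\x00\x00'
```

A narrow interface like this is what would let the same driver code be shared by Glance, glanceclient, Nova, and Cinder without each project re-implementing the backends.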


 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread Zhi Yan Liu
Hello John,

04:00 or 05:00 UTC works for me too.

On Tue, Dec 17, 2013 at 12:05 PM, John Griffith
john.griff...@solidfire.com wrote:
 On Mon, Dec 16, 2013 at 8:57 PM, 赵钦 chaoc...@gmail.com wrote:
 Hi John,

 I think the current meeting schedule, UTC 16:00, basically works for China
 TZ (12AM), although it is not perfect. If we need to reschedule, I think UTC
 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our lunch
 time.


 On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
 john.griff...@solidfire.com wrote:

 Hi All,

 Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
 some interest in either changing the weekly Cinder meeting time, or
  proposing a second meeting to accommodate folks in other time-zones.

 A large number of folks are already in time-zones that are not
 friendly to our current meeting time.  I'm wondering if there is
 enough of an interest to move the meeting time from 16:00 UTC on
 Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
 willing to look at either moving the meeting for a trial period or
 holding a second meeting to make sure folks in other TZ's had a chance
 to be heard.

 Let me know your thoughts, if there are folks out there that feel
 unable to attend due to TZ conflicts and we can see what we might be
 able to do.

 Thanks,
 John



 Hi Chaochin,

 Thanks for the feedback, I think the alternate time would have to be
 moved up an hour or two anyway (between the lunch hour in your TZ and
 the fact that it just moves the problem of being at midnight to the
 folks in US Eastern TZ).  Also, I think if there is interest that a
 better solution might be to implement something like the Ceilometer
 team does and alternate the time each week.

Agreed, the Glance team does this as well.

zhiyan


 John



Re: [openstack-dev] [Glance] How to handle simple janitorial tasks?

2013-11-28 Thread Zhi Yan Liu
Hi Koo,

On Fri, Nov 29, 2013 at 9:15 AM, David koo david@huawei.com wrote:
 Hi All,

 A quick question about simple janitorial tasks ...

 I noticed that glance.api.v2.image_data.ImageDataController.upload has two
 identical except clauses (circa line 98):
 except exception.StorageFull as e:
 msg = _("Image storage media is full: %s") % e
 LOG.error(msg)
 raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
   request=req)

 except exception.StorageFull as e:
 msg = _("Image storage media is full: %s") % e
 LOG.error(msg)
 raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
   request=req)

 Obviously one of the except clauses can be removed (or am I missing
 something glaringly obvious?) - I shall be happy to do that but should I first
 raise some kind of bug or should I directly commit a fix or should I bring 
 up
 such simple janitorial tasks to the mailing list here on a case-by-case basis
 for discussion first?
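As a self-contained illustration (not the Glance code itself), Python accepts a duplicated except clause without complaint, but the second one is dead code because the first always matches:

```python
class StorageFull(Exception):
    """Stand-in for glance's exception.StorageFull."""

def handle_upload():
    try:
        raise StorageFull("media full")
    except StorageFull as e:
        return "first handler caught: %s" % e
    except StorageFull as e:
        # Dead code: a StorageFull is always caught by the clause above.
        return "second handler caught: %s" % e

print(handle_upload())  # first handler caught: media full
```

So removing one of the identical clauses changes nothing about behaviour; it only deletes unreachable code.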


Eagle-eyed! I think it's a defect. I'd prefer you file a bug report
first and then prepare a patch (and put the bug id into the commit
message).

Actually, reviewers can give you valuable feedback when they look at
your patch, and you can discuss it with the team on IRC if needed. The
ML is a good place but has more delay than IRC; I think simple
questions can be discussed in Gerrit or on IRC directly, but IMO the
ML is better for complicated topics, or when you want feedback across
different projects. And for topics whose changes have a wide-ranging
effect you can use an etherpad or the wiki as well.

zhiyan

 I do realize that the definition of "simple" can vary from person to
 person, and so (ideally) such cases should perhaps be brought to the list
 for discussion first. But I also worry about introducing noise into the list.

 --
 Koo


Re: [openstack-dev] [Glance] How to handle simple janitorial tasks?

2013-11-28 Thread Zhi Yan Liu
https://bugs.launchpad.net/bugs/1256207

On Fri, Nov 29, 2013 at 1:08 PM, Zhi Yan Liu lzy@gmail.com wrote:


Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-11-27 Thread Zhi Yan Liu
Yes, agreed with Sean: making the code compatible with both iso8601
versions is overcomplicated. This is my abandoned try:
https://review.openstack.org/#/c/53186/

zhiyan

On Wed, Nov 27, 2013 at 8:49 PM, Sean Dague s...@dague.net wrote:
 The problem is you can't really support both: iso8601 was dormant for
 years, and the revived version isn't compatible with the old version.
 So supporting both means basically forking iso8601 and maintaining your
 own version of it, monkey patched, in your own tree.
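A lightweight startup check along the lines of Robert's option (4) below, warning when an old iso8601 is installed, could be sketched like this (illustrative only; the version strings are hard-coded for the demo rather than read from the installed package):

```python
import warnings

def check_iso8601(installed, minimum=(0, 1, 8)):
    # Compare a dotted version string against a minimum version tuple
    # and "whinge" (warn) if the installed release is too old.
    parsed = tuple(int(part) for part in installed.split("."))
    if parsed < minimum:
        warnings.warn(
            "iso8601 %s is older than the recommended %s; some date "
            "formats will fail to parse"
            % (installed, ".".join(map(str, minimum))))
        return False
    return True

print(check_iso8601("0.1.4"))  # False (and emits a warning)
print(check_iso8601("0.1.8"))  # True
```

A warning like this gives deployers the breadcrumb ("upgrade iso8601") that the thread says is otherwise missing when date parsing silently fails.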

 On Wed, Nov 27, 2013 at 1:58 AM, Yaguang Tang
 yaguang.t...@canonical.com wrote:
 After updating to iso8601>=0.1.8, the stable/neutron jenkins tests break,
 because stable/glance requires iso8601<=0.1.4; log info:
 https://jenkins02.openstack.org/job/periodic-tempest-devstack-vm-neutron-stable-grizzly/43/console,
 and I have filed a bug to track this:
 https://bugs.launchpad.net/glance/+bug/1255419.


 2013/11/26 Thomas Goirand z...@debian.org

 I'm sorry to restart this topic.

 I don't mind if we upgrade to 0.1.8, but then I will need to have
 patches for Havana to support version 0.1.8. Otherwise, it's going to be
 very difficult on the packaging side: I will need to upload 0.1.8 for
 Icehouse, but then it will break everything else (eg: Havana) that is
 currently in Sid.

 Was there some patches already for that? If so, please point to them so
 that I can cherry-pick them, and carry the patches in the Debian
 packages (it doesn't have to be backported to the Havana branch, I'm
 fine keeping the patches in the packages, if at least they are
 identified).

 Is there a way that I can grep all commits in Gerrit, to see if there
 was such patches committed recently?

 Cheers,

 Thomas Goirand (zigo)

 On 10/24/2013 09:37 PM, Morgan Fainberg wrote:
  It seems like adopting 0.1.8 is the right approach. If it doesn't work
  with other projects, we should work to help those projects get updated
  to work with it.
 
  --Morgan
 
  On Thursday, October 24, 2013, Zhi Yan Liu wrote:
 
  Hi all,
 
  Adopt 0.1.8 as iso8601 minimum version:
  https://review.openstack.org/#/c/53567/
 
  zhiyan
 
  On Thu, Oct 24, 2013 at 4:09 AM, Dolph Mathews
   dolph.math...@gmail.com wrote:
  
   On Wed, Oct 23, 2013 at 2:30 PM, Robert Collins
   robe...@robertcollins.net
   wrote:
  
   On 24 October 2013 07:34, Mark Washenberger
    mark.washenber...@markwash.net wrote:
Hi folks!
   
1) Adopt 0.1.8 as the minimum version in
  openstack-requirements.
2) Do nothing (i.e. let Glance behavior depend on iso8601 in
  this way,
and
just fix the tests so they don't care about these extra
  formats)
3) Make Glance work with the added formats even if 0.1.4 is
  installed.
  
   I think we should do (1) because both (2) will permit surprising,
   nonobvious changes in behaviour and (3) is just nasty
  engineering.
   Alternatively, add a (4) which is (2) with whinge on startup if
  0.1.4
   is installed to make identifying this situation easy.
  
  
   I'm in favor of (1), unless there's a reason why 0.1.8 not viable
  for
   another project or packager, in which case, I've never heard the
  term
   whinge before so there should definitely be some of that.
  
  
  
   The last thing a new / upgraded deployment wants is something
  like
   nova, or a third party API script failing in nonobvious ways with
  no
   breadcrumbs to lead them to 'upgrade iso8601' as an answer.
  
   -Rob
  
   --
    Robert Collins rbtcoll...@hp.com
   Distinguished Technologist
   HP Converged Cloud
  
  
  
  
  
   --
  
   -Dolph
  
  
 
 
 
 
 






 --
 Tang Yaguang

 Canonical Ltd. | www.ubuntu.com | www.canonical.com
 Mobile:  +86 152 1094 6968
 gpg key: 0x187F664F

Re: [openstack-dev] [Glance] Summit Session Summaries

2013-11-16 Thread Zhi Yan Liu
Awesome, Mark! It's really useful for helping us start the next round of
work to organize, prioritize and realize the Icehouse BPs, thanks.

This seems a good place to continue the discussion around my proposal,
so I have added some inline replies to the Enhancing Image Locations
section; you know, 50 mins was a little tight for me :) I have a lot of
points to address.

On Sat, Nov 16, 2013 at 8:29 AM, Yongsheng Gong gong...@unitedstack.com wrote:
 great, thanks


 On Sat, Nov 16, 2013 at 5:10 AM, Mark Washenberger
 mark.washenber...@markwash.net wrote:

 Hi folks,

 My summary notes from the OpenStack Design Summit Glance sessions follow.
 Enjoy, and please help correct any misunderstandings.



 Image State Consistency:
 

 https://etherpad.openstack.org/p/icehouse-summit-image-state-consistency

 In this session, we focused on the problem that snapshots which fail
 after the image is created but before the image data is uploaded
 result in a pending image that will never become active, and the
 only operation nova can do is to delete the image. Thus there is
 not a very good way to communicate the failure to users without
 just leaving a useless image record around.

 A solution was proposed to allow Nova to directly set the status
 of the image, say to killed or some other state.

 A problem with the proposed solution is that we generally have
 kept the status field internally controlled by glance, which
 means there are some modeling and authorization concerns.
 However, it is actually something Nova could do today through
 the hacky mechanism of initiating a PUT with data, but then
 terminating the connection without sending a complete body. So
 the authorization aspects are not really a fundamental concern.

 It was suggested that the solution to this problem
 is to make Nova responsible for reporting these failures rather
 than Glance. In the short term, we could do the following
  - have nova delete the image when snapshot fails (already merged)
  - merge nova patch to report the failure as part of instance
error reporting

 In the longer term, it was seen as desirable for nova to treat
 snapshots as asynchronous tasks and reflect those tasks in the
 api, including the failure/success of those tasks.

 Another long term option that was viewed mostly favorably was
 to add another asynchronous task to glance for vanilla uploads
 so that nova snapshots can avoid creating the image until it
 is fully active.

 Fei Long Wang is going to follow up on what approach makes the
 most sense for Nova and report back for our next steps.



 What to do about v1?
 

 https://etherpad.openstack.org/p/icehouse-summit-images-v1-api

 In this discussion, we hammered out the details for how to drop
 the v1 api and in what timetable.

 Leaning heavily on cinder's experience dropping v1, we came
 up with the following schedule.

 Icehouse:
 - Announce plan to deprecate the V1 API and registry in J and remove
 it in K
 - Announce feature freeze for v1 API immediately
 - Make sure everything in OpenStack is using v2 (cinder, nova, ?)
 - Ensure v2 is being fully covered in tempest tests
 - Ensure there are no gaps in the migration strategy from v1 to v2
 - after the fact, it seems to me we need to produce a migration
 guide as a way to evaluate the presence of such gaps
 - Make v2 the default in glanceclient
 - Turn v2 on by default in glance API

 J:
 - Mark v1 as deprecated
 - Turn v1 off by default in config

 K:
 - Delete v1 api and v1 registry


 A few gotchas were identified, in particular, a concern was raised
 about breaking stable branch testing when we switch the default in
 glanceclient to v2--since latest glanceclient will be used to test
 glance  in say Folsom or Grizzly where the v2 api didn't really
 work at all.

 In addition, it was suggested that we should be very aggressive
 in using deprecation warnings for config options to communicate
 this change as loudly as possible.




 Image Sharing
 -

 https://etherpad.openstack.org/p/icehouse-summit-enhance-v2-image-sharing

 This session focused on the gaps between the current image sharing
 functionality and what is needed to establish an image marketplace.

 One issue was the lack of verification of project ids when sharing an
 image.

 A few other issues were identified:
 - there is no way to share an image with a large number of projects in a
 single api operation
 - membership lists are not currently paged
 - there is no way to share an image with everyone, you must know each
 other project id

 We identified a potential issue with bulk operations and
 verification--namely there is no way to do bulk verification of project ids
 in keystone that we know of, so probably keystone work would be needed to
 have both of these features in place without implying super slow api calls.

 In addition, we spent some time toying with the idea of image catalogs. If
 publishers put 

[openstack-dev] [Glance] multi-hypervisor support

2013-11-13 Thread Zhi Yan Liu
Hello folks,

I proposed a session at this summit [1]; one item in it is to enhance
the Glance multi-location feature to support multi-hypervisor deployment
environments. For example, in this scenario KVM images could potentially
be stored on GlusterFS for KVM compute nodes whereas ESX images could be
stored in NFS storage. This enhancement would potentially require
changing Glance to allow it to save different image content to different
backend storage; currently Glance only allows the multiple locations of
an image id to all contain the same bits. Since the change and its
effect on Glance are large, I have to make sure this is a worthwhile use
case/feature for a real cloud, so I'd like to hear your thoughts on this
idea.

[1] https://etherpad.openstack.org/p/enhancing-glance-image-location-property

thanks,
zhiyan
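The scenario above could be served by per-location metadata along these lines (purely hypothetical keys and URLs; Glance does not define this "hypervisor" convention, and today it assumes every location holds identical bits):

```python
# One image with two locations, each tagged with the hypervisor whose
# compute nodes should consume it.
locations = [
    {"url": "glusterfs://gluster-host/vol/images/abc123",
     "metadata": {"hypervisor": "kvm"}},
    {"url": "nfs://filer/exports/images/abc123",
     "metadata": {"hypervisor": "esx"}},
]

def pick_location(locations, hypervisor):
    # Prefer a location whose metadata matches the node's hypervisor;
    # fall back to the first location otherwise.
    for loc in locations:
        if loc["metadata"].get("hypervisor") == hypervisor:
            return loc["url"]
    return locations[0]["url"]

print(pick_location(locations, "esx"))
# nfs://filer/exports/images/abc123
print(pick_location(locations, "xen"))
# glusterfs://gluster-host/vol/images/abc123
```

The proposal's hard part is not the selection logic but relaxing Glance's invariant that all locations of one image id contain the same content.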



Re: [openstack-dev] [Glance] async workers interface design.

2013-10-17 Thread Zhi Yan Liu
Hi Nikhil,

On Fri, Oct 18, 2013 at 7:55 AM, Nikhil Komawar
nikhil.koma...@rackspace.com wrote:
 Hi all,

 There seem to be varying ideas about the worker interface design for our
 blueprint async-glance-workers. Fei and Zhi already have posted detailed
 comments on https://review.openstack.org/#/c/46117/21 (PS 21). Venkatesh,
 Alex and I have worked on it at varying periods of times and brainstormed
 different ideas as well. I know Zhi is also, very interested in the design
 and making it as configurable as possible.



 Recently, a patch has been uploaded to
 https://review.openstack.org/#/c/46117 giving a direction to the interface
 we'r hoping to achieve. The following are some of the high level pros and
 cons to this approach which we would like to discuss in the upcoming sync
 up(s) on #openstack-glance :-



 Pros:

 Establishes clear distinction between an executor and a script (or script
 package)

 eg. executor can be of type eventlet or rpc and script  (or script package)
 can be of type import/export/clone

 Establishes a good understanding of the contract between Glance and script
 packages
 Gives the deployer the ability to keep their script packages as simple as
 possible or even add complexity to them if needed without marrying such
 script packages with Glance code.
 A standard way of invoking asynchronous tasks via Glance.

 Cons:

 Not all script packages would be supported by Glance.
 There would be no modules which include classes like TaskImportScript,
 TaskExportScript within Glance domain logic- if they are needed for some
 sort of inheritance. However, the script packages themselves can have them
 and be as extensible as it gets.

 (Trying to use the word script packages as I'm trying to help understand
 that they would even be configurable using stevedore and be something
 similar to the example package given here. If using stevedore, they would
 have their own setup file and namespace defined.)



 All the script_packages would be having a standard module say named as
 main - which Glance task executor calls and initiates the execution.
 (main can be analogous to the python script given in the above example or
 can even be a simple script which runs the task)  Glance provides access to
 it's code on contractual basis - viz. the params that would be passed in are
 defined at the executor level. For example - in case of import we seem to
 need db_api, store_api, notifier_api, task_id, req.context. These can be
 used to invoke calls from within Glance modules like sqlalchemy/api or
 notifier/api etc. From this point on, it's responsibility of the script to
 achieve the result it needs. This design seem to fit in the requirements
 which many people have posted in the review comments without adding too many
 configuration options within glance; keeping it maintainable as well.



 Another idea about the interface design which Zhi layed-out is linked below.
 I have browsed through it however, am not completely familiar with the
 design's end goal. Would like to do a differential analysis sometime soon.

 http://paste.openstack.org/show/48644/



 Your opinions are very welcome!



 If you prefer, please try to sync up with us on the #openstack-glance
 channel while we try to fit the design as per the use cases.

 Thanks,

 -Nikhil




Thanks for your efforts on this useful part! I know your idea was
worked out over varying periods of time and brainstorming.

My idea covers at least the following three key parts, which do not
exist currently; it not only makes the task executor as configurable
as possible for end users, but also affects the internal implementation.

1. As a plugin structure, the task executor should be able to tell Glance
what task type(s) it supports. Currently Glance only allows end users to
request a task whose type is in a fixed built-in list; that is an
unacceptable limitation for end users and task executor plugin
developers, IMO. Actually this limitation means the plugin structure
cannot be truly pluggable. (refer:
https://wiki.openstack.org/wiki/Glance-tasks-api#List_Tasks, the type
field). In my draft idea (http://paste.openstack.org/show/48644/),

2. The validation of the input data for task execution should be
handled by the particular task executor, not by a single, common (as
common as possible) place. 1st, a common place limits the particular
executor plugin implementation: we don't know every executor's
validation logic in advance, so this will not work for a real plugin
structure (an executor plugin can be developed after our Glance release
by a vendor/ISV, and the release only contains the built-in validation
logic). 2nd, that built-in validation implementation will be ugly; it
will use a lot of non-OOP-style if-elif-else checking because it needs
to do different validation for different built-in task types
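Points 1 and 2 together could be sketched as below (invented names, not the interface from the review under discussion): each plugin declares its supported task types and owns its own validation, so Glance needs neither a fixed type list nor a common if-elif validation chain.

```python
class TaskExecutor:
    supported_types = ()  # each plugin declares the task types it handles

    def validate(self, task_input):
        raise NotImplementedError

class ImportExecutor(TaskExecutor):
    supported_types = ("import",)

    def validate(self, task_input):
        # Validation logic lives inside the plugin, not in common code.
        return "import_from" in task_input

# Registry built from whatever plugins are installed (stevedore could
# populate this from entry points in a real deployment).
REGISTRY = {t: cls for cls in (ImportExecutor,) for t in cls.supported_types}

def run_task(task_type, task_input):
    cls = REGISTRY.get(task_type)
    if cls is None:
        raise ValueError("unsupported task type: %s" % task_type)
    executor = cls()
    if not executor.validate(task_input):
        raise ValueError("invalid input for %s" % task_type)
    return "%s accepted" % task_type

print(run_task("import", {"import_from": "http://example.com/img.qcow2"}))
# import accepted
```

With this shape, a vendor plugin shipped after a Glance release simply registers a new type; Glance's core never needs to enumerate the types in advance.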

Re: [openstack-dev] Waiting for someone to verify/look into a bug I have opened for Glance Client tool

2013-09-30 Thread Zhi Yan Liu
On Mon, Sep 30, 2013 at 10:50 PM, Iccha Sethi iccha.se...@rackspace.com wrote:
 Maty,

 Can you link the launchpad bug? I have used image-list for glance v2 and it 
 seemed to work fine for me. Maybe you could provide more details?
 Also, maybe we should continue this discussion on the openstack mailing
 list rather than the dev list?

 Thanks,
 Iccha

 -Original Message-
 From: GROSZ, Maty (Maty) maty.gr...@alcatel-lucent.com
 Sent: Monday, September 30, 2013 9:40am
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Waiting for someone to verify/look into a bug I have 
 opened for Glance Client tool

 Hi,

 I have opened a bug for the Glance Client tool some weeks ago (on September
 8th) and I still haven't received any comment on it. Is that right? Am I
 wrong? Anything...
 I will really appreciate if someone from the Glance client tool will have a 
 look on the bug (see below).

 Thanks,

 Maty.

 From launchpad:

 Hi,

 When I am running Glance Client tool against glance service using the 
 following command:

 glance --debug --os-image-api-version 2 image-list

 I get the followng output:

 curl -i -X GET -H 'X-Auth-Token: 
 rO0ABXc4ACAyYzlkYTk4ZDQwZGVmNWU2MDE0MGZjZDI0OThiMzk3MQAGbWdyb3N6AAQzMDQ3AAABQP0JOAs'
  -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' -k 
 https://cb.alucloud.local/al-openstack/v2/schemas/image
 (9, 'Bad file descriptor')

 

 *BUT* when I run the exact same curl command shown in the debug log

 curl -i -X GET -H 'X-Auth-Token: 
 rO0ABXc4ACAyYzlkYTk4ZDQwZGVmNWU2MDE0MGZjZDI0OThiMzk3MQAGbWdyb3N6AAQzMDQ3AAABQP0JOAs'
  -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' -k 
 https://cb.alucloud.local/al-openstack/v2/schemas/image

 I get what I expect, the JSON schema:

 HTTP/1.1 200 OK
 Date: Sun, 08 Sep 2013 09:51:04 GMT
 Content-Type: application/json
 Content-Length: 4958
 Connection: close

 {"type": "object",
  "properties": {
    "id": {"type": "string"},
    "name": {"type": "string"},
    "visibility": {"type": "string", "enum": ["public", "private"]},
    "file": {"type": "string"},
    "status": {"type": "string"},
    "minDisk": {"type": "integer"},
    "minRam": {"type": "integer"},
    "progress": {"type": "integer"},
    "userId": {"type": "string"},
    "metadata": {"type": "object"},
    "self": {"type": "string"},
    "size": {"type": "number"},
    "schema": {"type": "string"},
    "checksum": {"type": "string"},
    "customerId": {"type": "string"},
    "updated_at": {"type": "string"},
    "created_at": {"type": "string"},
    "container_format": {"type": "string", "enum": ["ovf", "bare", "aki", "ari", "ami"]},
    "disk_format": {"type": "string", "enum": ["raw", "vhd", "vmdk", "vdi", "iso", "qcow2", "aki", "ari", "ami"]}},
  "name": "image"}
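(As an aside: once the schema document above comes back intact, a client can sanity-check an image record against it. A minimal illustrative checker, not what python-glanceclient does internally, might look like this:)

```python
# Illustrative only: type/enum-check an image dict against a JSON-schema-like
# document. A real client should use a full validator (e.g. jsonschema).

_TYPE_MAP = {
    'string': str,
    'integer': int,
    'number': (int, float),
    'object': dict,
}


def check_image(image, schema):
    """Return a list of human-readable validation errors (empty if clean)."""
    errors = []
    for key, value in image.items():
        prop = schema.get('properties', {}).get(key)
        if prop is None:
            errors.append('unknown property: %s' % key)
            continue
        expected = _TYPE_MAP.get(prop.get('type'))
        if expected is not None and not isinstance(value, expected):
            errors.append('%s: expected %s' % (key, prop['type']))
        enum = prop.get('enum')
        if enum is not None and value not in enum:
            errors.append('%s: %r not in %r' % (key, value, enum))
    return errors


# A trimmed-down version of the schema quoted above, for demonstration.
IMAGE_SCHEMA = {'properties': {
    'id': {'type': 'string'},
    'size': {'type': 'number'},
    'disk_format': {'type': 'string',
                    'enum': ['raw', 'vhd', 'vmdk', 'vdi', 'iso',
                             'qcow2', 'aki', 'ari', 'ami']},
}}
```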



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hello Maty,

Thanks for reporting it, but as Iccha mentioned, I can't reproduce it
either. You can file a bug against Glance on launchpad.net and provide
what you have.

zhiyan



Re: [openstack-dev] [Glance] Proposed S3 multi-part upload functionality

2013-09-29 Thread Zhi Yan Liu
On Mon, Sep 30, 2013 at 12:22 PM, Masashi Ozawa moz...@cloudian.com wrote:
 Hi everyone,

 We have already created the blueprint for this feature (see below). If
 Glance can use the S3 Multipart Upload REST API for large objects in a
 future release, as Amazon recommends for large uploads, I believe it will
 be very useful for customers and a good thing for AWS and other
 S3-compatible servers.

 https://blueprints.launchpad.net/glance/+spec/s3-multi-part-upload

 However, it is not on the list for a future release, so we implemented
 this feature ourselves based on Grizzly for internal testing purposes, and
 it works so far.

 We wrote up the implementation strategy below; please review it so that
 we can hopefully have this feature in a future OpenStack release.

 - Proposal S3 multi-part upload functionality
 https://etherpad.openstack.org/s3-multi-part-upload

 thanks,
 - Ozawa
 --
 Cloudian KK - http://cloudian.jp/
 Masashi Ozawa moz...@cloudian.com



Hello Masashi Ozawa,

I consider this a worthwhile enhancement for the S3 store driver,
requiring only a minor change and a boto requirement update. If you like,
you can prepare your patch based on the trunk code and submit it to Gerrit
so the team can review it.
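As a rough sketch (illustrative names, not the actual patch), the driver-side change amounts to streaming the image in parts through boto's multipart API:

```python
# Sketch only: how the S3 store driver might use boto's multipart upload for
# large images (boto 2.x provides initiate_multipart_upload /
# upload_part_from_file / complete_upload). Bucket and key names here are
# illustrative, not the actual Glance patch.

import io


def iter_chunks(fileobj, chunk_size):
    """Yield successive chunks of a file-like object; pure helper."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk


def multipart_upload(bucket, key_name, fileobj, chunk_size=5 * 1024 * 1024):
    """Upload fileobj to S3 in parts (S3 requires >= 5 MiB per part,
    except the last)."""
    mp = bucket.initiate_multipart_upload(key_name)
    try:
        for part_num, chunk in enumerate(iter_chunks(fileobj, chunk_size), 1):
            mp.upload_part_from_file(io.BytesIO(chunk), part_num)
        mp.complete_upload()
    except Exception:
        mp.cancel_upload()
        raise
```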
I'm also wondering whether resuming broken transfers (both image
uploading and downloading) would be a useful common
requirement/enhancement for Glance. Any thoughts?
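To illustrate the resume idea, here is a small sketch with the transport abstracted away (for HTTP, the open_at callback would issue a GET with a 'Range: bytes=&lt;offset&gt;-' header); this is a concept illustration, not Glance code:

```python
# Concept sketch of resume-broken-transfer: after an interruption, continue
# from the bytes already written instead of restarting from zero.

import io


def resume_copy(open_at, dest, total_size, block_size=64 * 1024):
    """Copy into dest, restarting from dest's current length after a break.

    open_at(offset) must return a file-like object positioned at 'offset'
    (for HTTP this would be a ranged GET); abstracting it keeps the retry
    logic itself testable.
    """
    while True:
        offset = dest.seek(0, io.SEEK_END)
        if offset >= total_size:
            return offset
        src = open_at(offset)
        try:
            while True:
                block = src.read(block_size)
                if not block:
                    break  # source ended early or finished; outer loop decides
                dest.write(block)
        except IOError:
            continue  # broken transfer: loop re-opens at the new offset
```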

thanks,
zhiyan



Re: [openstack-dev] FFE Request: Make RBD Usable for Ephemeral Storage

2013-09-17 Thread Zhi Yan Liu
On Wed, Sep 18, 2013 at 6:16 AM, Mike Perez thin...@gmail.com wrote:
 Folks,

 Currently in Havana development, RBD as ephemeral storage has serious
 stability and performance issues that make the Ceph cluster a bottleneck
 when using an image as a source.

 Nova currently has to communicate with the external service Glance, which
 in turn talks to the separate Ceph storage backend to fetch path
 information. The entire image is then downloaded to local disk, and then
 imported from local disk into RBD. This creates a stability concern,
 especially when instances are created from large images, because of data
 pulling and pushing that is unnecessary for a backend like RBD.

 Because we have to do an import from local disk to RBD, performance can be
 even slower than with a normal backend filesystem, since the import is
 single-threaded.

 This can be eliminated by instead having Nova's RBD image backend utility
 communicate directly with the Ceph backend to do a copy-on-write of the
 image.
 Not only does this greatly improve stability, but performance is drastically
 improved by not having to do a full copy of the image. A lot of the code to
 make this happen came from the RBD Cinder driver which has been stable and
 merged for quite a while.
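A sketch of what this direct path relies on: Nova parses the Glance 'rbd://' image location and asks Ceph for a copy-on-write clone rather than downloading the bytes. The URL layout below matches what the Glance RBD store exposes; the librbd calls in the comment are illustrative and need a real Ceph cluster (and the 'rados'/'rbd' Python bindings) to run:

```python
# Parse a Glance RBD image location of the form
# rbd://<fsid>/<pool>/<image>/<snapshot> into its components, the first
# step before a copy-on-write clone can replace a full download.

import urllib.parse


def parse_rbd_location(url):
    """Split rbd://fsid/pool/image/snap into a 4-tuple."""
    parsed = urllib.parse.urlparse(url)
    if parsed.scheme != 'rbd':
        raise ValueError('not an rbd location: %s' % url)
    pieces = [parsed.netloc] + parsed.path.lstrip('/').split('/')
    if len(pieces) != 4 or not all(pieces):
        raise ValueError('expected rbd://fsid/pool/image/snap, got %s' % url)
    return tuple(urllib.parse.unquote(p) for p in pieces)


# With the location parsed, the clone itself would look roughly like:
#
#   import rados, rbd
#   fsid, pool, image, snap = parse_rbd_location(url)
#   with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
#       with cluster.open_ioctx(pool) as src, \
#            cluster.open_ioctx('vms') as dst:
#           rbd.RBD().clone(src, image, snap, dst, 'instance-disk')
#
# No image data is copied; the new volume shares blocks with the snapshot.
```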

 Bug: https://code.launchpad.net/bugs/1226351
 Patch: https://review.openstack.org/#/c/46879/1

 Thanks,
 Mike Perez

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Hi Mike Perez, folks,

I absolutely agree that using a zero-copy approach, such as CoW, to
prepare the template image is a good idea. But after checking your patch I
have some concerns about the current implementation.

Actually I had already prepared some dedicated BPs [1][2] and a patch [3]
to cover these requirements and the problems around zero-copy (aka your
'direct_fetch') image preparation. It is implemented as a framework that
allows other people to write such plug-ins for a particular image storage
backend/location. So I'd very much like to invite you (and Josh Durgin) to
take a look at them; I believe (and would welcome) that your RBD image
handling work in #46879 could be implemented as an RBDImageHandler
plug-in under my framework.

I consider the above implementation better, since the framework code in
#33409 can handle most of the common logic, such as plug-in loading,
selecting an image handler based on image location, supporting multiple
image locations, etc. Each particular image handler can then implement its
special methods easily, without rebuilding the existing (and tested)
parts.

Of course, as new handlers are produced we will probably need to add more
interfaces and pass more context data through the ImageHandler base class;
we can talk about this on IRC.
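A condensed sketch of the selection idea (hypothetical names; the real framework is in #33409): plug-ins declare which location schemes they support, and the framework picks a handler per image location:

```python
# Hypothetical condensation of the image-handler framework: plug-ins declare
# supported location schemes; common code selects one per image location.

class ImageHandler:
    """Base class; subclasses are the plug-ins."""
    schemes = ()

    @classmethod
    def supports(cls, location):
        return location.split('://', 1)[0] in cls.schemes


class RBDImageHandler(ImageHandler):
    schemes = ('rbd',)


class DownloadImageHandler(ImageHandler):
    schemes = ('http', 'https', 'file')


# In the real framework this list would come from plug-in loading.
_HANDLERS = [RBDImageHandler, DownloadImageHandler]


def select_handler(locations):
    """Pick the first handler claiming any of the image's locations."""
    for location in locations:
        for handler in _HANDLERS:
            if handler.supports(location):
                return handler(), location
    raise LookupError('no handler for %s' % (locations,))
```

The common code never special-cases a backend; adding a new backend is just another subclass in the plug-in list.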

[1] https://blueprints.launchpad.net/nova/+spec/image-multiple-location
[2] 
https://blueprints.launchpad.net/nova/+spec/effective-template-base-image-preparing
[3] https://review.openstack.org/#/c/33409/

thanks,
zhiyan
