I also recommend approach #2.

For approach #1:
1) You have to maintain a state machine if you want to synchronize the image to
three or more backends.
2) Synchronizing to three or more backends will not always succeed; unless you
make it transactional, it is hard to handle a broken synchronization and to
re-sync.
3) As more backends are added, the transaction will fail more and more often to
synchronize the image to all destinations.

For approach #2:
The image status needs to be enhanced to reflect the image's availability at
each location. A consumer of the Glance API can then check whether the image is
ready at a specific location; if it is not, it can either trigger a
synchronization immediately or report a failure.
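A minimal sketch of that per-location check, assuming a hypothetical
`locations_status` list on the image record (today a Glance image carries a
single `status` field, not one per location):

```python
# Sketch of the per-location availability check described above.
# The 'locations_status' structure is hypothetical, as is the
# 'trigger_sync' hook; neither exists in Glance today.

def is_ready(image, location_url):
    """True if the image bits are reported available at location_url."""
    for loc in image.get("locations_status", []):
        if loc["url"] == location_url and loc["status"] == "active":
            return True
    return False


def check_location(image, location_url, trigger_sync):
    """Check one location; if not ready, request a sync and report it."""
    if is_ready(image, location_url):
        return "ready"
    trigger_sync(image["id"], location_url)  # ask for an immediate re-sync
    return "sync-triggered"
```

The caller decides whether "sync-triggered" is treated as a retryable state or
surfaced as a failure.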

If the image becomes available only after all backends have been synchronized,
the end-user experience is good, but you have to wait until every location is
ready, and that is not easy, considering broken synchronizations and backends
leaving and joining. The more backends there are, the harder it gets.

Another recommendation is to trigger the synchronization on demand: when the
first VM using an image is booted, the image is synchronized to the proper
backend for that VM. The drawback of this approach is that the first VM boot
takes longer than usual, but the process is much more stable.
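The on-demand flow could look roughly like this; `backend_for_host` and
`sync_image_to` are placeholders for deployment-specific logic, not real Glance
or Nova APIs:

```python
# Hedged sketch of on-demand replication at first VM boot.
# 'backend_for_host' and 'sync_image_to' are hypothetical hooks.

def ensure_local_copy(image, host, backend_for_host, sync_image_to):
    """Before booting a VM on 'host', make sure the image bits exist
    on the backend serving that host; sync them if they are missing."""
    backend = backend_for_host(host)
    if backend in image["synced_backends"]:
        return False                        # already present, fast boot
    sync_image_to(image["id"], backend)     # blocking: first boot is slower
    image["synced_backends"].add(backend)
    return True
```

Only the first boot per backend pays the synchronization cost; subsequent boots
on hosts served by that backend proceed at normal speed.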

Best Regards
Chaoyi Huang ( Joe Huang )

-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Wednesday, January 14, 2015 10:25 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Replication on image create

On 01/13/2015 04:55 PM, Boden Russell wrote:
> Looking for some feedback from the glance dev team on a potential BP…
>
> The use case I’m trying to solve is —>
>
> "As an admin, I want my glance image bits replicated to multiple store 
> locations (of the same store type) during a glance create operation."
>
> For example, I have 3 HTTP based backend locations I want to store my 
> glance image bits on. When I create / upload a new glance image, I 
> want those bits put onto all 3 HTTP based locations and have the 
> image's 'locations' metadata properly reflect all stored locations.
>
> There are obviously multiple approaches to getting this done.
>
> [1] Allow per glance store drivers the ability to manage config and 
> "connectivity" to multiple backends. For example in the glance-api.conf:
>
> [DEFAULT]
> store_backends = http1,http2,http3
> ...
> [http1]
> # http 1 backend props
> ...
> [http2]
> # http 2 backend props
> ...
> [http3]
> # http 3 backend props
> ...
>
> And then in the HTTP store driver use a configuration approach like 
> cinder multi-backend does (e.g.:
> https://github.com/openstack/cinder/blob/2f09c3031ef2d2db598ec4c56f6127e33d29b2cc/cinder/volume/configuration.py#L52).
> Here, the store driver handles all the logic w/r/t pushing the image 
> bits to all backends, etc..

The problem with this solution is that the HTTP Glance storage backend is 
readonly. You cannot upload an image to Glance using the http backend.

> [2] A separate (3rd party) "process" which handles the image 
> replication and location metadata updates... For example listens for 
> the glance notification on create and then takes the steps necessary 
> to replicate the bits elsewhere and update the image metadata (locations).

This is the solution that I would recommend. Frankly, this kind of replication 
should be an async out-of-band process similar to bittorrent. Just have 
bittorrent or rsync or whatever replicate the image bits to a set of target 
locations and then call the
glanceclient.v2.client.images.add_location() method:

https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L211

to add the URI of the replicated image bits.
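That flow might be sketched like this. It is a sketch only: the replication
step is a placeholder callable, error handling is omitted, and the client is
assumed to expose `add_location(image_id, url, metadata)` as in
glanceclient.v2.images:

```python
# Sketch of the out-of-band flow: replicate the bits with an external
# tool (rsync, bittorrent, ...), then register each new URI on the
# image. 'replicate' stands in for the external copy step.

def replicate_and_register(images_client, image_id, targets, replicate):
    """Copy the image bits to each target, then record the new URIs."""
    for uri in targets:
        replicate(image_id, uri)                      # out-of-band copy
        images_client.add_location(image_id, uri, metadata={})
    return list(targets)
```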

> [3] etc...
>
>
> In a prototype I implemented #1 which can be done with no impact 
> outside of the store driver code itself.

I'm not entirely sure how you did that considering the http storage backend is 
readonly. Are you saying you implemented the add() method for the 
glance_store._drivers.http.Store class?

Best,
-jay

> I prefer #1 over #2 given approach #2
> may need to pull the image bits back down from the initial location in
> order to push for replication; additional processing.
>
> Is the dev team averse to option #1 for the store drivers who wish
> to implement it and / or what are the other (preferred) options here?
>
>
> Thank you,
> - boden
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

