On 2018-05-15 17:31:07 +0200 (+0200), Bogdan Dobrelya wrote:
> * upload into a swift container, with an automatic expiration set, the
> de-duplicated and compressed tarball created with something like:
>   # docker save $(docker images -q) | gzip -1 > all.tar.gz
> (I expect it will be something like a 2G file)
> * something similar for DLRN repos, probably; I'm not an expert on this part.
> Then those stored artifacts would be picked up by the next step in the
> graph, deploying undercloud and overcloud in a single step, like:
> * fetch the swift containers with repos and container images
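
To make the proposed upload step concrete, here is a rough sketch. The container name, the seven-day expiry, and the checksum sidecar file are illustrative assumptions, not an agreed interface; `swift upload --header "X-Delete-After:..."` is the standard way to get automatic expiration. It defaults to stub commands so the pipeline can be dry-run without a CI node:

```shell
#!/bin/sh
# Sketch of the proposed artifact-upload step. CONTAINER and EXPIRY are
# illustrative. Set DRY_RUN=0 to invoke the real docker/swift commands.
set -eu

ARTIFACT=all.tar.gz
CONTAINER=ci-artifacts              # hypothetical Swift container name
EXPIRY=604800                       # auto-delete after 7 days (seconds)

if [ "${DRY_RUN:-1}" = 1 ]; then
    # Stand-ins so the pipeline itself can be exercised anywhere.
    docker() { echo "docker $*"; }
    swift()  { echo "swift $*"; }
fi

# De-duplicate and compress all local images into a single tarball.
docker save $(docker images -q) | gzip -1 > "$ARTIFACT"

# A checksum lets the consuming job verify the transfer survived intact.
sha256sum "$ARTIFACT" > "$ARTIFACT.sha256"

# Upload with automatic expiration so stale artifacts clean themselves up.
swift upload "$CONTAINER" "$ARTIFACT" "$ARTIFACT.sha256" \
    --header "X-Delete-After:$EXPIRY"
```

The consuming job would fetch both objects and run `sha256sum --check` before trusting the tarball.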

I do worry a little about network fragility here, as well as
extremely variable performance. Randomly-selected job nodes could be
shuffling those files halfway across the globe, so either upload or
download (or both) will experience high round-trip latency as well
as potentially constrained throughput, packet loss, and
disconnects/interruptions... all the things we deal with when
trying to rely on the Internet, except magnified by the quantity of
data being transferred.

Ultimately still worth trying, I think, but just keep in mind it may
introduce more issues than it solves.
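
The usual mitigation for the disconnects and interruptions mentioned above is to wrap the fetch in retries with backoff and verify a checksum afterward. A minimal sketch, where the `swift download` invocation and file names are illustrative rather than a proposed interface:

```shell
#!/bin/sh
# Minimal retry-with-exponential-backoff wrapper for the fetch step.
set -eu

retry() {
    # Usage: retry MAX_TRIES CMD [ARGS...]
    # Re-runs CMD until it succeeds, sleeping 1s, 2s, 4s, ... between tries.
    max=$1; shift
    attempt=1
    delay=1
    until "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "retry: giving up after $attempt attempts: $*" >&2
            return 1
        fi
        sleep "$delay"
        attempt=$((attempt + 1))
        delay=$((delay * 2))
    done
}

# Hypothetical fetch step: pull the artifacts, then verify the transfer.
# retry 5 swift download ci-artifacts all.tar.gz all.tar.gz.sha256
# sha256sum --check all.tar.gz.sha256
```

This doesn't fix constrained throughput, but it turns transient packet loss and disconnects into a slower job rather than a failed one.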
Jeremy Stanley


OpenStack Development Mailing List (not for usage questions)
