On Fri, Feb 7, 2014 at 2:59 PM, Robert Collins
<[email protected]> wrote:
> On 7 February 2014 23:02, Taurus Cheung <[email protected]> wrote:
>> Hi,
>>
>> I am working on deploying images to bare-metal machines using nova
>> bare-metal. In the existing implementation in
>> nova-baremetal-deploy-helper.py, there is only one worker writing images
>> to bare-metal machines, so if there are a number of bare-metal instances
>> to deploy, they have to queue up and wait to be served by that single
>> worker. Would a future implementation be improved to support multiple
>> workers?
>
> I think we'd all like to do multiple deploys at once, but there are
> significant thrashing risks in just running concurrent dd's - for
> instance, datacentre networks are faster than single disks, so
> cloud-scale architectures have (paradoxically to most folk :) ) more
> network bandwidth than persistent IO bandwidth. In fact this patch
> (https://review.openstack.org/#/c/71219/) reduces a source of
> thrashing (based on testing on our prod hardware) to improve overall
> performance.

I agree here. I saw exactly that behavior when I experimented with
adding multiple workers to nova-baremetal-deploy-helper: there was
almost no overall performance improvement, and starting 2 workers just
resulted in each individual deploy taking roughly twice as long.
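
To make the contention easy to see outside of nova, here is a rough
illustrative sketch (not the deploy-helper code itself; the paths and
sizes are made up) that times N concurrent sequential writers hitting
the same disk. On a single spindle the aggregate write throughput stays
roughly flat, so each writer's own time grows close to linearly with N:

#!/usr/bin/env python
# Illustrative only: time N concurrent sequential writers on one disk.
# Point the target paths at the disk you actually care about (/tmp may
# be tmpfs on some systems).
import os
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024      # 4 MiB writes, vaguely dd-like
TOTAL = 512 * 1024 * 1024    # 512 MiB per writer


def write_image(path):
    start = time.time()
    buf = b'\0' * CHUNK
    with open(path, 'wb') as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hits the disk
    return time.time() - start


def run(workers):
    paths = ['/tmp/fake_image_%d' % i for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        times = list(pool.map(write_image, paths))
    for p in paths:
        os.unlink(p)
    print('%d worker(s): %s' % (workers, ['%.1fs' % t for t in times]))


if __name__ == '__main__':
    run(1)
    run(2)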

>
> Longer term with Ironic I can see multicast/bittorrent/that sort of
> thing being used to achieve efficient concurrency when deploying many
> identical images.

-- 
-- James Slagle
--
