[openstack-dev] [nova][baremetal] Scheduling baremetal deployment on different hw model
Hi,

I am working on deploying images to bare-metal machines using nova bare-metal. My datacenter has two types of hw models, IBM and Dell. In the existing implementation, if I want to deploy an image on a specific hw model, I need to set up two baremetal compute nodes, one as the container for the IBM machines and the other for the Dell machines. Then baremetal registers the machines to their corresponding compute node. Finally, I use a nova flavor and a heterogeneous group to map to the specified compute node, so I can explicitly choose the hw model to deploy, as illustrated in the following flow:

Flavor_IBM  - (mapping by flavor extra_spec) - Heterogeneous_Group_IBM  - Compute_Node_IBM  - IBM machines
Flavor_Dell - (mapping by flavor extra_spec) - Heterogeneous_Group_Dell - Compute_Node_Dell - Dell machines

The existing approach has a drawback: I need to set up one baremetal compute node for each hw model. If I have 10 hw models in my datacenter, I need to set up 10 baremetal compute nodes, which is a high overhead.

Is there any update in ironic to tackle this? I think one possible enhancement is adding a field like hw_model to the nova.bm_nodes DB and passing it to the nova scheduler, so machines of different hw models can sit under the same baremetal compute node and heterogeneous group, using different extra_specs in the nova flavor to specify the hw_model. Is this a good idea?

Regards,
Taurus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
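The hw_model idea in the post amounts to capability matching: compare a flavor's extra_specs against per-node properties so machines of several models can live under one compute node. A minimal sketch of that matching logic, assuming illustrative names (hw_model, the node dicts) rather than nova's actual scheduler code:

```python
# Hedged sketch, not the real nova scheduler: pick the nodes whose
# properties satisfy every key/value in the flavor's extra_specs.

def match_nodes(extra_specs, nodes):
    """Return the nodes whose properties satisfy every extra_spec."""
    matched = []
    for node in nodes:
        props = node.get("properties", {})
        if all(props.get(key) == value for key, value in extra_specs.items()):
            matched.append(node)
    return matched

nodes = [
    {"name": "bm-1", "properties": {"hw_model": "ibm"}},
    {"name": "bm-2", "properties": {"hw_model": "dell"}},
    {"name": "bm-3", "properties": {"hw_model": "ibm"}},
]

# A flavor targeting IBM machines would carry {"hw_model": "ibm"}.
ibm_nodes = match_nodes({"hw_model": "ibm"}, nodes)
```

With this shape, adding an eleventh hw model means adding a property value, not an eleventh compute node.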
Re: [openstack-dev] [nova][baremetal] Scheduling baremetal deployment on different hw model
Hi Rob,

Thanks for your reply. As far as I know, Ironic has not yet graduated; it is still under rapid development, as Chris Krelle replied in Feb-2014: http://lists.openstack.org/pipermail/openstack-dev/2014-February/026647.html . Or should I change to Ironic now?

No - I am not grouping IBM and Dell machines together. They are in the same datacenter and I want to provision them under a single OpenStack controller, which I think is a very common use case. What I want to do is to choose a specific hw model (IBM or Dell) when I deploy an image in baremetal style.

Regards,
Taurus

-----Original Message-----
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: Monday, June 30, 2014 12:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Taurus Cheung
Subject: Re: [openstack-dev] [nova][baremetal] Scheduling baremetal deployment on different hw model

Firstly, use Ironic. Nova BM is deprecated.

Secondly, yes, you can use extra-specs, but you only need to do that if your machines are identical in CPU, disk and memory - which the scheduler will look at anyway. Why do you need to group IBM and Dell machines together?

-Rob

On 30 June 2014 16:26, Taurus Cheung taurus.che...@harmonicinc.com wrote:
> Hi, I am working on deploying images to bare-metal machines using nova
> bare-metal. [...]

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
[openstack-dev] [nova][baremetal] Deprovision of bare-metal nodes
Hi,

I am working on deploying images to bare-metal machines using nova bare-metal. After deployment, I would like to deprovision (disconnect) the bare-metal nodes from the OpenStack controller/compute, so these bare-metal nodes can run standalone.

A typical scenario is that I have a workstation with an OpenStack controller and nova baremetal compute installed. During bare-metal deployment, I plug the workstation into the network. After deployment, I disconnect it from the network.

Is this use case typical and possible, and is it free of side effects?

Regards,
Taurus
[openstack-dev] [nova][baremetal] Status of nova baremetal and ironic
I am working on deploying images to bare-metal machines using nova bare-metal, and so far it is working well. I know Ironic is under rapid development. Could someone tell me the current status of Ironic and a suitable time to shift from nova baremetal to Ironic?

Regards,
Taurus
[openstack-dev] [nova][baremetal] Delete disk image in nova after deploy image to bare-metal machine
Hi,

I am working on deploying images to bare-metal machines using nova bare-metal. In the current architecture, disk images are kept at /var/lib/nova/instances after the image is written (by dd) to the hard disk of the bare-metal machines, but at that point these disk image files are no longer needed.

Does nova bare-metal support deleting the disk image files at /var/lib/nova/instances after the image is written to the bare-metal machines, in order to free up the storage?

Regards,
Taurus
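To my knowledge nova bare-metal does not delete these cached images itself, so operators sometimes reclaim the space with a periodic cleanup job. A hedged sketch of such a job, where the directory path and the age threshold are assumptions, not anything nova ships:

```python
# Illustrative cleanup sketch, NOT a nova feature: prune cached disk
# image files that have not been touched for a while. Run it against
# something like /var/lib/nova/instances only after verifying the
# files are really unused on your deployment.
import os
import time

def prune_old_images(cache_dir, max_age_seconds):
    """Remove regular files in cache_dir untouched for max_age_seconds.

    Returns the names of the files that were removed.
    """
    removed = []
    now = time.time()
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age_seconds:
            os.remove(path)
            removed.append(name)
    return removed
```

A cron entry invoking this every few hours would keep the storage bounded, at the cost of re-downloading an image if it is needed again.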
[openstack-dev] [nova][baremetal] Support configurable inject items in nova Bare-metal
Hi,

I am working on deploying images to bare-metal machines using nova bare-metal. In the current design, some files like the hostname, the network config file and meta.json are injected into the image before it is written to the bare-metal machines. Can we control which items are injected into the image?

Regards,
Taurus
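The configurable-injection idea could look like a whitelist the operator sets. A minimal sketch, where the item names and the selection helper are illustrative assumptions (nova bare-metal does not expose such a knob as far as I know):

```python
# Hedged sketch of operator-configurable injection: only the items the
# operator enables are injected before the image is written. All names
# here are illustrative, not nova configuration options.

DEFAULT_INJECT_ITEMS = ["hostname", "network_config", "meta.json"]

def select_inject_items(enabled, available=DEFAULT_INJECT_ITEMS):
    """Keep only the injection items the operator enabled, in order."""
    unknown = set(enabled) - set(available)
    if unknown:
        raise ValueError("unknown inject items: %s" % sorted(unknown))
    return [item for item in available if item in enabled]
```

For example, an operator who bakes networking into the image would enable only "hostname" and "meta.json" and skip the network config injection.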
[openstack-dev] [nova][baremetal] Support multiple image write workers in nova bare-metal
Hi,

I am working on deploying images to bare-metal machines using nova bare-metal. In the existing implementation in nova-baremetal-deploy-helper.py, there is only one worker that writes images to bare-metal machines. If there are a number of bare-metal instances to deploy, they need to queue up and wait to be served by the single worker. Would a future implementation be improved to support multiple workers?

Regards,
Taurus
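The multi-worker idea amounts to replacing the single image-write loop with a worker pool so several nodes are imaged concurrently. A sketch under the assumption that write_image stands in for the real dd-based deploy step:

```python
# Hedged sketch of the multi-worker proposal, not the actual
# nova-baremetal-deploy-helper.py code: image up to max_workers nodes
# in parallel instead of serving them one at a time.
from concurrent.futures import ThreadPoolExecutor

def deploy_all(nodes, write_image, max_workers=4):
    """Write images to every node using up to max_workers in parallel.

    Returns a dict mapping each node to its write_image result;
    fut.result() re-raises any per-node failure instead of hiding it.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(write_image, node): node for node in nodes}
        return {node: fut.result() for fut, node in futures.items()}
```

Threads fit here because an image write is I/O-bound (network transfer plus dd), so the GIL is not the bottleneck.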
[openstack-dev] [nova][baremetal] Partition layout of image used in nova bare-metal
Hi,

I am working on deploying images to bare-metal machines using nova bare-metal. In the current design, nova bare-metal first writes a partition layout with a root partition and a swap partition, then writes the image into the root partition. The logic seems to assume there is no partition table inside the image. Without code changes, does nova bare-metal support writing an image with a partition table embedded in it?

Regards,
Taurus
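A deploy helper that wanted to branch between "write a partition layout" and "the image is already partitioned" could start from a signature check. A hedged sketch: an MBR-partitioned image carries the 0x55AA boot signature at offset 510, though this is only a heuristic (GPT disks and some volume boot records carry it too), and nothing here is existing nova behavior:

```python
# Illustrative heuristic, not nova code: report whether an image file
# begins with a 512-byte sector ending in the 0x55AA MBR signature,
# i.e. whether it likely already embeds its own partition table.

def has_mbr_signature(image_path):
    with open(image_path, "rb") as f:
        sector = f.read(512)
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"
```

A deploy path could then skip creating the root/swap layout when the check succeeds and dd the image to the whole disk instead.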