** Also affects: curtin (Ubuntu)
   Importance: Undecided
       Status: New

** Also affects: curtin (Ubuntu Xenial)
   Importance: Undecided
       Status: New

** Changed in: curtin (Ubuntu)
       Status: New => Fix Released

** Changed in: curtin (Ubuntu Xenial)
       Status: New => Fix Committed

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1618429

Title:
  Curtin doesn't clean up previous MD configuration

Status in curtin:
  Fix Committed
Status in curtin package in Ubuntu:
  Fix Released
Status in curtin source package in Xenial:
  Fix Committed

Bug description:
  [Impact]

   * On some machines which have existing mdadm RAID metadata on one or
     more of their disks, curtin failed to remove this existing metadata
     when instructed to do so, and installation on such machines failed.

     Curtin has been updated to ignore mdadm assemble errors specifically
     in the case where curtin has been instructed to wipe the designated
     device. In the case above, curtin encountered an unexpected return
     code from the mdadm assemble command; this is not relevant, since
     curtin is going to wipe the underlying device for re-installation.
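
     The tolerant-assemble behaviour described above can be sketched as
     follows. This is a minimal illustration, not curtin's actual code;
     the function name and the print-based logging are assumptions:

     ```python
     import subprocess

     def assemble_ignoring_errors(cmd=("mdadm", "--assemble", "--scan")):
         """Attempt to assemble arrays, tolerating a nonzero exit code.

         When the underlying devices are going to be wiped anyway, a
         failed assemble (e.g. exit code 3 for a partial array) need
         not be fatal.
         """
         proc = subprocess.run(cmd, capture_output=True, text=True)
         if proc.returncode != 0:
             # Log and continue rather than raising: the array members
             # will be wiped for re-installation regardless of whether
             # the assemble succeeded.
             print("ignoring mdadm exit code %d: %s"
                   % (proc.returncode, proc.stderr.strip()))
             return False
         return True
     ```

     With behaviour like this, a partial array such as the one shown in
     the FAIL output below no longer aborts the installation.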
     
  [Test Case]

   * Install the proposed curtin package and deploy to a machine with a
     partial mdadm RAID array that cannot be properly assembled.

    PASS: The image deploys successfully with the RAID configuration
          included.

    FAIL: Deployment fails with the following error:

      Command: ['mdadm', '--assemble', '--scan']
      Exit code: 3
      Reason: -
      Stdout: ''
      Stderr: u'mdadm: /dev/md/4 assembled from 3 drives
              not enough to start the array.

  [Regression Potential]

   * Users who request that curtin 'preserve' existing RAID
     configurations may be impacted.

  
  [Original Description]

  When deploying a machine in MAAS with an MD setup, deployment fails.
  Inspection shows that curtin doesn't clean up existing MD devices. On a
  failed machine I can see in dmesg:

  [   22.352672] md/raid1:md2: active with 2 out of 2 mirrors
  [   22.730212] md/raid1:md1: active with 2 out of 2 mirrors

  These are MD devices from a previous deployment. Instead of deleting
  them, curtin tries to create a new one, so /proc/mdstat shows:

  Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
  md3 : inactive md1[1](S) md2[2](S)
        3125299568 blocks super 1.2

  md1 : active raid1 sdd[1] sdc[0]
        1562649792 blocks super 1.2 [2/2] [UU]

  md2 : active raid1 sdf[1] sde[0]
        1562649792 blocks super 1.2 [2/2] [UU]

  unused devices: <none>

  MAAS's storage config appears to be correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1618429/+subscriptions

