Here's a better link below.

I have seen enough bad things happen to pool devices when hardware is
changed or firmware is updated that I recommend exporting the pool
first, even for an HBA firmware update.

Either shutting down the system where the pool is hosted or exporting
the pool should do it.
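For reference, a minimal sketch of that sequence (assuming the pool name
MYPOOL from the original post; run as root):

```shell
# Export the pool cleanly before touching the HBA firmware.
zpool export MYPOOL

# ... shut down, flash the HBA firmware, boot back up ...

# Import the pool again and confirm every vdev shows ONLINE.
zpool import MYPOOL
zpool status MYPOOL
```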

Always have good backups.



Considerations for ZFS Storage Pools - see the last bullet

On 07/17/12 18:47, Damon Pollard wrote:

The LSI 1068E has IR and IT firmwares, and I have gone from IR -> IT and
IT -> IR without hassle.

Damon Pollard

On Wed, Jul 18, 2012 at 8:13 AM, Jason Usher wrote:

    Ok, and your LSI 1068E also had alternate IR and IT firmwares, and
    you went from IR -> IT ?

    Is that correct ?


    --- On Tue, 7/17/12, Damon Pollard wrote:

    From: Damon Pollard
    Subject: Re: [zfs-discuss] Has anyone switched from IR -> IT
    firmware on the fly ? (existing zpool on LSI 9211-8i)
    To: "Jason Usher"
    Date: Tuesday, July 17, 2012, 5:05 PM

    Hi Jason,
    I have done this in the past. (3x LSI 1068E - IBM BR10i).
    Your pool has no tie to the hardware used to host it (including
    your HBA). You could change all your hardware and still import your
    pool correctly.

    If you really want to be on the safe side, you can export your pool
    before the firmware change and then import it once
    you're satisfied the firmware change is complete.
    Damon Pollard

    On Wed, Jul 18, 2012 at 6:14 AM, Jason Usher wrote:

    We have a running zpool with a 12 disk raidz3 vdev in it ... we gave
    ZFS the full, raw disks ... all is well.

    However, we built it on two LSI 9211-8i cards and we forgot to
    change from IR firmware to IT firmware.

    Is there any danger in shutting down the OS, flashing the cards to
    IT firmware, and then booting back up ?

    We did not create any raid configuration - as far as we know, the
    LSI cards are just passing through the disks to ZFS ... but maybe not ?
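    One way to check is with LSI's sas2flash utility (a hedged sketch --
    the output format varies by OS and utility version): list the
    controllers and look at the firmware product string, which shows
    whether IR or IT firmware is loaded.

```shell
# List all LSI SAS2 controllers sas2flash can see; the firmware
# version / product string indicates IR vs IT firmware.
sas2flash -listall

# More detail for a single adapter (controller index 0 assumed here).
sas2flash -c 0 -list
```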

    I'd like to hear of someone else doing this successfully before we
    try it ...

    We created the zpool with raw disks:

    zpool create -m /mount/point MYPOOL raidz3 da{0,1,2,3,4,5,6,7,8,9,10,11}

    and diskinfo tells us that each disk is:

    da1     512     3000592982016   5860533168

    The physical label (the sticker) on the disk also says 5860533168
    sectors ... so that seems to line up ...
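    The numbers line up arithmetically too: 5860533168 sectors times the
    512-byte sector size gives exactly the byte count diskinfo reports.

```shell
# 5860533168 sectors * 512 bytes/sector = 3000592982016 bytes (~3 TB),
# matching the diskinfo output above.
echo $((5860533168 * 512))   # prints 3000592982016
```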

    Has someone else in the world made this change "while inflight" and
    can confirm ?



    zfs-discuss mailing list
