Re: LSI SAS2008 mps driver preferred firmware version

2015-11-18 Thread Slawa Olhovchenkov
On Wed, Nov 18, 2015 at 08:15:15AM -0800, Freddie Cash wrote:

> On Wed, Nov 18, 2015 at 2:25 AM, Slawa Olhovchenkov  wrote:
> 
> > On Mon, Nov 16, 2015 at 01:19:55PM -0800, Freddie Cash wrote:
> >
> > > On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov 
> > wrote:
> > > Did the original disk get labelled automatically?  No, you had to do that
> > > when you first started using it.  So, why would you expect a
> > > replaced disk
> >
> > Initial labelling is a problem too.
> > For a new chassis with 36 identical disks (already installed) -- what is a
> > simple way to label the disks?
> >
> 
> ​That's the easy part.  Boot with all the drives pulled out a bit, so they
> aren't connected/detected.
> 
> Insert first disk, wait for it to be detected and get a /dev node, then
> partition/label it.  Repeat for each disk.  Takes about 5 minutes to label
> a 45-bay JBOD chassis.

Hmm, the server is more than 1700 km away from me -- how can I do this?

> No different than how you would get the serial number off each disk before
> inserting them into the chassis, so you'd know for sure which slot they're
> in.

This is done by the manufacturer,
or in the DC after ordering the service.
I don't assemble servers, in general,
and I don't even see the servers or know what they look like.

> "Replace disk in bay with blinked led"
> >
> > Author: bapt
> > Date: Sat Sep  5 00:06:01 2015
> >
> 
> ​And, how did you manage to do that before Sep 5, 2015?​

A detached disk doesn't blink the activity LED.

> > Usually the serial number can be read w/o pulling the disk (for SuperMicro
> > cases this is true; remote hands replaced disks by S/N for me w/o pulling
> > every disk).
> >
> 
> ​How?  We have all SuperMicro storage chassis (SC2xx, SC8xx, and JBODs) and
> server chassis in our data centre here.  None of them allow you to read the
> serial number off the physical disk without pulling the disk out
> completely.​  You'd have to manually label each bay with the serial number
> before inserting the disk into the chassis ... which is no different from
> labelling the device in the OS.  Except it's much faster to find a 3D
> co-ordinate (enc0a6) than to scan every bay looking for a specific serial
> number.

For an SC847A this was done for me in a NL DC (as I understand it -- by
looking through the holes at an angle).

> But, to each their own.  :)  Everyone has their "perfect" system that works
> for them.  :D
> 
> -- 
> Freddie Cash
> fjwc...@gmail.com

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-18 Thread Freddie Cash
On Wed, Nov 18, 2015 at 2:25 AM, Slawa Olhovchenkov  wrote:

> On Mon, Nov 16, 2015 at 01:19:55PM -0800, Freddie Cash wrote:
>
> > On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov 
> wrote:
> > Did the original disk get labelled automatically?  No, you had to do that
> > when you first started using it.  So, why would you expect a
> > replaced disk
>
> Initial labelling is a problem too.
> For a new chassis with 36 identical disks (already installed) -- what is a
> simple way to label the disks?
>

​That's the easy part.  Boot with all the drives pulled out a bit, so they
aren't connected/detected.

Insert first disk, wait for it to be detected and get a /dev node, then
partition/label it.  Repeat for each disk.  Takes about 5 minutes to label
a 45-bay JBOD chassis.
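
A minimal sketch of that per-bay pass (names are illustrative, not from
the original mail): after inserting the disk for, say, enclosure 0,
column A, row 0 and waiting for its /dev node (here da0):

gpart create -s gpt da0
gpart add -t freebsd-zfs -a 1m -l enc0a0 da0
# the label now shows up as /dev/gpt/enc0a0; repeat for the next bay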

No different than how you would get the serial number off each disk before
inserting them into the chassis, so you'd know for sure which slot they're
in.

"Replace disk in bay with blinked led"
>
> Author: bapt
> Date: Sat Sep  5 00:06:01 2015
>

​And, how did you manage to do that before Sep 5, 2015?​

> Usually the serial number can be read w/o pulling the disk (for SuperMicro
> cases this is true; remote hands replaced disks by S/N for me w/o pulling
> every disk).
>

​How?  We have all SuperMicro storage chassis (SC2xx, SC8xx, and JBODs) and
server chassis in our data centre here.  None of them allow you to read the
serial number off the physical disk without pulling the disk out
completely.​  You'd have to manually label each bay with the serial number
before inserting the disk into the chassis ... which is no different from
labelling the device in the OS.  Except it's much faster to find a 3D
co-ordinate (enc0a6) than to scan every bay looking for a specific serial
number.

But, to each their own.  :)  Everyone has their "perfect" system that works
for them.  :D

-- 
Freddie Cash
fjwc...@gmail.com

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-18 Thread Slawa Olhovchenkov
On Mon, Nov 16, 2015 at 01:19:55PM -0800, Freddie Cash wrote:

> On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov  wrote:
> 
> > On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:
> >
> > > On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman 
> > wrote:
> > > > As already mentioned, unless you are using zfs, use gpart to label your
> > > > file systems/disks. Then use the /dev/gpt/LABEL as the mount device in
> > > > fstab.
> > > >
> > >
> > > Even if you are using ZFS, labelling the drives with the location of the
> > > disk in the system (enclosure, column, row, whatever) makes things so
> > > much easier to work with when there are disk-related issues.
> > >
> > > Just create a single partition that covers the whole disk, label it, and
> > > use the label to create the vdevs in the pool.​
> >
> > Bad idea.
> > A disk replaced into a different bay doesn't get relabelled automatically.
> >
> 
> ​Did the original disk get labelled automatically?  No, you had to do that
> when you first started using it.  So, why would you expect a
> replaced disk

Initial labelling is a problem too.
For a new chassis with 36 identical disks (already installed) -- what is a
simple way to label the disks?

> to get labelled automatically?

Keeping things consistent is another problem.

> Offline the dead/dying disk.
> Physically remove the disk.
> Insert the new disk.
> Partition / label the new disk.
> "zfs replace" using the new label to get it into the pool.​

A new disk can be inserted into a free bay.
This may be done by remote hands,
and I can miss the information about where the disk was placed.


> > Another issue is when disks are placed into bays by remote hands in a data
> > center -- I really don't know how the disks are distributed among the bays.
> >
> 
> ​You label the disks as they are added to the system the first time.  That
> way, you always know where each disk is located, and you only deal with the
> labels.
> 
> Then, when you need to replace a disk (or ask someone in a remote location
> to replace it) it's a simple matter:  the label on the disk itself tells
> you where the disk is physically located.  And it doesn't change if the
> controller decides to change the direction it enumerates devices.
> 
> Which is easier to tell someone in a remote location:

"Replace disk in bay with blinked led"

Author: bapt
Date: Sat Sep  5 00:06:01 2015
New Revision: 287473
URL: https://svnweb.freebsd.org/changeset/base/287473

Log:
  Add a new sesutil(8) utility

  This is a utility for managing SCSI Enclosure Services (SES)
  devices.

  For now only one command is supported, "locate", which will change
  the state of the external LED associated with a given disk.

  Usage is the following:
  sesutil locate disk [on|off]

  Disk can be a device name: "da12" or a special keyword: "all".
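
For example (an illustrative invocation, not part of the quoted commit
message):

sesutil locate da12 on     # blink the locate LED on da12's bay
sesutil locate da12 off    # turn it back off after the swap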
  
>   Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
> or
>   Replace the disk called da36?​
> ​or
>   Find the disk with serial number ?
> or
>   Replace the disk where the light is (hopefully) flashing (but I can't
> tell you which enclosure, front or back, or anything else like that)?
> 
> The first one lets you know exactly where the disk is located physically.
> 
> The second one just tells you the name of the device as determined by the
> OS, but doesn't tell you anything about where it is located.  And it can
> change with a kernel update, driver update, or firmware update!
> 
> The third requires you to pull every disk in turn to read the serial number
> off the drive itself.

Usually the serial number can be read w/o pulling the disk (for SuperMicro
cases this is true; remote hands replaced disks by S/N for me w/o pulling
every disk).

> In order for the second or third option to work, you'd have to write down
> the device names and/or serial numbers and stick that onto the drive bay
> itself.​
> 
> 
> > The best way to identify a disk is to use enclosure services.
> >
> 
> ​Only if your enclosure services are actually working (or even enabled).
> I've yet to work on a box where that actually works (we custom-build our
> storage boxes using OTS hardware).
> 
> Best way, IMO, is to use the physical location of the device as the actual
> device name itself.  That way, there's never any ambiguity at the physical
> layer, the driver layer, the OS layer, or the ZFS pool layer.​
> 
> 
> > I have many sites with ZFS on whole disks and some sites with ZFS on
> > GPT partitions. ZFS on GPT is heavier to administer.
> >
> 
> ​It's 1 extra step:  partition the drive, supplying the location of the
> drive as the label for the partition.
> 
> Everything else works exactly the same.
> 
> I used to do everything with whole drives and no labels.  Did that for
> about a month, until 2 separate drives on separate controllers died (in a
> 24-bay setup) and I couldn't figure out where they were located as a BIOS
> upgrade changed which controller loaded first.  And then I had to work on a
> server that someone else configured with direct-attach bays (24 cables)
> that were connected almost at random.

Re: ZFS on labelled partitions (was: Re: LSI SAS2008 mps driver preferred firmware version)

2015-11-17 Thread Freddie Cash
On Tue, Nov 17, 2015 at 12:08 AM, Patrick M. Hausen  wrote:

> Hi, all,
>
> > Am 16.11.2015 um 22:19 schrieb Freddie Cash :
> >
> > ​You label the disks as they are added to the system the first time.
> That
> > way, you always know where each disk is located, and you only deal with
> the
> > labels.
>
> we do the same for obvious reasons. But I always wonder about the possible
> downsides, because ZFS documentation explicitly states:
>
> ZFS operates on raw devices, so it is possible to create a storage pool
> comprised of logical volumes, either software or hardware. This
> configuration is not recommended, as ZFS works best when it uses raw
> physical devices. Using logical volumes might sacrifice performance,
> reliability, or both, and should be avoided.
>
> (from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
>
> Can anyone shed some light on why not using raw devices might sacrifice
> performance or reliability? Or is this just outdated folklore?
>

​On Solaris, using raw devices allows ZFS to enable the caches on the disks
themselves, while using any kind of partitioning on the disk forces the
caches to be disabled.

This is not an issue on FreeBSD due to the way GEOM works.  Caches on disks
are enabled regardless of how the disk is accessed (raw, dd-partitioned,
MBR-partitioned, GPT-partitioned, gnop, geli, whatever).
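
One way to confirm this on a FreeBSD box is to look at the WCE (write
cache enable) bit in the SCSI caching mode page; a small illustrative
check for a hypothetical da0:

camcontrol modepage da0 -m 8 | grep WCE   # WCE: 1 means write cache on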

This is a common misconception and FAQ with ZFS on FreeBSD and one reason
to not take any Sun/Oracle documentation at face value, as it doesn't
always apply to FreeBSD.

There were several posts from pjd@ about this back in the 7.x days when ZFS
was first imported to FreeBSD.

-- 
Freddie Cash
fjwc...@gmail.com

ZFS on labelled partitions (was: Re: LSI SAS2008 mps driver preferred firmware version)

2015-11-17 Thread Patrick M. Hausen
Hi, all,

> Am 16.11.2015 um 22:19 schrieb Freddie Cash :
> 
> ​You label the disks as they are added to the system the first time.  That
> way, you always know where each disk is located, and you only deal with the
> labels.

we do the same for obvious reasons. But I always wonder about the possible
downsides, because ZFS documentation explicitly states:

ZFS operates on raw devices, so it is possible to create a storage pool
comprised of logical volumes, either software or hardware. This
configuration is not recommended, as ZFS works best when it uses raw
physical devices. Using logical volumes might sacrifice performance,
reliability, or both, and should be avoided.

(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)

Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?

Thanks,
Patrick
--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285





Re: ZFS on labelled partitions (was: Re: LSI SAS2008 mps driver preferred firmware version)

2015-11-17 Thread krad
From what I remember it's a control thing. If you have another layer below
ZFS, be it software-based or hardware-based, ZFS can't be sure what is going
on and therefore can't guarantee anything. That is quite a big deal when it
comes to data integrity, which is a big reason to use ZFS. I remember having
to be very careful with some external caching arrays and making sure that
they flushed correctly, as they often ignore the SCSI flush commands. This
is one reason why I would always use the IT firmware rather than the
RAID (IR) one, as it is less likely to lead to issues.

On 17 November 2015 at 08:08, Patrick M. Hausen  wrote:

> Hi, all,
>
> > Am 16.11.2015 um 22:19 schrieb Freddie Cash :
> >
> > ​You label the disks as they are added to the system the first time.
> That
> > way, you always know where each disk is located, and you only deal with
> the
> > labels.
>
> we do the same for obvious reasons. But I always wonder about the possible
> downsides, because ZFS documentation explicitly states:
>
> ZFS operates on raw devices, so it is possible to create a storage pool
> comprised of logical volumes, either software or hardware. This
> configuration is not recommended, as ZFS works best when it uses raw
> physical devices. Using logical volumes might sacrifice performance,
> reliability, or both, and should be avoided.
>
> (from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
>
> Can anyone shed some light on why not using raw devices might sacrifice
> performance or reliability? Or is this just outdated folklore?
>
> Thanks,
> Patrick
> --
> punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
> Tel. 0721 9109 0 * Fax 0721 9109 100
> i...@punkt.de   http://www.punkt.de
> Gf: Jürgen Egeling  AG Mannheim 108285
>
>

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-17 Thread krad
I disagree: get the remote hands to copy the serial number to an easily
visible location on the drive while it's in the enclosure. Then label the
drives with the serial number (or a compatible version of it). That way the
label is tied to the drive, and you don't have to rely on the remote hands
100%. Better still, do the physical labelling yourself.
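
If the serial numbers are needed up front, they can also be read from the
OS side without touching the hardware; a small sketch using camcontrol
(device names illustrative):

for d in $(sysctl -n kern.disks); do
    case "$d" in
    da*) printf '%s: ' "$d"; camcontrol inquiry "$d" -S ;;
    esac
done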

On 16 November 2015 at 21:19, Freddie Cash  wrote:

> On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov 
> wrote:
>
> > On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:
> >
> > > On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman 
> > wrote:
> > > > As already mentioned, unless you are using zfs, use gpart to label your
> > > > file systems/disks. Then use the /dev/gpt/LABEL as the mount device in
> > > > fstab.
> > > >
> > >
> > > Even if you are using ZFS, labelling the drives with the location of the
> > > disk in the system (enclosure, column, row, whatever) makes things so
> > > much easier to work with when there are disk-related issues.
> > >
> > > Just create a single partition that covers the whole disk, label it, and
> > > use the label to create the vdevs in the pool.
> >
> > Bad idea.
> > A disk replaced into a different bay doesn't get relabelled automatically.
> >
>
> ​Did the original disk get labelled automatically?  No, you had to do that
> when you first started using it.  So, why would you expect a replaced disk
> to get labelled automatically?
>
> Offline the dead/dying disk.
> Physically remove the disk.
> Insert the new disk.
> Partition / label the new disk.
> "zfs replace" using the new label to get it into the pool.​
>
>
> > Another issue is when disks are placed into bays by remote hands in a data
> > center -- I really don't know how the disks are distributed among the bays.
> >
>
> ​You label the disks as they are added to the system the first time.  That
> way, you always know where each disk is located, and you only deal with the
> labels.
>
> Then, when you need to replace a disk (or ask someone in a remote location
> to replace it) it's a simple matter:  the label on the disk itself tells
> you where the disk is physically located.  And it doesn't change if the
> controller decides to change the direction it enumerates devices.
>
> Which is easier to tell someone in a remote location:
>   Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
> or
>   Replace the disk called da36?​
> ​or
>   Find the disk with serial number ?
> or
>   Replace the disk where the light is (hopefully) flashing (but I can't
> tell you which enclosure, front or back, or anything else like that)?
>
> The first one lets you know exactly where the disk is located physically.
>
> The second one just tells you the name of the device as determined by the
> OS, but doesn't tell you anything about where it is located.  And it can
> change with a kernel update, driver update, or firmware update!
>
> The third requires you to pull every disk in turn to read the serial number
> off the drive itself.
>
> In order for the second or third option to work, you'd have to write down
> the device names and/or serial numbers and stick that onto the drive bay
> itself.​
>
>
> > The best way to identify a disk is to use enclosure services.
> >
>
> ​Only if your enclosure services are actually working (or even enabled).
> I've yet to work on a box where that actually works (we custom-build our
> storage boxes using OTS hardware).
>
> Best way, IMO, is to use the physical location of the device as the actual
> device name itself.  That way, there's never any ambiguity at the physical
> layer, the driver layer, the OS layer, or the ZFS pool layer.​
>
>
> > I have many sites with ZFS on whole disks and some sites with ZFS on
> > GPT partitions. ZFS on GPT is heavier to administer.
> >
>
> ​It's 1 extra step:  partition the drive, supplying the location of the
> drive as the label for the partition.
>
> Everything else works exactly the same.
>
> I used to do everything with whole drives and no labels.  Did that for
> about a month, until 2 separate drives on separate controllers died (in a
> 24-bay setup) and I couldn't figure out where they were located as a BIOS
> upgrade changed which controller loaded first.  And then I had to work on a
> server that someone else configured with direct-attach bays (24 cables)
> that were connected almost at random.
>
> Then I used glabel(8) to label the entire disk, and things were much
> better.  But that didn't always play well with 4K drives, and replacing
> drives that were the same size didn't always work as the number of sectors
> in each disk was different (ZFS plays better with this now).
>
> Then I started to GPT partition things, and life has been so much simpler.
> All the partitions are aligned to 1 MB, and I can manually set the size of
> the partition to work around different physical sector counts.  All the
> partitions are labelled using the physical location of the disk (originally
> just row/column naming like a spreadsheet, but now I'm adding enclosure
> name as well as we expand to multiple enclosures per system).

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-16 Thread Slawa Olhovchenkov
On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:

> On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman  wrote:
> 
> > On Mon, Nov 16, 2015 at 1:00 AM, Borja Marcos  wrote:
> >
> > >
> > > On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:
> > >
> > > > You can do things in /boot/loader.conf to hard code bus and drive
> > > > assignments.
> > > >
> > > > e.g.
> > > >
> > > > hint.da.0.at="scbus0"
> > > > hint.da.0.target="19"
> > > > hint.da.0.unit="0"
> > > > hint.da.1.at="scbus0"
> > > > hint.da.1.target="18"
> > > > hint.da.1.unit="0"
> > >
> > > Beware, the target number assignment is not predictable. There's no
> > > guarantee, especially if you replace a disk.
> > >
> > >
> > >
> > >
> > >
> > > Borja.
> > >
> >
> > As already mentioned, unless you are using zfs, use gpart to label your file
> > systems/disks. Then use the /dev/gpt/LABEL as the mount device in fstab.
> >
> 
> ​Even if you are using ZFS, labelling the drives with the location of the
> disk in the system (enclosure, column, row, whatever) makes things so much
> easier to work with when there are disk-related issues.
> 
> Just create a single partition that covers the whole disk, label it, and
> use the label to create the vdevs in the pool.​

Bad idea.
A disk replaced into a different bay doesn't get relabelled automatically.
Another issue is when disks are placed into bays by remote hands in a data
center -- I really don't know how the disks are distributed among the bays.
The best way to identify a disk is to use enclosure services.

I have many sites with ZFS on whole disks and some sites with ZFS on
GPT partitions. ZFS on GPT is heavier to administer.

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-16 Thread Kevin Oberman
On Mon, Nov 16, 2015 at 1:00 AM, Borja Marcos  wrote:

>
> On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:
>
> > You can do things in /boot/loader.conf to hard code bus and drive
> > assignments.
> >
> > e.g.
> >
> > hint.da.0.at="scbus0"
> > hint.da.0.target="19"
> > hint.da.0.unit="0"
> > hint.da.1.at="scbus0"
> > hint.da.1.target="18"
> > hint.da.1.unit="0"
>
> > Beware, the target number assignment is not predictable. There's no
> > guarantee, especially if you replace a disk.
>
>
>
>
>
> Borja.
>

As already mentioned, unless you are using zfs, use gpart to label your file
systems/disks. Then use the /dev/gpt/LABEL as the mount device in fstab.
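
For example (an illustrative UFS disk da0 with label "data0"; names are
not from the original mail):

gpart create -s gpt da0
gpart add -t freebsd-ufs -a 1m -l data0 da0
newfs -U /dev/gpt/data0
# /etc/fstab line keyed to the label instead of the daN number:
# /dev/gpt/data0   /data   ufs   rw   2   2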
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkober...@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683


Re: LSI SAS2008 mps driver preferred firmware version

2015-11-16 Thread Freddie Cash
On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov  wrote:

> On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:
>
> > On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman 
> wrote:
> > > As already mentioned, unless you are using zfs, use gpart to label your
> > > file systems/disks. Then use the /dev/gpt/LABEL as the mount device in
> > > fstab.
> > >
> >
> > Even if you are using ZFS, labelling the drives with the location of the
> > disk in the system (enclosure, column, row, whatever) makes things so
> > much easier to work with when there are disk-related issues.
> >
> > Just create a single partition that covers the whole disk, label it, and
> > use the label to create the vdevs in the pool.​
>
> Bad idea.
> A disk replaced into a different bay doesn't get relabelled automatically.
>

​Did the original disk get labelled automatically?  No, you had to do that
when you first started using it.  So, why would you expect a replaced disk
to get labelled automatically?

Offline the dead/dying disk.
Physically remove the disk.
Insert the new disk.
Partition / label the new disk.
"zfs replace" using the new label to get it into the pool.​


> Another issue is when disks are placed into bays by remote hands in a data
> center -- I really don't know how the disks are distributed among the bays.
>

​You label the disks as they are added to the system the first time.  That
way, you always know where each disk is located, and you only deal with the
labels.

Then, when you need to replace a disk (or ask someone in a remote location
to replace it) it's a simple matter:  the label on the disk itself tells
you where the disk is physically located.  And it doesn't change if the
controller decides to change the direction it enumerates devices.

Which is easier to tell someone in a remote location:
  Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
or
  Replace the disk called da36?​
​or
  Find the disk with serial number ?
or
  Replace the disk where the light is (hopefully) flashing (but I can't
tell you which enclosure, front or back, or anything else like that)?

The first one lets you know exactly where the disk is located physically.

The second one just tells you the name of the device as determined by the
OS, but doesn't tell you anything about where it is located.  And it can
change with a kernel update, driver update, or firmware update!

The third requires you to pull every disk in turn to read the serial number
off the drive itself.

In order for the second or third option to work, you'd have to write down
the device names and/or serial numbers and stick that onto the drive bay
itself.​


> The best way to identify a disk is to use enclosure services.
>

​Only if your enclosure services are actually working (or even enabled).
I've yet to work on a box where that actually works (we custom-build our
storage boxes using OTS hardware).

Best way, IMO, is to use the physical location of the device as the actual
device name itself.  That way, there's never any ambiguity at the physical
layer, the driver layer, the OS layer, or the ZFS pool layer.​


> I have many sites with ZFS on whole disks and some sites with ZFS on
> GPT partitions. ZFS on GPT is heavier to administer.
>

​It's 1 extra step:  partition the drive, supplying the location of the
drive as the label for the partition.

Everything else works exactly the same.

I used to do everything with whole drives and no labels.  Did that for
about a month, until 2 separate drives on separate controllers died (in a
24-bay setup) and I couldn't figure out where they were located as a BIOS
upgrade changed which controller loaded first.  And then I had to work on a
server that someone else configured with direct-attach bays (24 cables)
that were connected almost at random.

Then I used glabel(8) to label the entire disk, and things were much
better.  But that didn't always play well with 4K drives, and replacing
drives that were the same size didn't always work as the number of sectors
in each disk was different (ZFS plays better with this now).

Then I started to GPT partition things, and life has been so much simpler.
All the partitions are aligned to 1 MB, and I can manually set the size of
the partition to work around different physical sector counts.  All the
partitions are labelled using the physical location of the disk (originally
just row/column naming like a spreadsheet, but now I'm adding enclosure
name as well as we expand to multiple enclosures per system).  It's so much
simpler now, ESPECIALLY when I have to get someone to do something
remotely.  :)
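
A sketch of that scheme on a hypothetical disk (da4, enclosure 0,
column A, row 4); the explicit -s pins the partition size so a
replacement with slightly fewer sectors still fits:

gpart create -s gpt da4
gpart add -t freebsd-zfs -a 1m -s 3724g -l enc0a4 da4
# the vdev member is then /dev/gpt/enc0a4, named for its physical bay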

​Everyone has their own way to manage things.  I just haven't seen any
better setup than labelling the drives themselves using their physical
location.​

-- 
Freddie Cash
fjwc...@gmail.com

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-16 Thread Freddie Cash
On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman  wrote:

> On Mon, Nov 16, 2015 at 1:00 AM, Borja Marcos  wrote:
>
> >
> > On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:
> >
> > > You can do thinks in /boot/loader.conf to hard code bus and drive
> > > assignments.
> > >
> > > e.g.
> > >
> > > hint.da.0.at="scbus0"
> > > hint.da.0.target="19"
> > > hint.da.0.unit="0"
> > > hint.da.1.at="scbus0"
> > > hint.da.1.target="18"
> > > hint.da.1.unit="0"
> >
> > Beware, the target number assignment is not predictable. There's no
> > guarantee especially if you replace
> > a disk.
> >
> >
> >
> >
> >
> > Borja.
> >
>
> As already mentioned, unless you are using zfs, use gpart to label your file
> systems/disks. Then use the /dev/gpt/LABEL as the mount device in fstab.
>

​Even if you are using ZFS, labelling the drives with the location of the
disk in the system (enclosure, column, row, whatever) makes things so much
easier to work with when there are disk-related issues.

Just create a single partition that covers the whole disk, label it, and
use the label to create the vdevs in the pool.​

-- 
Freddie Cash
fjwc...@gmail.com

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-16 Thread Borja Marcos

On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:

> You can do things in /boot/loader.conf to hard code bus and drive
> assignments.  
> 
> e.g.
> 
> hint.da.0.at="scbus0"
> hint.da.0.target="19"
> hint.da.0.unit="0"
> hint.da.1.at="scbus0"
> hint.da.1.target="18"
> hint.da.1.unit="0"

Beware, the target number assignment is not predictable. There's no guarantee,
especially if you replace a disk.





Borja.



Re: LSI SAS2008 mps driver preferred firmware version

2015-11-14 Thread Kai Gallasch
On 12.11.2015 23:20 Royce Williams wrote:
> Firmware should match driver, e.g.:
> 
> mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbsd
> 
> 
> Some of this may help -- not yet updated for 10.2, but may still be useful:
> 
> http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html

Thanks! Lots of information about reflashing the 9211-8i.
So I upgraded the old firmware of the controller from

mps0: Firmware: 05.00.17.00, Driver: 20.00.00.00-fbsd
to mps0: Firmware: 20.00.04.00, Driver: 20.00.00.00-fbsd
(FreeBSD 10.2)

As I understand it, the firmware 20.00.00.00 was pulled by Avago and
replaced with the fixed version 20.00.04.00.

I will give feedback if I notice any problems with this FW version.
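
A flash along those lines is typically done with the sas2flash utility; a
hedged sketch, assuming the P20 package for the 9211-8i has been unpacked
(file names are the package defaults, not taken from this mail):

sas2flash -listall                          # confirm adapter and versions
sas2flash -o -f 2118it.bin -b mptsas2.rom   # write IT firmware + boot ROM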

As a side note: Flashing the 9211-8i to the new firmware version changed
the way FreeBSD orders the disk devices on this server:

With the old firmware it looked like this:

root@:~ # camcontrol devlist
 at scbus0 target 10 lun 0 (pass0,da0)
 at scbus0 target 11 lun 0 (pass1,da1)
 at scbus0 target 12 lun 0 (pass2,da2)
 at scbus0 target 13 lun 0 (pass3,da3)
 at scbus0 target 14 lun 0 (pass4,da4)
 at scbus0 target 15 lun 0 (pass5,da5)
 at scbus0 target 16 lun 0 (pass6,da6)
 at scbus0 target 17 lun 0 (pass7,da7)
 at scbus0 target 18 lun 0 (pass8,da8)
 at scbus0 target 19 lun 0 (pass9,da9)
 at scbus0 target 20 lun 0 (pass10,da10)
 at scbus0 target 21 lun 0 (pass11,da11)
 at scbus0 target 22 lun 0 (pass12,ses0)
 at scbus7 target 0 lun 0 (pass13,ses1)

The order is according to the order the disks are placed in the drive
bays: (da0, bay1; da1, bay2, ..)


With the new firmware it now looks like this:

 at scbus0 target 8 lun 0 (pass0,da0)
 at scbus0 target 9 lun 0 (pass1,da1)
 at scbus0 target 10 lun 0 (pass2,da2)
 at scbus0 target 11 lun 0 (pass3,da3)
 at scbus0 target 12 lun 0 (pass4,da4)
 at scbus0 target 13 lun 0 (pass5,da5)
 at scbus0 target 14 lun 0 (pass6,da6)
 at scbus0 target 15 lun 0 (pass7,da7)
 at scbus0 target 16 lun 0 (pass8,da8)
 at scbus0 target 17 lun 0 (pass9,da9)
 at scbus0 target 18 lun 0 (pass10,da10)
 at scbus0 target 19 lun 0 (pass11,da11)
 at scbus0 target 20 lun 0 (pass12,ses0)
 at scbus7 target 0 lun 0 (pass13,ses1)

So now the drive stuck in the last drive bay is seen as da0 and the
drive in the first drive bay as da11

But: In the controller BIOS the scan order of the drives did not change
at all with the new firmware! So the change is only in the way FreeBSD
sees the drives.

My explanation for this change in drive ordering is, that my 9211-8i is
a SUN branded one (SGX-SAS6-INT-Z) and the server is a SUN server. So
maybe the original firmware contained some adaptations for this server,
that are missing in the new firmware.

Can the way FreeBSD orders scanned SAS drives be changed? If not, no
problem, as I use partition labels for my zfs pools and the disks are
also labeled on the server as well.

Regards,
Kai.

-- 
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..







Re: LSI SAS2008 mps driver preferred firmware version

2015-11-14 Thread Gary Palmer
On Sat, Nov 14, 2015 at 01:18:14PM +0100, Kai Gallasch wrote:
> On 12.11.2015 23:20 Royce Williams wrote:
> > Firmware should match driver, e.g.:
> > 
> > mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbsd
> > 
> > 
> > Some of this may help -- not yet updated for 10.2, but may still be useful:
> > 
> > http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
> 
> Thanks! Lots of information about reflashing the 9211-8i.
> So I upgraded the old firmware of the controller from
> 
> mps0: Firmware: 05.00.17.00, Driver: 20.00.00.00-fbsd
> to mps0: Firmware: 20.00.04.00, Driver: 20.00.00.00-fbsd
> (FreeBSD 10.2)
> 
> As I understand it, the firmware 20.00.00.00 was pulled by Avago and
> replaced with the fixed version 20.00.04.00.
> 
> I will give feedback if I notice any problems with this FW version.
> 
> As a side note: Flashing the 9211-8i to the new firmware version changed
> the way FreeBSD orders the disk devices on this server:
> 
> With the old firmware it looked like this:
> 
> root@:~ # camcontrol devlist
>  at scbus0 target 10 lun 0 (pass0,da0)
>  at scbus0 target 11 lun 0 (pass1,da1)
>  at scbus0 target 12 lun 0 (pass2,da2)
>  at scbus0 target 13 lun 0 (pass3,da3)
>  at scbus0 target 14 lun 0 (pass4,da4)
>  at scbus0 target 15 lun 0 (pass5,da5)
>  at scbus0 target 16 lun 0 (pass6,da6)
>  at scbus0 target 17 lun 0 (pass7,da7)
>  at scbus0 target 18 lun 0 (pass8,da8)
>  at scbus0 target 19 lun 0 (pass9,da9)
>  at scbus0 target 20 lun 0 (pass10,da10)
>  at scbus0 target 21 lun 0 (pass11,da11)
>  at scbus0 target 22 lun 0 (pass12,ses0)
>  at scbus7 target 0 lun 0 (pass13,ses1)
> 
> The order is according to the order the disks are placed in the drive
> bays: (da0, bay1; da1, bay2, ..)
> 
> 
> With the new firmware it now looks like this:
> 
>  at scbus0 target 8 lun 0 (pass0,da0)
>  at scbus0 target 9 lun 0 (pass1,da1)
>  at scbus0 target 10 lun 0 (pass2,da2)
>  at scbus0 target 11 lun 0 (pass3,da3)
>  at scbus0 target 12 lun 0 (pass4,da4)
>  at scbus0 target 13 lun 0 (pass5,da5)
>  at scbus0 target 14 lun 0 (pass6,da6)
>  at scbus0 target 15 lun 0 (pass7,da7)
>  at scbus0 target 16 lun 0 (pass8,da8)
>  at scbus0 target 17 lun 0 (pass9,da9)
>  at scbus0 target 18 lun 0 (pass10,da10)
>  at scbus0 target 19 lun 0 (pass11,da11)
>  at scbus0 target 20 lun 0 (pass12,ses0)
>  at scbus7 target 0 lun 0 (pass13,ses1)
> 
> So now the drive stuck in the last drive bay is seen as da0 and the
> drive in the first drive bay as da11
> 
> But: In the controller BIOS the scan order of the drives did not change
> at all with the new firmware! So the change is only in the way FreeBSD
> sees the drives.
> 
> My explanation for this change in drive ordering is, that my 9211-8i is
> a SUN branded one (SGX-SAS6-INT-Z) and the server is a SUN server. So
> maybe the original firmware contained some adaptations for this server,
> that are missing in the new firmware.
> 
> Can the way FreeBSD orders scanned SAS drives be changed? If not, no
> problem, as I use partition labels for my zfs pools and the disks are
> also labeled on the server as well.

You can do things in /boot/loader.conf to hard code bus and drive
assignments.  

e.g.

hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="18"
hint.da.1.unit="0"

See scsi(4) or cam(4) for more hints.

You're probably better off using GPT labels though, as they will
survive any future disk order changes. The fact that the target numbers
changed means that loader.conf changes will fix the current issue
but may not work properly after any future firmware updates.

Gary


RE: LSI SAS2008 mps driver preferred firmware version

2015-11-14 Thread Stephen Mcconnell via freebsd-stable
> -Original Message-
> From: owner-freebsd-s...@freebsd.org [mailto:owner-freebsd-
> s...@freebsd.org] On Behalf Of Gary Palmer
> Sent: Saturday, November 14, 2015 7:31 AM
> To: Kai Gallasch
> Cc: freebsd-s...@freebsd.org; Royce Williams; freebsd-stable
> Subject: Re: LSI SAS2008 mps driver preferred firmware version
>
> On Sat, Nov 14, 2015 at 01:18:14PM +0100, Kai Gallasch wrote:
> > On 12.11.2015 23:20 Royce Williams wrote:
> > > Firmware should match driver, e.g.:
> > >
> > > mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbsd
> > >
> > >
> > > Some of this may help -- not yet updated for 10.2, but may still be useful:
> > >
> > > http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
> >
> > Thanks! Lots of information about reflashing the 9211-8i.
> > So I upgraded the old firmware of the controller from
> >
> > mps0: Firmware: 05.00.17.00, Driver: 20.00.00.00-fbsd to mps0:
> > Firmware: 20.00.04.00, Driver: 20.00.00.00-fbsd (FreeBSD 10.2)
> >
> > As I understand it, the firmware 20.00.00.00 was pulled by Avago and
> > replaced with the fixed version 20.00.04.00.
> >
> > I will give feedback if I notice any problems with this FW version.
> >
> > As a side note: Flashing the 9211-8i to the new firmware version
> > changed the way FreeBSD orders the disk devices on this server:
> >
> > With the old firmware it looked like this:
> >
> > root@:~ # camcontrol devlist
> >  at scbus0 target 10 lun 0 (pass0,da0)
> >  at scbus0 target 11 lun 0 (pass1,da1)
> >  at scbus0 target 12 lun 0 (pass2,da2)
> > <INTEL SSDSC2BA10 0270> at scbus0 target 13 lun 0 (pass3,da3)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass4,da4)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass5,da5)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 16 lun 0 (pass6,da6)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 17 lun 0 (pass7,da7)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 18 lun 0 (pass8,da8)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 19 lun 0 (pass9,da9)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 20 lun 0 (pass10,da10)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 21 lun 0 (pass11,da11)
> > <HYDE12 0341> at scbus0 target 22 lun 0 (pass12,ses0)
> > <Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)
> >
> > The order is according to the order the disks are placed in the drive
> > bays: (da0, bay1; da1, bay2, ..)
> >
> >
> > With the new firmware it now looks like this:
> >
> >  at scbus0 target 8 lun 0 (pass0,da0)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 9 lun 0 (pass1,da1)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 10 lun 0 (pass2,da2)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 11 lun 0 (pass3,da3)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 12 lun 0 (pass4,da4)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 13 lun 0 (pass5,da5)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass6,da6)
> > <WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass7,da7)
> > <INTEL SSDSC2BA10 0270> at scbus0 target 16 lun 0 (pass8,da8)
> > <INTEL SSDSC2BA10 0270> at scbus0 target 17 lun 0 (pass9,da9)
> > <HUS156030VLS600 A5D0> at scbus0 target 18 lun 0 (pass10,da10)
> > <HUS156030VLS600 A760> at scbus0 target 19 lun 0 (pass11,da11)
> > <HYDE12 0341> at scbus0 target 20 lun 0 (pass12,ses0)
> > <Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)
> >
> > So now the drive stuck in the last drive bay is seen as da0 and the
> > drive in the first drive bay as da11
> >
> > But: In the controller BIOS the scan order of the drives did not
> > change at all with the new firmware! So the change is only in the way
> > FreeBSD sees the drives.
> >
> > My explanation for this change in drive ordering is, that my 9211-8i
> > is a SUN branded one (SGX-SAS6-INT-Z) and the server is a SUN server.
> > So maybe the original firmware contained some adaptations for this
> > server, that are missing in the new firmware.
> >
> > Can the way FreeBSD orders scanned SAS drives be changed? If not, no
> > problem, as I use partition labels for my zfs pools and the disks are
> > also labeled on the server as well.
>
> You can do things in /boot/loader.conf to hard code bus and drive
> assignments.
>
> e.g.
>
> hint.da.0.at="scbus0"
> hint.da.0.target="19"
> hint.da.0.unit="0"
> hint.da.1.at="scbus0"
> hint.da.1.target="18"
> hint.da.1.unit="0"
>
> See scsi(4) or cam(4) for more hints.
>
> You're probably better off using GPT labels though, as they will survive
> any future disk order changes.

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-14 Thread Slawa Olhovchenkov
On Sat, Nov 14, 2015 at 01:18:14PM +0100, Kai Gallasch wrote:

> So now the drive stuck in the last drive bay is seen as da0 and the
> drive in the first drive bay as da11
> 
> But: In the controller BIOS the scan order of the drives did not change
> at all with the new firmware! So the change is only in the way FreeBSD
> sees the drives.

For ZFS this doesn't matter.


Re: LSI SAS2008 mps driver preferred firmware version

2015-11-12 Thread Royce Williams
Firmware should match driver, e.g.:

mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbsd


Some of this may help -- not yet updated for 10.2, but may still be useful:

http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html

Royce

On Thu, Nov 12, 2015 at 12:05 PM, Kai Gallasch  wrote:
>
> Hi.
>
> I'm currently building a new ZFS based FreeBSD 10.2 server with a
> SAS/SATA HBA SAS9211-8i.
>
> Is there a preferred or recommended firmware version for Fusion-MPT
> SAS-2 2008 chipset based LSI cards like the SAS9211-8i? MPS(4) does not
> give any information about this.
>
> The current version of my SAS9211-8i is:
> v7.05.05.00 (2010.05.19), BIOS
> 5.00.17.00-IR, FW
>
>
> IR vs. IT firmware:
>
> Are there any advantages replacing the -IR (integrated raid) firmware on
> the LSI controller with an -IT (target mode) version, if the RAID
> functionality of the HBA is not used at all?
>
> There were some claims that running the -IR version in a ZFS JBOD setup
> would result in a small performance penalty compared to -IT and that
> there was a risk that a controller running the -IR firmware version
> could potentially damage ZFS data on a disk by putting RAID metadata
> somewhere on the drive, even if not using the RAID feature of the card!
>
> I'd appreciate it if someone could shed some light on this.
>
> Regards,
> Kai.
>
> --
> PGP-KeyID = 0x70654D7C4FB1F588
> One day a lemming will fly..
>
>
>


LSI SAS2008 mps driver preferred firmware version

2015-11-12 Thread Kai Gallasch

Hi.

I'm currently building a new ZFS based FreeBSD 10.2 server with a
SAS/SATA HBA SAS9211-8i.

Is there a preferred or recommended firmware version for Fusion-MPT
SAS-2 2008 chipset based LSI cards like the SAS9211-8i? MPS(4) does not
give any information about this.

The current version of my SAS9211-8i is:
v7.05.05.00 (2010.05.19), BIOS
5.00.17.00-IR, FW


IR vs. IT firmware:

Are there any advantages replacing the -IR (integrated raid) firmware on
the LSI controller with an -IT (target mode) version, if the RAID
functionality of the HBA is not used at all?

There were some claims that running the -IR version in a ZFS JBOD setup
would result in a small performance penalty compared to -IT and that
there was a risk that a controller running the -IR firmware version
could potentially damage ZFS data on a disk by putting RAID metadata
somewhere on the drive, even if not using the RAID feature of the card!

I'd appreciate it if someone could shed some light on this.

Regards,
Kai.

-- 
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..







RE: LSI SAS2008 mps driver preferred firmware version

2015-11-12 Thread Stephen Mcconnell via freebsd-stable
> -Original Message-
> From: owner-freebsd-s...@freebsd.org [mailto:owner-freebsd-
> s...@freebsd.org] On Behalf Of Royce Williams
> Sent: Thursday, November 12, 2015 3:21 PM
> To: Kai Gallasch
> Cc: freebsd-s...@freebsd.org; freebsd-stable
> Subject: Re: LSI SAS2008 mps driver preferred firmware version
>
> Firmware should match driver, e.g.:
>
> mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbsd

I've never heard of any problems when these are mismatched, so I'm not
sure why FreeNAS would complain.  Anyway, you should use the latest of
both in my opinion.
The latest FW on the avagotech website is 20.00.04.00.  I have heard that
some FreeBSD users have had some problems with the PH19 FW.

Steve McConnell

>
>
> Some of this may help -- not yet updated for 10.2, but may still be useful:
>
> http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
>
> Royce
>
> On Thu, Nov 12, 2015 at 12:05 PM, Kai Gallasch <k...@free.de> wrote:
> >
> > Hi.
> >
> > I'm currently building a new ZFS based FreeBSD 10.2 server with a
> > SAS/SATA HBA SAS9211-8i.
> >
> > Is there a preferred or recommended firmware version for Fusion-MPT
> > SAS-2 2008 chipset based LSI cards like the SAS9211-8i? MPS(4) does
> > not give any information about this.
> >
> > The current version of my SAS9211-8i is:
> > v7.05.05.00 (2010.05.19), BIOS
> > 5.00.17.00-IR, FW
> >
> >
> > IR vs. IT firmware:
> >
> > Are there any advantages replacing the -IR (integrated raid) firmware
> > on the LSI controller with an -IT (target mode) version, if the RAID
> > functionality of the HBA is not used at all?
> >
> > There were some claims that running the -IR version in a ZFS JBOD
> > setup would result in a small performance penalty compared to -IT and
> > that there was a risk that a controller running the -IR firmware
> > version could potentially damage ZFS data on a disk by putting RAID
> > metadata somewhere on the drive, even if not using the RAID feature of
> > the card!
> >
> > I'd appreciate it if someone could shed some light on this.
> >
> > Regards,
> > Kai.
> >
> > --
> > PGP-KeyID = 0x70654D7C4FB1F588
> > One day a lemming will fly..
> >
> >
> >


RE: LSI SAS2008 mps driver preferred firmware version

2015-11-12 Thread Stephen Mcconnell via freebsd-stable
> -Original Message-
> From: Stephen Mcconnell [mailto:stephen.mcconn...@avagotech.com]
> Sent: Thursday, November 12, 2015 3:45 PM
> To: 'Royce Williams'; 'Kai Gallasch'
> Cc: 'freebsd-s...@freebsd.org'; 'freebsd-stable'
> Subject: RE: LSI SAS2008 mps driver preferred firmware version
>
>
>
> > -Original Message-
> > From: owner-freebsd-s...@freebsd.org [mailto:owner-freebsd-
> > s...@freebsd.org] On Behalf Of Royce Williams
> > Sent: Thursday, November 12, 2015 3:21 PM
> > To: Kai Gallasch
> > Cc: freebsd-s...@freebsd.org; freebsd-stable
> > Subject: Re: LSI SAS2008 mps driver preferred firmware version
> >
> > Firmware should match driver, e.g.:
> >
> > mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbsd
>
> I've never heard of any problems when these are mismatched, so I'm not
> sure why FreeNAS would complain.  Anyway, you should use the latest of
> both in my opinion.
> The latest FW on the avagotech website is 20.00.04.00.  I have heard
> that some FreeBSD users have had some problems with the PH19 FW.
>
> Steve McConnell
>
> >
> >
> > Some of this may help -- not yet updated for 10.2, but may still be useful:
> >
> > http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
> >
> > Royce
> >
> > On Thu, Nov 12, 2015 at 12:05 PM, Kai Gallasch <k...@free.de> wrote:
> > >
> > > Hi.
> > >
> > > I'm currently building a new ZFS based FreeBSD 10.2 server with a
> > > SAS/SATA HBA SAS9211-8i.
> > >
> > > Is there a preferred or recommended firmware version for Fusion-MPT
> > > SAS-2 2008 chipset based LSI cards like the SAS9211-8i? MPS(4) does
> > > not give any information about this.
> > >
> > > The current version of my SAS9211-8i is:
> > > v7.05.05.00 (2010.05.19), BIOS
> > > 5.00.17.00-IR, FW
> > >
> > >
> > > IR vs. IT firmware:
> > >
> > > Are there any advantages replacing the -IR (integrated raid)
> > > firmware on the LSI controller with an -IT (target mode) version, if
> > > the RAID functionality of the HBA is not used at all?
> > >
> > > There were some claims that running the -IR version in a ZFS JBOD
> > > setup would result in a small performance penalty compared to -IT
> > > and that there was a risk that a controller running the -IR firmware
> > > version could potentially damage ZFS data on a disk by putting RAID
> > > metadata somewhere on the drive, even if not using the RAID feature
> > > of the card!

And also, I asked someone who works on the FW about these IR concerns and
he says the only reason for a performance issue is that the IR FW is a bit
larger and therefore the command queue depth will be smaller due to the
amount of resources available, so it is possible to have a slight
performance degradation in some cases.  Other than that, once it is
determined that there are no IR drives the FW acts just like IT. AND there
is no data corruption issue for ZFS disks.  If there is, that would be bad
and a high priority defect would need to be filed :)  If there are no IR
volumes, the FW works just like IT so there would be no reason to write
metadata to a non-IR disk.  Even if there was a separate IR volume, the
ZFS disk would not be written with metadata because it's not part of an IR
volume.

Steve

> > >
> > > I'd appreciate it if someone could shed some light on this.
> > >
> > > Regards,
> > > Kai.
> > >
> > > --
> > > PGP-KeyID = 0x70654D7C4FB1F588
> > > One day a lemming will fly..
> > >
> > >
> > >