Re: LACP with 3 interfaces.

2015-11-17 Thread Kurt Jaeger
Hi!

> We have an NFS server which has three network ports.
> 
> We have bonded these interfaces as a lagg interface, but when we use the
> server it looks like only two interfaces are used.
> 
> This is our rc.conf file
> 
> ifconfig_igb0="up"
> ifconfig_igb1="up"
> ifconfig_igb2="up"
> cloned_interfaces="lagg0"
> ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2
> 192.168.100.222 netmask 255.255.255.0"

This says you are lagg'ing igb0 through igb2.

> ifconfig tells us the following.
[...]
> laggport: igb1 flags=1c
> laggport: igb2 flags=1c
> laggport: igb3 flags=1c

This says it's lagg'ing igb1 through igb3? Why the difference?

-- 
p...@opsec.eu+49 171 3101372 5 years to go !


Re: LACP with 3 interfaces.

2015-11-17 Thread Alan Somers
On Tue, Nov 17, 2015 at 8:26 AM, Johan Hendriks  wrote:

> Is there something we can do to make sure lagg0 uses all the interfaces?

Nope.  LACP doesn't actively load balance its interfaces.  Each flow
gets assigned to a single interface based on a hash of the source and
destination MACs, IP addresses, and TCP/UDP ports.  With many clients,
all interfaces will probably be used, but with few clients, there's a
lot of luck involved.  If you want more bandwidth, you can try
fiddling with IP addresses and port numbers to influence the hash
function, but even if you get it to distribute the way you want, all
your work may be undone by a reboot.  The best option is to buy a
10Gbps NIC for the server.  They aren't too expensive anymore, though
the switches are still pricey.  A cheaper option, if you'll only ever
have 4 clients, is to discard the lagg and assign a separate IP
address to each igb port, then manually distribute those addresses
amongst your clients.  If you do this, you unfortunately won't gain
the reliability features of LACP.
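
If you do want to experiment, the hash layers lagg uses are adjustable at
runtime; a minimal sketch (whether it helps depends entirely on your traffic
mix, and your ifconfig output shows l2,l3,l4 already):

  # Show the current hash layers (the lagghash field in ifconfig output).
  ifconfig lagg0 | grep lagghash
  # Restrict the hash to layer 3/4 headers only (IP addresses and ports):
  ifconfig lagg0 lagghash l3,l4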

-Alan


Re: LACP with 3 interfaces.

2015-11-17 Thread Steven Hartland



On 17/11/2015 15:26, Johan Hendriks wrote:

Hello all

We have an NFS server which has three network ports.

We have bonded these interfaces as a lagg interface, but when we use the
server it looks like only two interfaces are used.

This is our rc.conf file

ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_igb2="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2
192.168.100.222 netmask 255.255.255.0"

ifconfig tells us the following.

lagg0: flags=8843 metric 0 mtu 1500

options=403bb

 ether a0:36:9f:7d:fc:2f
 inet 192.168.100.222 netmask 0xff00 broadcast 192.168.100.255
 nd6 options=29
 media: Ethernet autoselect
 status: active
 laggproto lacp lagghash l2,l3,l4
 laggport: igb1 flags=1c
 laggport: igb2 flags=1c
 laggport: igb3 flags=1c

This shows that your server is using l2,l3,l4 hashing for LACP, but what 
options have you configured on the switch?



Re: LACP with 3 interfaces.

2015-11-17 Thread Alan Somers
On Tue, Nov 17, 2015 at 9:16 AM, Kurt Jaeger  wrote:
> Hi!
>
>> > Is there something we can do to make sure lagg0 uses all the interfaces?
>>
>> Nope.  LACP doesn't actively load balance its interfaces.
>
> On FreeBSD 11
>
> man lagg(4)
>
> says:
>
> The driver currently supports the aggregation protocols failover (the
> default), lacp, loadbalance, roundrobin, broadcast, and none.
>
> with
>
>  roundrobin   Distributes outgoing traffic using a round-robin scheduler
>   through all active ports and accepts incoming traffic from
>   any active port.
>
> If the three ports are needed for sending, shouldn't this work?
>

Be careful with roundrobin or loadbalance.  Both of them will
distribute outbound traffic across all ports, but at the expense of
causing your NFS clients to receive out-of-order TCP packets.  This
increases their CPU load.  You may find that performance with
roundrobin is actually worse than with LACP because of the
out-of-order issue.  Also, neither roundrobin nor loadbalance will
help distribute inbound traffic.
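
For completeness, trying roundrobin is only a one-word change to the lagg
configuration; a sketch of what that would look like, with the caveats above
in mind:

  # rc.conf variant using roundrobin instead of lacp (same ports and address):
  ifconfig_lagg0="laggproto roundrobin laggport igb0 laggport igb1 laggport igb2 192.168.100.222 netmask 255.255.255.0"
  # Switching the protocol on a live lagg should also work for a quick test:
  ifconfig lagg0 laggproto roundrobin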

-Alan


Re: ZFS on labelled partitions (was: Re: LSI SAS2008 mps driver preferred firmware version)

2015-11-17 Thread Freddie Cash
On Tue, Nov 17, 2015 at 12:08 AM, Patrick M. Hausen  wrote:

> Hi, all,
>
> > On 16.11.2015 at 22:19, Freddie Cash wrote:
> >
> > ​You label the disks as they are added to the system the first time.
> That
> > way, you always know where each disk is located, and you only deal with
> the
> > labels.
>
> we do the same for obvious reasons. But I always wonder about the possible
> downsides, because ZFS documentation explicitly states:
>
> ZFS operates on raw devices, so it is possible to create a storage
> pool comprised of logical
> volumes, either software or hardware. This configuration is not
> recommended, as ZFS works
> best when it uses raw physical devices. Using logical volumes
> might sacrifice performance,
> reliability, or both, and should be avoided.
>
> (from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
>
> Can anyone shed some light on why not using raw devices might sacrifice
> performance or reliability? Or is this just outdated folklore?
>

​On Solaris, using raw devices allows ZFS to enable the caches on the disks
themselves, while using any kind of partitioning on the disk forces the
caches to be disabled.

This is not an issue on FreeBSD due to the way GEOM works.  Caches on disks
are enabled regardless of how the disk is accessed (raw, dd-partitioned,
MBR-partitioned, GPT-partitioned, gnop, geli, whatever).
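
A quick way to double-check this on a given box, if you want to see it for
yourself (a sketch, assuming a SCSI/SAS da(4) device; the caching mode page
reports WCE, the write cache enable bit):

  # Dump the caching mode page of the disk; WCE: 1 means the write cache is on.
  camcontrol modepage da0 -m 8
  # For SATA disks attached via ada(4), the write-cache policy is a sysctl:
  sysctl kern.cam.ada.write_cache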

This is a common misconception and FAQ with ZFS on FreeBSD and one reason
to not take any Sun/Oracle documentation at face value, as it doesn't
always apply to FreeBSD.

There were several posts from pjd@ about this back in the 7.x days when ZFS
was first imported to FreeBSD.

-- 
Freddie Cash
fjwc...@gmail.com

Re: LACP with 3 interfaces.

2015-11-17 Thread Kurt Jaeger
Hi!

> > Is there something we can do to make sure lagg0 uses all the interfaces?
> 
> Nope.  LACP doesn't actively load balance its interfaces.

On FreeBSD 11

man lagg(4)

says:

The driver currently supports the aggregation protocols failover (the
default), lacp, loadbalance, roundrobin, broadcast, and none.

with

 roundrobin   Distributes outgoing traffic using a round-robin scheduler
  through all active ports and accepts incoming traffic from
  any active port.

If the three ports are needed for sending, shouldn't this work?

-- 
p...@opsec.eu+49 171 3101372 5 years to go !


LACP with 3 interfaces.

2015-11-17 Thread Johan Hendriks
Hello all

We have an NFS server which has three network ports.

We have bonded these interfaces as a lagg interface, but when we use the
server it looks like only two interfaces are used.

This is our rc.conf file

ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_igb2="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2
192.168.100.222 netmask 255.255.255.0"

ifconfig tells us the following.

lagg0: flags=8843 metric 0 mtu 1500
   
options=403bb
ether a0:36:9f:7d:fc:2f
inet 192.168.100.222 netmask 0xff00 broadcast 192.168.100.255
nd6 options=29
media: Ethernet autoselect
status: active
laggproto lacp lagghash l2,l3,l4
laggport: igb1 flags=1c
laggport: igb2 flags=1c
laggport: igb3 flags=1c


So all looks fine.
But with four machines each putting a 1 GB file on the server, one is always
running at full wire speed while the rest sit at around 30-40 MB/s. It never
uses all three interfaces, only two at most.
So we top out at around 200 MB/s where we were expecting around 300 MB/s.

#systat -if shows this also.

It shows two interfaces at work while igb1 sits there doing nothing:

        /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
   Load Average   |

   Interface      Traffic          Peak              Total
   lagg1     in     0.253 KB/s       0.683 KB/s    459.781 KB
             out    0.000 KB/s       0.000 KB/s      0.000 KB

   lagg0     in     0.289 KB/s     215.439 MB/s      4.113 GB
             out    0.091 KB/s     114.269 MB/s      2.061 GB

   lo0       in     0.000 KB/s       0.068 KB/s      0.770 KB
             out    0.000 KB/s       0.068 KB/s      0.770 KB

   igb2      in     0.011 KB/s     *98.401 MB/s*     1.039 GB
             out    0.022 KB/s       1.474 MB/s     27.311 MB

   igb1      in     0.143 KB/s      *0.466 KB/s*   192.422 KB
             out    0.022 KB/s       1.959 MB/s     27.066 MB

   igb0      in     0.135 KB/s    *117.340 MB/s*     3.074 GB
             out    0.114 KB/s     112.679 MB/s      2.007 GB


Is there something we can do to make sure lagg0 uses all the interfaces?

regards
Johan





After BIOS-Upgrade, I can't (UEFI-) boot anymore

2015-11-17 Thread Rainer Duffner
Hi,

I have an HP DL380 G9 that I boot into FreeBSD 10.1 amd64 from a RAID1
provided by the internal SmartArray controller.

I have upgraded the BIOS to the 2015-10 release and on reboot, I now get a 
message that

/boot/loader.efi 

can’t be found.

I can legacy boot it into mfsbsd and the file is there.

How can I fix this?
Or how can I debug this and why is this failing in the first place?
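
A rough starting point from mfsbsd might be to look at both the EFI system
partition and the FreeBSD root filesystem; device names below are guesses
for this box and need adjusting to the SmartArray volume:

  mount -t msdosfs /dev/da0p1 /mnt      # the EFI system partition
  ls -l /mnt/efi/boot                   # what the firmware hands control to
  mount -o ro /dev/da0p2 /media         # the FreeBSD root filesystem
  ls -l /media/boot/loader.efi          # the file the boot chain reports missing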



Rainer


Re: Bug 204641 - 10.2 UNMAP/TRIM not available on a zfs zpool that uses iSCSI disks, backed on a zpool file target

2015-11-17 Thread Steven Hartland



On 17/11/2015 22:08, Christopher Forgeron wrote:

I just submitted this as a bug:

( https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204641 )

..but I thought I should bring it to the list's attention for more exposure
- If that's a no-no, let me know, as I have a few others that are related
to this that I'd like to discuss.

- - - -


Consider this scenario:

Virtual FreeBSD Machine, with a zpool created out of iSCSI disks.
Physical FreeBSD Machine, with a zpool holding a sparse file that is the
target for the iSCSI disk.

This setup works in an environment with all 10.1 machines, but doesn't with
all 10.2 machines.

- The 10.2 machines are 10.2-p7 RELEASE, updated via freebsd-update, nothing
custom.
- The 10.1 machines are 10.1-p24 RELEASE, updated via freebsd-update, nothing
custom.
- iSCSI is all CAM iSCSI, not the old istgt platform.
- The iSCSI Target is a sparse file, stored on a zpool (not a vdev Target)

The target machine is the same physical machine, with the same zpools - I
either boot 10.1 or 10.2 for testing, and use the same zpool/disks to ensure
nothing is changing.

If I have a 10.2 iSCSI Initiator (client) connected to a 10.2 iSCSI Target,
TRIM doesn't work (shows as NONE below).
If I have a 10.2 iSCSI Initiator (client) connected to a 10.1 iSCSI Target,
TRIM does work.

(There is another bug with that last scenario as well, but I will open it
separately)

...for clarity, a 10.1 iSCSI Initiator connected to a 10.1 iSCSI Target
also works perfectly. I have ~20 of these in the field.

On the 10.1 / 10.2 targets, the ctl.conf file is identical. Zpools are
identical, because they are shared between reboots of the same iSCSI target
machine.



On the 10.2 initiator machine, connected to a 10.2 Target machine:

# sysctl -a | grep cam.da

kern.cam.da.2.minimum_cmd_size: 6
kern.cam.da.2.delete_max: 131072
kern.cam.da.2.delete_method: NONE
kern.cam.da.1.error_inject: 0
kern.cam.da.1.sort_io_queue: 0
kern.cam.da.1.minimum_cmd_size: 6
kern.cam.da.1.delete_max: 131072
kern.cam.da.1.delete_method: NONE
kern.cam.da.0.error_inject: 0
kern.cam.da.0.sort_io_queue: -1
kern.cam.da.0.minimum_cmd_size: 6
kern.cam.da.0.delete_max: 131072
kern.cam.da.0.delete_method: NONE

Note the delete_method is NONE


# sysctl -a | grep trim
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.enabled: 1
vfs.zfs.vdev.trim_max_pending: 1
vfs.zfs.vdev.trim_max_active: 64
vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.trim_on_init: 1
kstat.zfs.misc.zio_trim.failed: 0
kstat.zfs.misc.zio_trim.unsupported: 181
kstat.zfs.misc.zio_trim.success: 0
kstat.zfs.misc.zio_trim.bytes: 0

Note no trimmed bytes.


On the target machine, 10.1 and 10.2 share the same config file:
/etc/ctl.conf

portal-group pg0 {
 discovery-auth-group no-authentication
 listen 0.0.0.0
 listen [::]
}

 lun 0 {
 path /pool92/iscsi/iscsi.zvol
 blocksize 4K
 size 5T
 option unmap "on"
 option scsiname "pool92"
 option vendor "pool92"
 option insecure_tpc "on"
 }
}


target iqn.iscsi1.zvol {
 auth-group no-authentication
 portal-group pg0

 lun 0 {
 path /pool92_1/iscsi/iscsi.zvol
 blocksize 4K
 size 5T
 option unmap "on"
 option scsiname "pool92_1"
 option vendor "pool92_1"
 option insecure_tpc "on"
 }
}


When I boot a 10.1 Target server, the 10.2 initiator connects, and we do
see proper UNMAP ability:


kern.cam.da.2.minimum_cmd_size: 6
kern.cam.da.2.delete_max: 5497558138880
kern.cam.da.2.delete_method: UNMAP
kern.cam.da.1.error_inject: 0
kern.cam.da.1.sort_io_queue: 0
kern.cam.da.1.minimum_cmd_size: 6
kern.cam.da.1.delete_max: 5497558138880
kern.cam.da.1.delete_method: UNMAP
kern.cam.da.0.error_inject: 0
kern.cam.da.0.sort_io_queue: -1
kern.cam.da.0.minimum_cmd_size: 6
kern.cam.da.0.delete_max: 131072
kern.cam.da.0.delete_method: NONE


Please let me know what you'd like to know next.

Having a quick flick through the code, it looks like UNMAP is now only
supported on dev-backed and not file-backed LUNs.


I believe the following commit is the cause:
https://svnweb.freebsd.org/base?view=revision&revision=279005

This was an MFC of:
https://svnweb.freebsd.org/base?view=revision&revision=278672

I'm guessing this was an unintentional side effect, mav?
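
If so, a possible interim workaround might be to back the LUN with a zvol
(dev-backed) instead of a flat file, though I haven't tested that here;
names below are illustrative:

  # Create a sparse 5T zvol to replace the file-backed LUN:
  zfs create -s -V 5T -o volblocksize=4K pool92/iscsi-lun0
  # then point ctl.conf at the device node instead of the file:
  #   path /dev/zvol/pool92/iscsi-lun0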

Regards
Steve


Re: LACP with 3 interfaces.

2015-11-17 Thread Johan Hendriks


On 17/11/15 at 17:15, Steven Hartland wrote:
>
>
> On 17/11/2015 15:26, Johan Hendriks wrote:
>> Hello all
>>
>> We have an NFS server which has three network ports.
>>
>> We have bonded these interfaces as a lagg interface, but when we use the
>> server it looks like only two interfaces are used.
>>
>> This is our rc.conf file
>>
>> ifconfig_igb0="up"
>> ifconfig_igb1="up"
>> ifconfig_igb2="up"
>> cloned_interfaces="lagg0"
>> ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2
>> 192.168.100.222 netmask 255.255.255.0"
>>
>> ifconfig tells us the following.
>>
>> lagg0: flags=8843 metric 0
>> mtu 1500
>>
>> options=403bb
>>  ether a0:36:9f:7d:fc:2f
>>  inet 192.168.100.222 netmask 0xff00 broadcast 192.168.100.255
>>  nd6 options=29
>>  media: Ethernet autoselect
>>  status: active
>>  laggproto lacp lagghash l2,l3,l4
>>  laggport: igb1 flags=1c
>>  laggport: igb2 flags=1c
>>  laggport: igb3 flags=1c
>>
> This shows that your server is using l2,l3,l4 hashing for LACP, but
> what options have you configured on the switch?
> ___
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
The switch is an old 3Com switch used for testing, and it is not very
configurable. It just says LACP and nothing else; I can choose between
static and dynamic.

The interfaces are copy-paste, but the machine has a lot more interfaces,
and at first I started switching interfaces around. The rc.conf shown is
from before I switched to the other interfaces, which is why the interfaces
in the ifconfig output and in the rc.conf file do not match.

Thank you all for your time.
I now know that it will use all the interfaces and that the hashing is the
reason it sticks to only two.
If it uses the MAC address or IP address to hash, that explains the choice
it makes.

Thank you all for your answers and time.

regards
Johan









ZFS on labelled partitions (was: Re: LSI SAS2008 mps driver preferred firmware version)

2015-11-17 Thread Patrick M. Hausen
Hi, all,

> On 16.11.2015 at 22:19, Freddie Cash wrote:
> 
> ​You label the disks as they are added to the system the first time.  That
> way, you always know where each disk is located, and you only deal with the
> labels.

we do the same for obvious reasons. But I always wonder about the possible
downsides, because ZFS documentation explicitly states:

ZFS operates on raw devices, so it is possible to create a storage pool 
comprised of logical
volumes, either software or hardware. This configuration is not 
recommended, as ZFS works
best when it uses raw physical devices. Using logical volumes might 
sacrifice performance,
reliability, or both, and should be avoided.

(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)

Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?

Thanks,
Patrick
--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285





Re: ZFS on labelled partitions (was: Re: LSI SAS2008 mps driver preferred firmware version)

2015-11-17 Thread krad
From what I remember it's a control thing. If you have another layer below
ZFS, be it software or hardware based, ZFS can't be sure what is going on
and therefore can't guarantee anything. This is quite a big deal when it
comes to data integrity, which is a big reason to use ZFS. I remember having
to be very careful with some external caching arrays, making sure that they
flushed correctly, as they often ignore SCSI flush commands. This is one
reason why I would always use the IT firmware rather than the RAID one, as
it's less likely to lead to issues.

On 17 November 2015 at 08:08, Patrick M. Hausen  wrote:

> Hi, all,
>
> > On 16.11.2015 at 22:19, Freddie Cash wrote:
> >
> > ​You label the disks as they are added to the system the first time.
> That
> > way, you always know where each disk is located, and you only deal with
> the
> > labels.
>
> we do the same for obvious reasons. But I always wonder about the possible
> downsides, because ZFS documentation explicitly states:
>
> ZFS operates on raw devices, so it is possible to create a storage
> pool comprised of logical
> volumes, either software or hardware. This configuration is not
> recommended, as ZFS works
> best when it uses raw physical devices. Using logical volumes
> might sacrifice performance,
> reliability, or both, and should be avoided.
>
> (from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
>
> Can anyone shed some light on why not using raw devices might sacrifice
> performance or reliability? Or is this just outdated folklore?
>
> Thanks,
> Patrick
> --
> punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
> Tel. 0721 9109 0 * Fax 0721 9109 100
> i...@punkt.de   http://www.punkt.de
> Gf: Jürgen Egeling  AG Mannheim 108285
>
>

Re: ZFS on labelled partitions

2015-11-17 Thread Miroslav Lachman

Patrick M. Hausen wrote on 11/17/2015 09:08:

Hi, all,


On 16.11.2015 at 22:19, Freddie Cash wrote:

​You label the disks as they are added to the system the first time.  That
way, you always know where each disk is located, and you only deal with the
labels.


we do the same for obvious reasons. But I always wonder about the possible
downsides, because ZFS documentation explicitly states:

ZFS operates on raw devices, so it is possible to create a storage pool 
comprised of logical
volumes, either software or hardware. This configuration is not 
recommended, as ZFS works
best when it uses raw physical devices. Using logical volumes might 
sacrifice performance,
reliability, or both, and should be avoided.

(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)

Can anyone shed some lght on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?


That was the case on Solaris but not on FreeBSD. If you were using
partitions on Solaris, the drive cache was disabled (or something like that,
I am not 100% sure).


Miroslav Lachman


Re: ZFS on labelled partitions

2015-11-17 Thread krad
It was a control thing again: if you were using a partition, another
application could be using the drive via another partition, so ZFS couldn't
guarantee exclusive use of the disk and had to be more careful in the way it
operated the drive. I think this meant it went into write-through mode, like
you say.

On 17 November 2015 at 08:22, Miroslav Lachman <000.f...@quip.cz> wrote:

> Patrick M. Hausen wrote on 11/17/2015 09:08:
>
>> Hi, all,
>>
>> On 16.11.2015 at 22:19, Freddie Cash wrote:
>>>
>>> ​You label the disks as they are added to the system the first time.
>>> That
>>> way, you always know where each disk is located, and you only deal with
>>> the
>>> labels.
>>>
>>
>> we do the same for obvious reasons. But I always wonder about the possible
>> downsides, because ZFS documentation explicitly states:
>>
>> ZFS operates on raw devices, so it is possible to create a
>> storage pool comprised of logical
>> volumes, either software or hardware. This configuration is not
>> recommended, as ZFS works
>> best when it uses raw physical devices. Using logical volumes
>> might sacrifice performance,
>> reliability, or both, and should be avoided.
>>
>> (from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
>>
>> Can anyone shed some light on why not using raw devices might sacrifice
>> performance or reliability? Or is this just outdated folklore?
>>
>
> It was on Solaris but not on FreeBSD. If you were using partitions on
> Solaris the drive cache was disabled (or something like that, I am not 100%
> sure)
>
> Miroslav Lachman
>
> ___
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: LSI SAS2008 mps driver preferred firmware version

2015-11-17 Thread krad
I disagree, get the remote hands to copy the serial number to an easily
visible location on the drive when its in the enclosure. Then label the
drives with the serial number (or a compatible version of it). That way the
label is tied to the drive, and you dont have to rely on the remote hands
100%. Better still do the physical labelling yourself
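
A couple of ways to read the serial from the OS side, so the physical label
and what the OS sees can be matched up (a sketch; da0 is just an example):

  # SCSI/SAS disks: print only the serial number.
  camcontrol inquiry da0 -S
  # Any disk known to GEOM: the 'ident' field is the drive serial.
  geom disk list da0 | grep ident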

On 16 November 2015 at 21:19, Freddie Cash  wrote:

> On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov 
> wrote:
>
> > On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:
> >
> > > On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman 
> > wrote:
> > > > As already mentioned, unless you are using zfs, use gpart to label
> you
> > file
> > > > systems/disks. Then use the /dev/gpt/LABEL as the mount device in
> > fstab.
> > > >
> > >
> > > ​Even if you are using ZFS, labelling the drives with the location of
> the
> > > disk in the system (enclosure, column, row, whatever) makes things so
> > much
> > > easier to work with when there are disk-related issues.
> > >
> > > Just create a single partition that covers the whole disk, label it,
> and
> > > use the label to create the vdevs in the pool.​
> >
> > Bad idea.
> > Re-placed disk in different bay don't relabel automaticly.
> >
>
> ​Did the original disk get labelled automatically?  No, you had to do that
> when you first started using it.  So, why would you expect a replaced disk
> to get labelled automatically?
>
> Offline the dead/dying disk.
> Physically remove the disk.
> Insert the new disk.
> Partition / label the new disk.
> "zfs replace" using the new label to get it into the pool.​
>
>
> > Other issuse where disk placed in bay some remotely hands in data
> > center -- I am relay don't know how disk distributed by bays.
> >
>
> ​You label the disks as they are added to the system the first time.  That
> way, you always know where each disk is located, and you only deal with the
> labels.
>
> Then, when you need to replace a disk (or ask someone in a remote location
> to replace it) it's a simple matter:  the label on the disk itself tells
> you where the disk is physically located.  And it doesn't change if the
> controller decides to change the direction it enumerates devices.
>
> Which is easier to tell someone in a remote location:
>   Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
> or
>   Replace the disk called da36?​
> ​or
>   Find the disk with serial number ?
> or
>   Replace the disk where the light is (hopefully) flashing (but I can't
> tell you which enclosure, front or back, or anything else like that)?
>
> The first one lets you know exactly where the disk is located physically.
>
> The second one just tells you the name of the device as determined by the
> OS, but doesn't tell you anything about where it is located.  And it can
> change with a kernel update, driver update, or firmware update!
>
> The third requires you to pull every disk in turn to read the serial number
> off the drive itself.
>
> In order for the second or third option to work, you'd have to write down
> the device names and/or serial numbers and stick that onto the drive bay
> itself.​
>
>
> > Best way for identify disk -- uses enclouse services.
> >
>
> ​Only if your enclosure services are actually working (or even enabled).
> I've yet to work on a box where that actually works (we custom-build our
> storage boxes using OTS hardware).
>
> Best way, IMO, is to use the physical location of the device as the actual
> device name itself.  That way, there's never any ambiguity at the physical
> layer, the driver layer, the OS layer, or the ZFS pool layer.​
>
>
> > I have many sites with ZFS on whole disk and some sites with ZFS on
> > GPT partition. ZFS on GPT more heavy for administration.
> >
>
> ​It's 1 extra step:  partition the drive, supplying the location of the
> drive as the label for the partition.
>
> Everything else works exactly the same.
>
> I used to do everything with whole drives and no labels.  Did that for
> about a month, until 2 separate drives on separate controllers died (in a
> 24-bay setup) and I couldn't figure out where they were located as a BIOS
> upgrade changed which controller loaded first.  And then I had to work on a
> server that someone else configured with direct-attach bays (24 cables)
> that were connected almost at random.
>
> Then I used glabel(8) to label the entire disk, and things were much
> better.  But that didn't always play well with 4K drives, and replacing
> drives that were the same size didn't always work as the number of sectors
> in each disk was different (ZFS plays better with this now).
>
> Then I started to GPT partition things, and life has been so much simpler.
> All the partitions are aligned to 1 MB, and I can manually set the size of
> the partition to work around different physical sector counts.  All the
> partitions are labelled using the physical location of the disk 

Bug 204641 - 10.2 UNMAP/TRIM not available on a zfs zpool that uses iSCSI disks, backed on a zpool file target

2015-11-17 Thread Christopher Forgeron
I just submitted this as a bug:

( https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204641 )

..but I thought I should bring it to the list's attention for more exposure
- If that's a no-no, let me know, as I have a few others that are related
to this that I'd like to discuss.

- - - -


Consider this scenario:

Virtual FreeBSD Machine, with a zpool created out of iSCSI disks.
Physical FreeBSD Machine, with a zpool holding a sparse file that is the
target for the iSCSI disk.

This setup works in an environment with all 10.1 machines, but doesn't with
all 10.2 machines.

- The 10.2 machines are 10.2-p7 RELEASE, updated via freebsd-update, nothing
custom.
- The 10.1 machines are 10.1-p24 RELEASE, updated via freebsd-update, nothing
custom.
- iSCSI is all CAM iSCSI, not the old istgt platform.
- The iSCSI Target is a sparse file, stored on a zpool (not a vdev Target)

The target machine is the same physical machine, with the same zpools - I
either boot 10.1 or 10.2 for testing, and use the same zpool/disks to ensure
nothing is changing.

If I have a 10.2 iSCSI Initiator (client) connected to a 10.2 iSCSI Target,
TRIM doesn't work (shows as NONE below).
If I have a 10.2 iSCSI Initiator (client) connected to a 10.1 iSCSI Target,
TRIM does work.

(There is another bug with that last scenario as well, but I will open it
separately)

...for clarity, a 10.1 iSCSI Initiator connected to a 10.1 iSCSI Target
also works perfectly. I have ~20 of these in the field.

On the 10.1 / 10.2 targets, the ctl.conf file is identical. Zpools are
identical, because they are shared between reboots of the same iSCSI target
machine.



On the 10.2 initiator machine, connected to a 10.2 Target machine:

# sysctl -a | grep cam.da

kern.cam.da.2.minimum_cmd_size: 6
kern.cam.da.2.delete_max: 131072
kern.cam.da.2.delete_method: NONE
kern.cam.da.1.error_inject: 0
kern.cam.da.1.sort_io_queue: 0
kern.cam.da.1.minimum_cmd_size: 6
kern.cam.da.1.delete_max: 131072
kern.cam.da.1.delete_method: NONE
kern.cam.da.0.error_inject: 0
kern.cam.da.0.sort_io_queue: -1
kern.cam.da.0.minimum_cmd_size: 6
kern.cam.da.0.delete_max: 131072
kern.cam.da.0.delete_method: NONE

Note the delete_method is NONE


# sysctl -a | grep trim
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.enabled: 1
vfs.zfs.vdev.trim_max_pending: 1
vfs.zfs.vdev.trim_max_active: 64
vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.trim_on_init: 1
kstat.zfs.misc.zio_trim.failed: 0
kstat.zfs.misc.zio_trim.unsupported: 181
kstat.zfs.misc.zio_trim.success: 0
kstat.zfs.misc.zio_trim.bytes: 0

Note no trimmed bytes.


On the target machine, 10.1 and 10.2 share the same config file:
/etc/ctl.conf

portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
listen [::]
}

lun 0 {
path /pool92/iscsi/iscsi.zvol
blocksize 4K
size 5T
option unmap "on"
option scsiname "pool92"
option vendor "pool92"
option insecure_tpc "on"
}
}


target iqn.iscsi1.zvol {
auth-group no-authentication
portal-group pg0

lun 0 {
path /pool92_1/iscsi/iscsi.zvol
blocksize 4K
size 5T
option unmap "on"
option scsiname "pool92_1"
option vendor "pool92_1"
option insecure_tpc "on"
}
}


When I boot a 10.1 Target server, the 10.2 initiator connects, and we do
see proper UNMAP ability:


kern.cam.da.2.minimum_cmd_size: 6
kern.cam.da.2.delete_max: 5497558138880
kern.cam.da.2.delete_method: UNMAP
kern.cam.da.1.error_inject: 0
kern.cam.da.1.sort_io_queue: 0
kern.cam.da.1.minimum_cmd_size: 6
kern.cam.da.1.delete_max: 5497558138880
kern.cam.da.1.delete_method: UNMAP
kern.cam.da.0.error_inject: 0
kern.cam.da.0.sort_io_queue: -1
kern.cam.da.0.minimum_cmd_size: 6
kern.cam.da.0.delete_max: 131072
kern.cam.da.0.delete_method: NONE


Please let me know what you'd like to know next.

Thanks.


Re: Some filesystem performance numbers

2015-11-17 Thread Christopher Forgeron
Sounds interesting. I'd love to see your results when you're ready to
share, or even the 'work in progress' if you want to share privately.
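
For anyone who wants to reproduce the comparison below, the checksum and
compression settings are per-dataset ZFS properties; a minimal sketch
(pool/dataset name is illustrative):

  # The two checksum algorithms from point 1):
  zfs set checksum=fletcher4 tank/bench
  zfs set checksum=sha256 tank/bench
  # The compression settings from point 3):
  zfs set compression=lz4 tank/bench
  zfs set compression=lzjb tank/bench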



On Tue, Jan 13, 2015 at 6:30 PM, Garrett Wollman 
wrote:

> I recently bought a copy of the SPECsfs2014 benchmark, and I've been
> using it to test out our NFS server platform.  One scenario of
> interest to me is identifying where the limits are in terms of the
> local CAM/storage/filesystem implementation versus bottlenecks unique
> to the NFS server, and to that end I've been running the benchmark
> suite directly on the server's local disk.  (This is of course also
> the way you'd benchmark for shared-nothing container-based
> virtualization.)
>
> I have found a few interesting results on my test platform:
>
> 1) I can quantify the cost of using SHA256 vs. fletcher4 as the ZFS
> checksum algorithm.  On the VDA workload (essentially a simulated
> video streaming/recording application), my server can do about half as
> many "streams" with SHA256 as it can with fletcher4.
>
> 2) Both L2ARC and separate ZIL have small but measurable performance
> impacts.  I haven't examined the differences closely.
>
> 3) LZ4 compression also makes a small performance impact, but as
> advertised, less than LZJB for mostly-incompressible data.
>
> I hope to be able to present the actual benchmark results at some
> point, as well as some results for the other three workloads.
>
> -GAWollman
> ___
> freebsd...@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscr...@freebsd.org"
>