Re: [OpenAFS] AFS Performance / ZFS

2019-03-07 Thread Andreas Ladanyi
Hi Jeffrey,
>> Hi,
>>
>> I am testing a box running FreeNAS (BSD) and ZFS. On this box I run a
>> virtualized bhyve guest as the AFS server.
>> [...]
>> Any ideas why the AFS speed is only about 25 MByte/s? Maybe I have to
>> adjust another AFS server parameter?
> There are performance bottlenecks in the bhyve network virtualization
> that severely impact RX throughput.  The weaknesses in the OpenAFS RX
> implementation related to flow control, congestion avoidance, and pacing
> exacerbate the throughput limitations.

That is important information, thanks.

What is the experience with Docker containers (instead of bhyve) and OpenAFS?


Andreas

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] AFS Performance / ZFS

2019-03-07 Thread Jeffrey Altman
On 3/7/2019 9:43 AM, Andreas Ladanyi wrote:
> Hi,
> 
> I am testing a box running FreeNAS (BSD) and ZFS. On this box I run a
> virtualized bhyve guest as the AFS server.
> [...]
> Any ideas why the AFS speed is only about 25 MByte/s? Maybe I have to
> adjust another AFS server parameter?

There are performance bottlenecks in the bhyve network virtualization
that severely impact RX throughput.  The weaknesses in the OpenAFS RX
implementation related to flow control, congestion avoidance, and pacing
exacerbate the throughput limitations.

AuriStorFS customers use TrueNAS to back vice partitions but do so by
exporting the ZFS storage via iSCSI to RHEL7 systems connected to the
TrueNAS server with dedicated bonded 10-gbit NICs.  This combination is
reliable and is capable of filling the iSCSI path.

Jeffrey Altman
AuriStor, Inc.



[OpenAFS] AFS Performance / ZFS

2019-03-07 Thread Andreas Ladanyi
Hi,

I am testing a box running FreeNAS (BSD) and ZFS. On this box I run a
virtualized bhyve guest as the AFS server.

The box has SAS drives (12 Gbit/s) on a 12 Gbit/s HBA. I created some
vice partitions for the AFS server guest and attached them via AHCI.

For the ZFS pool which contains the vice partitions:

- atime and deduplication are off

- lz4 compression is on

The AFS server parameters:

- the UDP buffer size is set to 2 MB and to 8 MB for testing

- AFS sync is set to "never"; ZFS sync is enabled
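For reference, the settings above could be applied roughly like this. The pool/dataset name tank/vicepa is a placeholder for your own setup, and the fileserver line is only a sketch of the relevant OpenAFS options (-udpsize takes bytes):

```shell
# ZFS dataset backing the vice partition ("tank/vicepa" is assumed).
zfs set atime=off tank/vicepa        # no access-time updates on reads
zfs set dedup=off tank/vicepa        # deduplication disabled
zfs set compression=lz4 tank/vicepa  # cheap inline compression

# OpenAFS fileserver options: UDP socket buffer in bytes
# (2 MB here; 8 MB would be 8388608) and no per-operation fsync.
# fileserver -udpsize 2097152 -sync never
```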

From the AFS client (desktop box, 1 GbE) to the virtual AFS server guest
there is a 1 GbE Ethernet connection. If I test this connection with
iperf, I get nearly the full 1 Gbit/s of test data throughput.
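That network check can be reproduced with something like the following (the host name is a placeholder; iperf3 syntax is shown, classic iperf is similar):

```shell
# On the AFS server guest (host name afs1.example.org is assumed):
iperf3 -s

# On the client: measure TCP throughput toward the guest for 10 s.
iperf3 -c afs1.example.org -t 10
```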

If I test on the client side with dd, creating a file in the AFS path, I
get about 25 MByte/s (200 Mbit/s) with both memcache and disk cache.

If I dd with oflag=direct to the unmounted AFS /vicepa partition device
(/dev/sdX) in the AFS server guest system, I get about 1 GByte/s, which
approaches the 12 Gbit/s speed of the SAS drives/HBA.
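The two dd measurements look roughly like this. The targets are placeholders: the runnable line below writes to /tmp so it is safe to try, so substitute your real AFS path; the raw-device baseline is left commented out because it destroys data:

```shell
# Client side: write a test file and let dd report MByte/s.
# /tmp/ddtest stands in for a path like /afs/cell/volume/ddtest.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync

# Server-side baseline: direct I/O to the raw vice partition device,
# bypassing the page cache (destructive -- only on an unmounted disk!):
# dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct
```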


Any ideas why the AFS speed is only about 25 MByte/s? Maybe I have to
adjust another AFS server parameter?


Andreas
