Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-18 Thread Jim Klimov
On 17 September 2016 at 02:37 CEST, Ergi Thanasko wrote:
>No, that is even slower; that was just rsync over a mounted NFS share. A
>multithreaded rsync does give better performance.

Note that with rsync over NFS (or any seemingly local FS) your sender spawns a 
receiver process which must also read the existing data to see what 
differences rsync should transfer. So over your links you read the entirety of 
the data from the remote host, and send something back. Unless your other side 
has a pathologically weak CPU (think hobby ARM board), you are better off with 
a real networked rsync (native protocol or rsh/ssh etc.), where sender and 
receiver run on their respective hosts and process the bulk of the data locally.
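
For illustration, a minimal sketch of the two modes (hostname and paths are 
placeholders):

  # rsync onto an NFS mount: the local rsync also reads the existing remote
  # data back over the wire to compute the differences
  rsync -a /tank/data/ /net/remotehost/tank/data/

  # networked rsync: a second rsync runs on remotehost and reads its own
  # disks locally, so mostly the differences cross the network
  rsync -a /tank/data/ remotehost:/tank/data/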

Also, NFS I/O by design defaults to synchronous writes, so it forfeits 
write caching (with ZFS you can put the ZIL on an SSD to mitigate that 
performance impact and keep the data-security guarantees); some other OSes 
pretend they are fast by ignoring the sync requirement, and then you have fun 
times after a crash, failover etc. on the NFS server side.
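
A minimal sketch of that ZIL mitigation, assuming a pool named tank and a 
spare SSD at c1t5d0 (both hypothetical):

  # dedicate the SSD as a separate log device, so synchronous NFS writes
  # are acknowledged from the SSD instead of the main vdevs
  zpool add tank log c1t5d0
  zpool status tank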

Jim
--
Typos courtesy of K-9 Mail on my Samsung Android


Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-17 Thread Guenther Alka

Hi,
the intention of this test and tuning cycle was to check 4K video editing 
capability from OS X and Windows to Solaris or OmniOS storage, over 10G/40G, 
to SSD or NVMe storage.


With tunings and large 4K video files I was able to get about 900MB/s on 
write, with peaks up to 1000MB/s, over SMB 2.1, tested with the video editing 
tool AJA on Windows and a speed test on OS X. These are more or less 
sequential tests with a lot of large files. Values on Solaris were slightly 
better than on OmniOS, so I assume the OS or driver defaults are more 
optimized there, at least regarding 10G/40G. Reads were always slower and 
more sensitive to settings and cabling. NFS values were not nearly as good 
and quite disappointing, at least on OS X.


With smaller NTSC/PAL video settings (the test then uses many small files), 
performance went down to 500-600 MB/s on writes, and a little lower on 
reads.


I am currently doing some tests with the i40e driver on an Intel XL710, 
where the difference is large: 2200 MB/s writes on Solaris versus 1500 MB/s 
on OmniOS with the same settings, while reads are currently a disaster, at 
least on Windows, with up to 300 MB/s on Solaris and 150 MB/s on OmniOS.


In the best cases these values were nearly as good as the iperf values, so 
near wire speed.


I would not assume that you can reach high performance values with rsync. 
ZFS send should be faster, as it creates a file stream, but I would not 
expect even zfs send over mbuffer or netcat to come close to the above. A 
pure copy from/to OS X or Windows, or cp over netcat, should be as fast.
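
For what it is worth, a sketch of such a pipeline, assuming a snapshot 
tank/data@snap and port 9090 (dataset, hostname, port and buffer sizes are 
placeholders):

  # on the receiving host: listen, buffer, and feed zfs receive
  mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/data

  # on the sending host: stream the snapshot through mbuffer to the receiver
  zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090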


Gea


On 17.09.2016 at 02:57, Ergi Thanasko wrote:

Hi Gea,
Great info. Are you seeing 1000MB/s doing iperf, or actual transfer rates 
with rsync, cp, bbcp…?





Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Ergi Thanasko
Hi Gea,
Great info. Are you seeing 1000MB/s doing iperf, or actual transfer rates 
with rsync, cp, bbcp…?




Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Ergi Thanasko
No, that is even slower; that was just rsync over a mounted NFS share. A 
multithreaded rsync does give better performance.


On Sep 16, 2016, at 5:24 PM, Dale Ghent wrote:

Are you doing the rsync over ssh? You might want to look into using HPN-SSH:

https://www.psc.edu/index.php/hpn-ssh

/dale



Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Dale Ghent
Are you doing the rsync over ssh? You might want to look into using HPN-SSH:

https://www.psc.edu/index.php/hpn-ssh
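
Before switching, it can be worth checking how much plain ssh itself caps 
the transfer; a rough test, with remotehost as a placeholder:

  # measure raw ssh pipe throughput, independent of rsync and the disks
  dd if=/dev/zero bs=1024k count=1024 | ssh remotehost 'cat > /dev/null'

If this tops out well below wire speed, the ssh channel (cipher and window 
sizes) is the limit, which is what HPN-SSH addresses.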

/dale



Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Davide Poletto
I hope I am not wrong here, but port trunking deserves its part in the whole
picture too: be aware that using port trunking (with LACP as per IEEE
802.3ad) between your servers' NICs and your 10Gb switching infrastructure,
aggregating "n" identical ports together on both ends of the link as you
wrote, does not mean that your one-host-to-one-host traffic will be able to
use, let alone saturate, all "n" 10Gb links concurrently: LACP hashes each
flow onto a single member link, so any one TCP connection tops out at a
single link's speed.
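
This is easy to verify with iperf, since parallel streams land in different
hash buckets while a single stream rides one member link (remotehost is a
placeholder):

  # single TCP stream: limited to one member link of the aggregation
  iperf -c remotehost

  # eight parallel streams: may spread across several member links
  iperf -c remotehost -P 8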



Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Bob Friesenhahn

On Fri, 16 Sep 2016, Ergi Thanasko wrote:

We were testing it between two similar servers; rsync and copy-paste both 
ways (read/write) were about the same, around 300MB/sec average. Of 
course the speed tests on the pools themselves provide higher throughput, 
around 600MB/sec


I am not sure what 'copy paste' means, but rsync can be used in a 
couple of different ways.  One way is that rsync writes over NFS 
to the server.  Another way is that rsync communicates via some means 
(ssh, netcat) with another rsync running on the server.  Rsync must 
always query to see if/what data already exists while it does the 
copy.


It is useful to see whether, when you do two copies at the same time, the 
combined rate is much more than the 300MB/sec average.
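
For example (paths and hostname are placeholders):

  # run two copies concurrently; if the combined rate roughly doubles,
  # a single writer (latency), not the pipe, is the bottleneck
  rsync -a /tank/a/ remotehost:/tank/a/ &
  rsync -a /tank/b/ remotehost:/tank/b/ &
  wait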


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Guenther Alka
I have made some investigations into 10G and found that 300-400MB/s is 
what to expect with default settings. Improvements up to 1000MB/s are 
possible via MTU 9000 and by increasing the IP buffers, e.g.

max_buf=4097152 tcp
send_buf=2048576 tcp
recv_buf=2048576 tcp

plus the NFS lockd servers (e.g. 1024), the NFS number of threads (e.g. 64) 
and the NFS transfer size (e.g. 1048576).


http://napp-it.org/doc/downloads/performance_smb2.pdf
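
For reference, a sketch of how such settings might be applied on illumos 
(the link name ixgbe0, the paths and the exact values are assumptions; see 
the PDF above for the measured combinations):

  # TCP buffers
  ipadm set-prop -p max_buf=4097152 tcp
  ipadm set-prop -p send_buf=2048576 tcp
  ipadm set-prop -p recv_buf=2048576 tcp

  # jumbo frames on the 10G link
  dladm set-linkprop -p mtu=9000 ixgbe0

  # NFS server knobs
  sharectl set -p lockd_servers=1024 nfs
  sharectl set -p servers=64 nfs

  # NFS transfer size is negotiated at mount time on the client
  mount -F nfs -o rsize=1048576,wsize=1048576 server:/tank/data /mnt/data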


Gea



Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Michael Talbott
Jumbo frames are a major help. Also, try using multiple streams (break a 
single rsync job into multiple jobs). Also, be sure to use rsync's native 
protocol and don't tunnel it over ssh. Then there's bbcp, which can split a 
single copy operation into multiple streams to fully saturate your 
disks/network ;) 
https://www.slac.stanford.edu/~abh/bbcp/
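
A hedged sketch of both ideas (hostnames, paths, the rsync daemon module 
name and the stream counts are made up):

  # split one rsync job into four parallel jobs, one per top-level directory,
  # over the native rsync protocol (GNU xargs; assumes a daemon module 'backup')
  ls /tank/data | xargs -P 4 -I {} rsync -a /tank/data/{} remotehost::backup/

  # bbcp pushing a single large file over 8 network streams
  bbcp -P 2 -s 8 /tank/data/bigfile remotehost:/tank/data/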


Michael




Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Ergi Thanasko
Hi Bob,
We were testing it between two similar servers; rsync and copy-paste both 
ways (read/write) were about the same, around 300MB/sec average. Of course 
the speed tests on the pools themselves provide higher throughput, around 
600MB/sec.



Re: [OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Bob Friesenhahn

On Fri, 16 Sep 2016, Ergi Thanasko wrote:

Given the hardware that we have and the zpool performance, we 
expected to see some serious data transfer rates; however, we only see 
around 200-300MB/sec average using rsync or copy-paste over NFS, 
with standard MTU 1500 and NFS block size.  I want to ask the community


Are these read rates or write rates?  Read rates should be able to 
come close to pool or wire limits.  Write rates over NFS are primarily 
dominated by latency on a per-writer basis.


Increasing the MTU to 9k has been shown to improve throughput quite a 
lot for large transfers.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


[OmniOS-discuss] Network throughput 1GB/sec

2016-09-16 Thread Ergi Thanasko
Hi all,
We have a few servers connected via 10G NICs with LACP; some of them have 4 
NICs and some have 6 NICs in link aggregation mode. We have been moving a 
lot of data around and we are trying to get the maximum performance. I have 
seen the zpools deliver 2-3GB/sec aggregate throughput. Iperf does about 
600-800MB/sec between those two servers.
Given the hardware that we have and the zpool performance, we expected to 
see some serious data transfer rates; however, we only see around 
200-300MB/sec average using rsync or copy-paste over NFS, with standard MTU 
1500 and NFS block size. I want to ask the community what to do to get 
higher throughput at the application level. I hear ZFS send/receive or ZFS 
shadow migration works faster, but it relies on snapshots. Our data 
(terabytes) is constantly evolving, and we would prefer something in the 
nature of rsync, but one that utilizes the network hardware.

Does anyone have a hardware setup that can see 1GB/sec throughput and does 
not mind sharing?
Is there any software that uses multithreaded sessions to move data around 
and is ZFS friendly? We would not mind going with a commercial solution 
like Commvault or Veeam if they work.

Thank you for your time


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss