Hi Lee:

I've been experiencing something very similar. I recently (several
months ago) moved a ~30T pool from an "old" OpenIndiana 151a9 system,
where it had been working flawlessly for several years, to a "new"
OmniOSce 151022 installation (zpool export on the old, zpool import on the new).


Now, I have extremely poor NFS write speeds on the new system. I've even
swapped the cards (LSI SAS, 10G Ethernet) from the OI system to the
OmniOS system to eliminate some hardware discrepancies, but this had no
effect whatsoever. It's not a network problem: I can happily get near
line-rate on the 10G network between the server and various
10G-connected hosts. It's not a ZIL/L2ARC problem either; removing them
(they're on SSDs, like yours) had minimal effect.


The new hardware is significantly more performant, with nearly 10x more
memory (240G vs 32G), more cores, and faster CPUs; I never expected
performance to get worse.


I'm not convinced it's a "pure" NFS problem either, as I've noticed some
other strange performance degradation on the new system. The pool used
to take somewhere between 40 and 60 hours to scrub on the OI system.
Recent scrubs were taking 400+ hours. After a recent pkg update and
reboot, the last scrub took ~159 hours. During the scrub, I noticed that
the scanning speed, while starting out relatively fast, decreased almost
monotonically as time went on, going from 50M/s near the beginning to
17M/s at the end. I'll have to see what happens at the next monthly
scrub of the pool.
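For the next scrub, it's easy to log that declining rate over time instead of eyeballing it. A minimal sketch, assuming a pool named "tank" (a placeholder) and the usual `zpool status` scan line format; the live polling loop is shown in comments since it only makes sense on the running system:

```shell
#!/bin/sh
# scan_rate: extract the "at NNN[KMG]/s" figure from `zpool status` output.
scan_rate() {
    awk '{ for (i = 1; i <= NF; i++)
               if ($i ~ /[0-9](\.[0-9]+)?[KMG]\/s/) { gsub(/,/, "", $i); print $i } }'
}

# On a live system, something like this would log one sample every 10 minutes
# for the duration of the scrub ("tank" is a hypothetical pool name):
#   while zpool status tank | grep -q 'scrub in progress'; do
#       printf '%s %s\n' "$(date +%s)" "$(zpool status tank | scan_rate)"
#       sleep 600
#   done

# Demonstrated here on a sample scan line of the kind zpool status prints:
sample="  4.18T scanned out of 30.2T at 17.3M/s, 214h30m to go"
rate=$(printf '%s\n' "$sample" | scan_rate)
echo "$rate"    # prints 17.3M/s
```

Graphing the resulting timestamp/rate pairs should make it obvious whether the slowdown is gradual or hits a cliff at some point in the run.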


Have you looked at your scrub performance?

What else is different between the two machines?



_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss