The mbuffer on the receiving side does fill up, but at that point it's 
sustaining 270MB/sec throughput, which I'm happy with.
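
For context, the pipeline is the usual mbuffer-between-send-and-receive 
arrangement, roughly like this (host, port, dataset names, and buffer sizes 
below are placeholders, not my exact command):

    # on the receiving host
    mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

    # on the sending host
    zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090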

What I'm unhappy about is that mbuffer bombs before the process completes, 
with a broken pipe error shown on both sides. Several folks have indicated 
that they have not experienced these issues with mbuffer, so I was wondering 
if it might have something to do with my 10Gbit cards.

Both sides have pools of 10x256GB SSDs in a raidz2 vdev.

Testing with bonnie++ showed that the pools can write at over 450MB/sec.
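
The bonnie++ run was along these lines (directory, size, and user are 
placeholders; the -s size should be at least twice RAM so caching doesn't 
skew the result):

    # sequential write benchmark on the pool
    bonnie++ -d /tank/bench -s 64g -u nobody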

I think the bottleneck when using mbuffer may be in the pipe used to 
move data from mbuffer to zfs receive.
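
One way to test that theory would be to take the network out entirely and 
replay a saved send stream through the same pipe locally, e.g. (file and 
dataset names here are hypothetical):

    # capture a send stream to a file, then replay it through the same pipe
    zfs send tank/fs@snap > /tank/stream.zfs
    mbuffer -s 128k -m 1G -i /tank/stream.zfs | zfs receive -F tank/test

If that also tops out around 270MB/sec, the pipe into zfs receive is the 
limit rather than the NICs.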

Richard J.
-- 
This message posted from opensolaris.org