Gino Ruopolo writes:
 > Thank you Bill for your clear description.
 > 
 > Now I have to find a way to justify myself with my head office that
 > after spending 100k+ in hw and migrating to "the most advanced OS" we
 > are running about 8 times slower :) 
 > 
 > Anyway, I have a problem much more serious than rsync process speed. I
 > hope you'll help me solve it! 
 > 
 > Our situation:
 > 
 > /data/a
 > /data/b
 > /data/zones/ZONEX    (whole root zone)
 > 
 > As you know, I have a process running "rsync -ax /data/a/* /data/b" for
 > about 14 hrs. 
 > The problem is that, while that rsync process is running, ZONEX is
 > completely unusable because of the rsync I/O load. 
 > Even though we're using FSS, Solaris seems unable to grant even a
 > small amount of I/O bandwidth to ZONEX's activity ... 
 > 
 > I know that FSS doesn't deal with I/O, but I think Solaris should be
 > smarter ... 
 > 
 > To draw a comparison, FreeBSD Jail doesn't suffer from this problem
 > ... 
 > 
 > thanks,
 > Gino
 >  
 >  
 > This message posted from opensolaris.org
 > _______________________________________________
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Under a streaming write load, we tend to overwhelm the devices,
and reads become few and far between. To alleviate this we need
to throttle writers more aggressively:

        6429205 each zpool needs to monitor its throughput and throttle
        heavy writers

That bug is in a "fix in progress" state. At the same time, the
notion of reserving slots for reads is being investigated; that
should do wonders for your issue. I don't know of a workaround
for now, apart from starving the rsync process of CPU access.
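
A minimal sketch of that CPU-starving workaround, assuming the poster's
/data paths; nice(1) and rsync's --bwlimit are my suggestion here, not
something from the bug report, and the printed command is shown rather
than executed:

```shell
# Hypothetical paths standing in for the poster's /data layout.
SRC=/data/a/
DST=/data/b/

# Lowest CPU priority via nice(1); --bwlimit (KiB/s) caps rsync's own
# transfer rate, which indirectly eases write pressure on the pool.
# On Solaris, lowering FSS shares for the rsync process's project would
# be the more targeted knob, but that is zone/project-specific setup.
CMD="nice -n 19 rsync -ax --bwlimit=10000 $SRC $DST"

# Printed rather than executed, since this is only a sketch.
echo "$CMD"
```

This only slows rsync down; it does not give ZONEX a guaranteed share
of I/O, which is what the bug fix above is meant to address.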


-r

