Hello again,

On Jan 10, 2010, at 5:39 AM, bank kus wrote:

> Hi Henrik
> I have 16GB RAM on my system; on a system with less RAM, dd does cause
> problems, as I mentioned above. My __guess__ is that dd is probably sitting
> in some in-memory cache, since du -sh doesn't show the full file size until
> I do a sync.
> 
> At this point I'm less interested in QA-type repro questions and
> speculation, and more in the ZFS design expectations.
> 
> What is the expected behaviour: if one thread queues 100 reads and another
> thread comes along later with 50 reads, are those 50 reads __guaranteed__
> to fall behind the first 100, or is timeslicing/fair-share done between
> the two streams?
> 
> Btw, this problem is pretty serious: with 3 users on the system, one of
> them initiating a large copy grinds the other 2 to a halt. Linux doesn't
> have this problem, and this is almost a switch-O/S moment for us,
> unfortunately :-(

Have you reproduced the problem without using /dev/urandom? I can only get this 
behavior when using dd from urandom, not when copying files with cp, and not 
even when copying files with dd. It could then be related to the random driver 
spending kernel time in high-priority threads.
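
For what it's worth, here is a minimal way to separate the two cases (the 
file names and sizes are just placeholders, not from your setup):

    # Case 1: the writer reads from the random driver the whole time
    dd if=/dev/urandom of=/tank/junk bs=1M count=4096

    # Case 2: generate the data once, sync it out, then copy only
    # from a plain file, so urandom is out of the picture
    dd if=/dev/urandom of=/tank/src bs=1M count=4096
    sync
    dd if=/tank/src of=/tank/dst bs=1M

If the other users only stall in case 1, the filesystem is probably not the 
culprit.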

So while I agree that this is not optimal, there is a huge difference in how 
bad it is: the stalls only appear when the data is urandom-generated, and 
plain file copies are fine. Since you also found that it's not specific to 
ZFS (it happens on tmpfs too, and perhaps only with urandom?), we may be on 
the wrong list. Please isolate the problem: if we can rule out the filesystem 
entirely, this belongs elsewhere, so I've added perf-discuss as well.
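
If you want to check the kernel-time theory while the dd is running, 
something like this should show it (Solaris tools; the pgrep pattern is just 
an example):

    # per-thread microstates: look for high SYS% on the dd thread
    prstat -Lm -p `pgrep dd` 1

    # system-wide: watch for one CPU pinned in %sys while the copy runs
    mpstat 1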

Regards

Henrik
http://sparcv9.blogspot.com


