My guess on the 1 GB laptop is that it is thrashing because it doesn't have
enough memory.  That would cause a lot of disk I/O, and could account for
the behavior you're describing.  On CPU-bound tasks, such as the ones
you're running under BOINC, if you exceed physical memory and the OS starts
swapping to disk, it's going to be writing to the swap file **continuously**.
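
If you want to confirm that on the Linux boxes, vmstat (from the procps
package) is a quick check: persistently nonzero values in the si/so
(swap-in/swap-out) columns while the tasks are running would point at
swapping rather than the applications' own writes.

$ vmstat 5        # print memory/swap activity every 5 seconds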

Furthermore, I know Linux has a "swappiness" setting whereby it will write
stuff out to the swap file *before* it needs to, in order to make it easier
to make room for something else later.  Therefore, even if you haven't
actually run out of memory, it could conceivably still be continuously
writing stuff to the swap file but never reading anything back in.  I don't
know whether Windows is as aggressive about preemptively writing stuff out
to its page file.
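
(For reference, and assuming root access, the current value can be read and
lowered at runtime -- the 10 below is just an illustrative value, not a
recommendation:

$ cat /proc/sys/vm/swappiness
$ sudo sysctl vm.swappiness=10

A lower value makes the kernel less eager to swap out process memory in
favor of file cache.)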

I would try running just one instance at a time by setting the "use at most
xx% of the processors" preference to whatever value makes BOINC run only one
task at a time, e.g., 25% on a quad core.  If this makes the I/O drop to
negligible values, then the problem is that you're running too many tasks in
too little memory.
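
If that's more convenient than changing the web preferences, I believe the
same thing can be done locally with a global_prefs_override.xml in the BOINC
data directory (25% assumed here for a quad core), e.g.

<global_preferences>
    <max_ncpus_pct>25.0</max_ncpus_pct>
</global_preferences>

and then re-reading the prefs in the manager or with
"boinccmd --read_global_prefs_override".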

Mike

On Tue, Jun 18, 2013 at 3:03 AM, "Steffen Möller" <[email protected]> wrote:

>
> > Sent: Monday, June 17, 2013 at 21:29
> > From: "David Anderson" <[email protected]>
> > To: [email protected]
> > Subject: Re: [boinc_dev] Can we do shared memory with no disk usage?
> >
> > In situations where BOINC is causing unexpectedly large
> > (> 1 GB/hour) disk I/O,
> > we need to figure out the source of the I/O.
> > -- David
>
>
> My little Centrino laptop ran SETI (the official client) overnight.
>
> $ iostat -h -m
> Linux 3.8-1-rt-amd64 (Toshiba)  06/18/2013      _x86_64_        (2 CPU)
> ...
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> sda              33.80         0.17         0.39       3040       6859
> $ date
> Tue Jun 18 00:30:06 CEST 2013
> $ iostat -h -m
> ...
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> sda              65.32         0.07         0.98       3387      45737
> $ date
> Tue Jun 18 08:36:50 CEST 2013
>
> Looking only at the last column, over those 8 hours that comes to
>
> > (45737-6859)/1024
> [1] 37.9668
>
> GB and consequently
>
> > (45737-6859)/1024/8
> [1] 4.74585
>
> GB/h
>
>
> Another machine, running about 10 tasks in parallel (all Rosetta, one
> WCG), had a bit less I/O ... here iostat was run twice with a one-hour
> sleep in between:
>
> twin1a:~ $ date ; iostat -m -h
> Mon Jun 17 23:34:12 CEST 2013
> Linux 3.2.0-3-amd64 (twin1a)    06/17/2013      _x86_64_        (24 CPU)
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> sda               3.08         0.02         0.30      29091     504228
>
> $ sleep 3600 ; date ; iostat -m -h
> Tue Jun 18 00:34:26 CEST 2013
> ..
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> sda               3.09         0.02         0.30      29091     506276
>
>
> So this is about 2 GB written per hour and nothing read at all?
>
> And yet another machine, same hardware, mostly Einstein with one Rosetta
> and one WCG task:
>
> twin1b:~ $ date; iostat -h -m ; sleep 3600 ; date ; iostat -h -m
> Mon Jun 17 23:58:16 CEST 2013
> ...
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> sda               1.67         0.00         0.12      75848    2377118
>
>
> Tue Jun 18 00:58:16 CEST 2013
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> sda               1.67         0.00         0.12      75848    2379914
>
>
> Which means roughly another 2.7 GB per hour (2379914 - 2377118 = 2796 MB).
>
> The machines were not running anything else but BOINC; the laptop
> additionally had Firefox open in the background.
> No BOINC graphical client was running anywhere.
>
> iostat comes with the sysstat package, for anyone out there to try.
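>
> To see which process is actually doing the writing, pidstat from the same
> sysstat package should be able to break the I/O down per process (interval
> in seconds; needs per-process I/O accounting in the kernel):
>
> $ pidstat -d 60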
>
> The laptop only has 1 GB of memory, which may lead to some swapping and
> account for some of the I/O. Still, to me it looks mostly like some
> process writing a lot of status information that is never read by anyone.
> The missing reads on the big machines might be explained by large I/O
> buffers ... otherwise the couple of uploads that certainly happened should
> have caused some reads.
>
> Cheers,
>
> Steffen
>
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
