Inappropriate?  It makes BOINC better do what I ask of it.  If that's
inappropriate, so be it.  Real manual controls would be better.  For
instance, my quads run better with no more than 3 instances of NFS, and
with only 1 instance of Lattice at a time; other, less demanding projects
can run on the remaining cores, however.  I should be able to limit the
number of concurrent instances of any project, but there's no way to do
this in BOINC.  Resetting debts helps alleviate the problem a bit, but I
agree it's a poor solution, one we've been forced into.
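For anyone wanting to try it, the <zero_debts> flag mentioned below isn't a standalone file on its own; it goes inside the <options> section of the client's cc_config.xml.  A minimal sketch of the complete file (place it in the BOINC data directory; the client reads it at startup or via a config re-read):

```xml
<!-- cc_config.xml: minimal example enabling the zero_debts option.
     The client zeroes its long-term debts when it reads this file. -->
<cc_config>
  <options>
    <zero_debts>1</zero_debts>
  </options>
</cc_config>
```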

Regards/Ed


On Thu, Feb 11, 2010 at 8:01 AM, <[email protected]> wrote:

> Overriding the debts means that the resource shares that you set are
> meaningless.  Constantly resetting the debts is in my opinion a fairly
> inappropriate response.
>
> jm7
>
>
> From: Ed A <canoebey...@gmail.com>, 02/10/2010 05:07 PM
> To: [email protected]
> Subject: Re: [boinc_dev] 6.10.32 failing to maintain sufficient work
>
> Hi John,
>
> I keep this in my cc_config.xml at all times:
>
> <zero_debts>1</zero_debts>
>
> Maybe that's why I didn't have the initial weirdness.  I restart the
> clients or reboot every couple of days to reset them.  In my experience
> things run far better at least for my use when the debt system is disabled
> as much as possible.  Here's a suggestion for a new cc_config.xml option:
>
> <disable_debts>1</disable_debts>
>
> What do you think?
>
> Regards/Ed
>
>
> On Wed, Feb 10, 2010 at 2:12 PM, <[email protected]> wrote:
>  It is no particular project.  It does appear to be recovering - which
>  leads me to speculate that one of the time_stats numbers was out of
>  whack somehow.  The item of note is that the machine spent the last
>  month and a half doing an Aqua task that took both processors, and I am
>  currently wondering if that caused the problem somehow.
>
>  The shortfall was 0 in the logs when it should have been around a half
>  day for one CPU.  The other CPU is effectively taken up by a CPDN task
>  that still has about 5 days of run time left.
>
>  jm7
>
>  From: Ed A <canoebey...@gmail.com>, 02/10/2010 02:00 PM
>  To: [email protected]
>
>  I've been using v6.10.32 on 10 machines since it came out and see none
>  of this behavior.  In fact the scheduling seems better than in previous
>  versions, especially for GPU projects.  Increasing the queue immediately
>  causes BOINC to download more work.  I've tested up to queue sizes of 1
>  day on the CPU projects SIMAP and NFS, and it's working as expected
>  here.  I have noticed that if I set a project (on a quad) at 100 and
>  another at 300, it more reliably runs 1 instance of the 100-share
>  project and 3 instances of the 300-share project.  This is VERY good, as
>  some projects take up too many resources to be run on all 4 cores.
>  Earlier BOINC versions seemed far less predictable, and it was a big
>  problem (thus all the requests for manual controls).  I would like to
>  give the BOINC development team big kudos for instituting true backup
>  projects in this version.  Much needed, although I haven't found a
>  project yet with the server software upgrade needed to test the feature.
>  Looking forward to it patiently.  Is it a particular project that you're
>  having problems with?
>
>  Regards/Ed
>
>
>  On Wed, Feb 10, 2010 at 7:58 AM, <[email protected]> wrote:
>
>   I have set BOINC to know that I will be disconnected for 0.7 days, yet
>   it stops trying to fetch work after 0.2 days of work is downloaded.
>   Some recent change to work fetch has changed the policy on how much
>   work should be kept on a machine.
>
>   If there is insufficient work for each CPU to last through the
>   disconnected time, there is not enough work on the machine.
>
>   I sent logs with my last message about this, and have not seen any
>   message in the email list.
>
>   jm7
>
>   _______________________________________________
>   boinc_dev mailing list
>   [email protected]
>   http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
>   To unsubscribe, visit the above URL and
>   (near bottom of page) enter your email address.
>
>
>
>
