On Apr 26, 2009, at 5:10 AM, Rattledagger wrote:

> On Sat, 25 Apr 2009 18:13:20 -0700, you wrote:
>>
>> The reason I keep questioning the download work check deadlines is
>> this: why would it be fatal if we don't check for even as long as
>> an hour?
>>
>> This is the part I really don't get: what is changing so fast that
>> we have to check every 60 seconds?  I just cannot see it.  At TSI
>> (default 60 minutes) or at task end, sure, no problem ...
>>
>> Other than that, I am at a loss as to why one hour one way or
>> another, or one download one way or another, is critical enough to
>> warrant sudden changes.
>>
>> If work fetch is working correctly, we should not be that
>> overburdened.  If that is what we are protecting against, then work
>> fetch is broken, because we are getting too much work in an
>> unsustainable cycle.
>>
>
> At the time a task is assigned to you it doesn't need to be in
> deadline trouble, but downloading the files necessary to run the task
> isn't instantaneous.  In case of download problems it can take many
> hours, even days, before all the files are downloaded, and by the
> time they are, the task can be in deadline trouble.
>
> Also, in case a project "needs" really fast turnaround times, the
> ability to start crunching immediately after the files are downloaded
> is needed.
>
> So keeping the checks on file downloads is needed, and this shouldn't
> be a big problem.
>
> But I don't see any good reason for recalculating every minute or so;
> once per hour should be adequate.
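
To make the quoted point concrete: the deadline check on download
completion can be a one-shot test that runs only when a task's last
input file arrives, not something that has to happen every minute.
A rough sketch, with a made-up Task struct and hypothetical hook
names (this is not the actual client code):

    #include <ctime>

    struct Task {
        time_t deadline;          // report deadline from the server
        double estimated_runtime; // seconds of crunching still needed
    };

    // Would the task miss its deadline if it had to wait in the queue?
    bool in_deadline_trouble(const Task& t, time_t now) {
        double margin = 0.1 * t.estimated_runtime;  // safety cushion
        return difftime(t.deadline, now) < t.estimated_runtime + margin;
    }

    // Called from the transfer code when the last input file arrives.
    void on_download_complete(const Task& t) {
        if (in_deadline_trouble(t, time(0))) {
            // hypothetical hook: force an immediate scheduling pass
            // request_schedule_cpus("download done, deadline tight");
        }
    }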

One or the other.

Either we need to perform this cycle based on events, or on time.

If events, you get into the situation where, on faster systems, we are
running this process every ten seconds or less, and the interval only
shrinks as systems get faster at doing things.

My i7 isn't even the fastest or widest.  If I had a 965 it would be
20% faster ... and if I had one more PCI-e slot I would have two more
GPUs ... all turning and burning, meaning I would be completing more
tasks and would likely be running this cycle as often as every 6-8
seconds ... what is it going to be like with 16-core systems with
6 GPUs or more? ...

And I am even trying to help by running GPU Grid rather than SaH, plus
CPDN, YoYo, and lots of other projects with long-running tasks ...

When GPU Grid went down for a day I did SaH Beta, and I think I
completed about 2K tasks in about 18 hours; the only reason I did not
run up the score more is that I ran the VLAR tasks, which were quite
slow ...

Anyway, you argue for event-based, then time-based, then both ... it
has to be one or the other.

:)
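
For what it's worth, the two approaches can be combined by
rate-limiting the event path: events request a scheduling pass, but a
pass actually runs no more than once per some minimum interval, and a
timer guarantees at least one per maximum interval.  A sketch, with
invented names and intervals:

    #include <ctime>

    const double MIN_INTERVAL = 60;    // floor: at most one pass a minute
    const double MAX_INTERVAL = 3600;  // ceiling: at least one pass an hour

    static time_t last_pass = 0;
    static bool pass_requested = false;

    // Called on events: download finished, task completed, prefs changed ...
    void request_pass() { pass_requested = true; }

    // Called from the client's frequent polling loop.
    void maybe_run_pass(time_t now) {
        double since = difftime(now, last_pass);
        if ((pass_requested && since >= MIN_INTERVAL) ||
            since >= MAX_INTERVAL) {
            // run_scheduler_pass();  // hypothetical: the expensive recalc
            last_pass = now;
            pass_requested = false;
        }
    }

That way a fast box completing a task every 6-8 seconds pays for at
most one pass per minute, while an idle box still gets its hourly
recalculation.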


Of course, this is also why I suggested tracking the elapsed time
between task completions ... if we do need alternatives, that interval
is a good discriminator as to which processing model makes sense ...

Slower systems will have a longer interval between task
completions ... for those, perhaps we should use a different queuing
model ...
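
Measuring that interval is cheap; a smoothed average of the time
between completions would do as the discriminator.  Again just a
sketch, with invented names and an invented threshold:

    #include <ctime>

    static double avg_interval = 0;    // smoothed seconds between completions
    static time_t last_completion = 0;

    void on_task_completion(time_t now) {
        if (last_completion) {
            double interval = difftime(now, last_completion);
            // exponential moving average, weight 0.1 on the newest sample
            avg_interval = avg_interval ? 0.9 * avg_interval + 0.1 * interval
                                        : interval;
        }
        last_completion = now;
    }

    // A GPU box finishing a task every few seconds counts as "fast";
    // the 5-minute threshold is invented for illustration.
    bool is_fast_host() {
        return avg_interval > 0 && avg_interval < 300;
    }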
