[email protected] wrote:
> Note that scheduling and enforcement are split. They do not always run at
> the same time. Scheduling consists of finding the preferred set of tasks
> that should be running now. Enforcement consists of comparing the
> currently running set of tasks with the preferred set of tasks and possibly
> changing the set of running tasks.

Yes, but my view is: "Why make it more complicated than it needs to be?"

> I really don't understand the reasoning of schedule(job) or reschedule.
> There can never be a perfect understanding of what just changed in the
> system because one of the things that changes all of the time is the
> estimated remaining runtime of tasks, and this is one of the items that
> needs to drive the calculation of what is going to miss deadline. What is
> going to miss deadline depends on all of the other tasks on the host, and a
> single task cannot be isolated from the rest for this test.

Well, I see a difference between scheduling a job for the first time (in my case, putting it into the queue at a fitting position) and rescheduling (i.e. moving a TSI job from the CPU to the 'not running' pool and refilling that free slot with work, etc.).
And yes, the estimated runtime changes all the time, but do we need to react every time it is reported back? Why not simply look at it when one of the (re)scheduling events listed below occurs?

> jm7
>
> Jonathan Hoser <[email protected]> wrote on 04/28/2009 10:25:02 AM:
>
>> In my eyes this amounts to the following
>>
>> checkpointing:
>>     if task.runtime < TSI
>>         do nothing
>>     else
>>         halt job
>>         do rescheduling (job)
>
> Does not capture the fact that some tasks may need to start before the task
> they are pre-empting has completed its time slice. Also does not seem to
> have multi-CPU resources in mind.

1. Either we have time-slicing or we don't. If we really get a job with a deadline so close that waiting until the end of the current timeslice (with more CPU cores an ever more frequent event) really means its ultimate failure, then something is wrong that we needn't fix; it shouldn't be the client's aim to fix the shortcomings of the supplying projects.
2. Yes, it does have multiple CPUs in mind. Or do you want to tell me that every app is asked to checkpoint at the same time (across multiple cores), i.e. checkpoints in complete synchronisation with all other running apps? I think not. So this event will likely be triggered more often than the others, but it will actually only do something if the timeslice/TSI of THAT app on THAT core is up.

>> task complete:
>>     if task.complete
>>         do rescheduling
>>
>> download complete:
>>     do scheduling
>>
>> project/task resume/suspend:
>>     do rescheduling
>>
>> maybe (for the sake of completeness):
>> RPC complete:
>>     if server asks to drop WU
>>         halt job;
>>         do rescheduling (job)
>>
>> The trickier part of course are the
>> scheduling/rescheduling calls, and I'm currently leafing through my
>> notepad looking for the sketch...
>> for my idea we'd need
>>
>> list of jobs run (wct, project)
>> -> containing the wall-clock times for every job run during the last 24
>>    hours.
>
> How does the last 24 hours help? Some tasks can run for days or weeks.

Then extend it to a fitting period of time. 24h is an idea I picked up from Paul, 'to keep the mix interesting', and one I like. So if a task has been running 24h a day for 7 days, we needn't give it another second, unless this is our ultimate high-priority project with a priority/resource share of 1000:1 or so.
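For illustration only, here is a minimal C++ sketch of the event-driven triggers described above. All names (Scheduler, RescheduleReason, Task, halt, reschedule) are made up for this sketch and are not taken from the actual BOINC client; it only shows the idea that every state change maps onto one of a handful of (re)scheduling reasons.

#include <string>

// Hypothetical reasons that can trigger (re)scheduling, mirroring the
// events listed above.
enum class RescheduleReason {
    CheckpointOrTSI,   // a task checkpointed and its time slice (TSI) is up
    TaskComplete,      // a task finished
    DownloadComplete,  // new work arrived -> first-time scheduling
    SuspendResume,     // user or project suspended/resumed work
    ServerDropWU       // scheduler RPC asked us to abort a workunit
};

struct Task {
    std::string name;
    double slice_elapsed = 0;  // seconds run in the current time slice
};

class Scheduler {
public:
    // Called whenever a task reports a checkpoint.
    void on_checkpoint(Task& t, double tsi_seconds) {
        if (t.slice_elapsed < tsi_seconds) return;  // slice not up: do nothing
        halt(t);
        reschedule(RescheduleReason::CheckpointOrTSI, &t);
    }

    void on_task_complete(Task& t) { reschedule(RescheduleReason::TaskComplete, &t); }
    void on_download_complete()    { schedule_new_work(); }
    void on_suspend_resume()       { reschedule(RescheduleReason::SuspendResume, nullptr); }
    void on_server_drop(Task& t)   { halt(t); reschedule(RescheduleReason::ServerDropWU, &t); }

private:
    void halt(Task&) { /* checkpoint and preempt the task */ }
    void reschedule(RescheduleReason, Task*) { /* pick replacement work for the freed slot */ }
    void schedule_new_work() { /* insert new jobs into the per-resource queues */ }
};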
>> per resource (CPU / GPU Type1 / GPU Type2 / Coproc) two (2) linked
>> lists of jobs eligible to run on that resource
>> -> linked list1 (CPU) (containing all jobs in order of getting them)
>> -> linked list2 (CPU) (short, only 'working' state)
>> -> linked list1 (GPU)
>> -> linked list2 (GPU)
>> -> linked list (...)
>
> Those already exist (sort of). There is a vector of all jobs, and a vector
> of all jobs that are started and not yet complete. It is not broken out by
> resource type, but the resource type is included in the data. Current
> methodology is to skip over the tasks you don't want in the current test.
> Note: I would have done it differently than the current implementation.

I thought so; it seems the most practical way of doing things.

>> reschedule (job)
>>     if reason: task/project suspend
>>         mark job(s) as suspended in linkedlist1(resourcetype)
>>         addlast to linkedlist2(resourcetype)
>>
>>     if reason: task/project resume
>>         mark job(s) as runnable in linkedlist1(resourcetype)
>>         if job(s) contained in linkedlist2(resourcetype)
>>             order runnable jobs in linkedlist2(resourcetype) by order
>>             in linkedlist1(resourcetype) (or EDF)
>>
>>     if reason: drop WU
>>         do cleanup
>>         do reschedule reason 'task complete'
>>
>>     if reason: task complete
>>         do cleanup
>>         check (via wct-job-list and resource share / project priority)
>>             which project shall get the resource now
>>         traverse linkedlist2(resourcetype) until end or eligible job found
>>         if job is found, launch it now
>>         else
>>             traverse linkedlist1(resourcetype) until eligible job found or end
>>             if job is found, launch it now
>>             else
>>                 redo the above check and choose the second/third/...-highest scoring project
>
> I thought linked list 2 only included tasks that have been started.
> You need a list of tasks that would ideally be running now.

Nope, because that list is given implicitly by the FIFO ordering, combined with 'queue-jumping' for jobs close to their deadline AND the resource-share selector.

>>     if reason: checkpointing / TSI
>>         addfirst (job) to linkedlist2(resourcetype)
>>         do reschedule reason 'task complete'
>
> But the task isn't complete.

Yes, but the logic to apply would be the same. Code reuse?

> What about multiple reasons all at the same time? Why do we need to know
> the reason in the first place?

Hm, some ordering could be chosen; I'll think about it. And the reason does have its place: not all events have the same effect, do they?

>> schedule(job)
>>     if new job has a deadline earlier than
>>         now + (sum of estimated job runtimes of jobs in linkedlist1(resourcetype)
>>                divided by resources available (e.g. CPUs))
>>     then insert job in linkedlist1(resourcetype) so that
>>         the deadline will be met.
>>
>> So far... my idea in pseudocode...
>> The second (temporary) list could be omitted by having certain flags
>> for the jobs in the first list -
>> but it adds some clarity for the first draft.
>> The 'do cleanup' step would also need to add the runtime of the
>> rescheduled job to the list of wall-clock times;
>> other than that... I hope I could outline my idea in a clear fashion.
>>
>> Best -Jonathan
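For illustration only, a small C++ sketch of the quoted schedule(job) deadline check, under the assumption of a simple per-resource std::list acting as linkedlist1. Job, ResourceQueue and all member names are hypothetical and not BOINC client code.

#include <list>
#include <ctime>

struct Job {
    double deadline;            // absolute time (seconds since epoch)
    double estimated_runtime;   // estimated remaining runtime in seconds
};

struct ResourceQueue {
    std::list<Job> fifo;        // "linked list1": all jobs, in arrival order
    int n_instances = 1;        // e.g. number of CPU cores of this resource

    // Sum of the estimated runtimes of everything already queued,
    // spread across the instances of this resource.
    double backlog_seconds() const {
        double sum = 0;
        for (const Job& j : fifo) sum += j.estimated_runtime;
        return sum / n_instances;
    }

    // schedule(job): if the deadline is tighter than the queued backlog,
    // insert at a position where it can still be met; otherwise plain FIFO.
    void schedule(const Job& job) {
        double now = static_cast<double>(std::time(nullptr));
        if (job.deadline < now + backlog_seconds()) {
            // EDF-style insertion: place the job before the first queued
            // job that has a later deadline.
            auto it = fifo.begin();
            while (it != fifo.end() && it->deadline <= job.deadline) ++it;
            fifo.insert(it, job);
        } else {
            fifo.push_back(job);   // deadline is comfortable: join the back
        }
    }
};

As jm7's objection implies, a fuller version would also have to re-check the deadlines of the jobs displaced by such an insertion, since what misses its deadline depends on all of the other tasks on the host.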
