On Wed, Jun 23, 2010 at 10:39:33AM +0200, Guido Trotter wrote:
> On Wed, Jun 23, 2010 at 10:30 AM, Iustin Pop <[email protected]> wrote:
> > On Wed, Jun 23, 2010 at 10:27:45AM +0200, Guido Trotter wrote:
> >> We move from querying the in-memory version to loading all jobs from the
> >> disk. Since the jobs are written/deleted on disk in an atomic manner, we
> >> don't need to lock at all. Also, since we're just looking at the
> >> contents of a directory, we don't need to check that the job queue is
> >> "open".
> >>
> >> If some jobs are removed between when we listed them and us loading
> >> them, we need to be able to cope: if we were asked to load those jobs
> >> specifically, we must report the failure, but if we were just asked to
> >> "load all" we shall just not consider them as part of the "all" set,
> >> since they were deleted.
> >>
> >> Signed-off-by: Guido Trotter <[email protected]>
> >> ---
> >>  lib/jqueue.py |   35 +++++++++++------------------------
> >>  1 files changed, 11 insertions(+), 24 deletions(-)
> >>
> >> diff --git a/lib/jqueue.py b/lib/jqueue.py
> >> index d8249df..1165ad4 100644
> >> --- a/lib/jqueue.py
> >> +++ b/lib/jqueue.py
> >> @@ -1031,21 +1031,6 @@ class JobQueue(object):
> >>
> >>      return job
> >>
> >> -  def _GetJobsUnlocked(self, job_ids):
> >> -    """Return a list of jobs based on their IDs.
> >> -
> >> -    @type job_ids: list
> >> -    @param job_ids: either an empty list (meaning all jobs),
> >> -        or a list of job IDs
> >> -    @rtype: list
> >> -    @return: the list of job objects
> >> -
> >> -    """
> >> -    if not job_ids:
> >> -      job_ids = self._GetJobIDsUnlocked()
> >> -
> >> -    return [self._LoadJobUnlocked(job_id) for job_id in job_ids]
> >> -
> >>    def SafeLoadJobFromDisk(self, job_id):
> >>      """Load the given job file from disk.
> >>
> >> @@ -1426,14 +1411,9 @@ class JobQueue(object):
> >>
> >>      return (archived_count, len(all_job_ids) - last_touched)
> >>
> >> - �[email protected]
> >> - �...@_requireopenqueue
> >>    def QueryJobs(self, job_ids, fields):
> >>      """Returns a list of jobs in queue.
> >>
> >> -    This is a wrapper of L{_GetJobsUnlocked}, which actually does the
> >> -    processing for each job.
> >> -
> >>     @type job_ids: list
> >>     @param job_ids: sequence of job identifiers or None for all
> >>     @type fields: list
> >> @@ -1444,12 +1424,19 @@ class JobQueue(object):
> >>
> >>      """
> >>      jobs = []
> >> +    list_all = False
> >> +    if not job_ids:
> >> +      # Since files are added to/removed from the queue atomically, there's no
> >> +      # risk of getting the job ids in an inconsistent state.
> >> +      job_ids = self._GetJobIDsUnlocked()
> >> +      list_all = True
> >>
> >> -    for job in self._GetJobsUnlocked(job_ids):
> >> -      if job is None:
> >> -        jobs.append(None)
> >> -      else:
> >> +    for job_id in job_ids:
> >> +      job = self.SafeLoadJobFromDisk(job_id)
> >> +      if job is not None:
> >>          jobs.append(job.GetInfo(fields))
> >> +      elif not list_all:
> >> +        jobs.append(None)
> >>
> >>      return jobs
> >
> > I still don't see how you report the failure here… you're just not
> > adding 'None', i.e. making the list smaller, which will probably break
> > the clients.
> >
> 
> No, it's the other way around. Adding "None" breaks the client
> (counterintuitive, I know).
> Currently the client expects a list of (any number of) non-broken
> jobs if it asked for all jobs, while it expects a list of the same
> size as the request if it asked for specific jobs. If it gets a None
> and it didn't ask for specific job numbers it stack-traces, while of
> course it wants the None if it asked for a certain job and it's not
> found. So in the current version we're not reporting the error, but
> in the future we should fix the client and server to do better error
> reporting throughout.

Ok, I didn't think this through. If you say it works, LGTM :)

iustin
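The query contract discussed in this thread can be modelled with a small
standalone sketch. This is a hypothetical illustration, not the actual
Ganeti code: `all_jobs` is a plain dict standing in for the on-disk queue,
and `all_jobs.get()` plays the role of `SafeLoadJobFromDisk`, returning
`None` for a job deleted between listing and loading.

```python
def query_jobs(all_jobs, job_ids):
    """Model of the patched JobQueue.QueryJobs semantics.

    all_jobs: dict mapping job id -> job info still present "on disk".
    job_ids: list of specific ids requested, or empty/None for all jobs.
    """
    list_all = False
    if not job_ids:
        # "All jobs" request: snapshot the ids currently on disk.
        job_ids = sorted(all_jobs)
        list_all = True

    results = []
    for job_id in job_ids:
        job = all_jobs.get(job_id)  # None if the job vanished meanwhile
        if job is not None:
            results.append(job)
        elif not list_all:
            # A specifically requested job that is gone is reported as None.
            results.append(None)
        # For a "list all" request, deleted jobs are silently skipped.

    return results
```

With jobs 1 and 3 present, `query_jobs({1: "a", 3: "c"}, [1, 2, 3])` returns
`["a", None, "c"]` (the missing job is reported), while
`query_jobs({1: "a", 3: "c"}, [])` returns `["a", "c"]` (the deleted job is
simply not part of the "all" set), which is the behaviour Guido describes.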
