nickva commented on a change in pull request #409: RFC for CouchDB background workers
URL: https://github.com/apache/couchdb-documentation/pull/409#discussion_r281642643
##########
File path: rfcs/007-background-workers.md
##########
@@ -0,0 +1,252 @@
+---
+name: Formal RFC
+about: Submit a formal Request For Comments for consideration by the team.
+title: 'Background workers with FoundationDB backend'
+labels: rfc, discussion
+assignees: ''
+
+---
+
+[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
+
+# Introduction
+
+This document describes the data model and behavior of CouchDB background workers.
+
+## Abstract
+
+CouchDB background workers are used for things like index building and
+replication. We present a generalized model that allows creation, running, and
+monitoring of these jobs. "Jobs" are represented generically such that both
+replication and indexing could take advantage of the same framework. The basic
+idea is that of a global job queue for each job type. New jobs are inserted
+into the jobs table and enqueued for execution.
+
+There are a number of workers that attempt to dequeue pending jobs and run
+them. "Running" is specific to each job type and would be different for
+replication and indexing. Workers are processes which execute jobs. They MAY be
+individual Erlang processes, but could also be implemented in Python, Java, or
+any other environment with a FoundationDB client. The only coordination between
+workers happens via the database. Workers can start and stop at any time.
+Workers monitor each other for liveness, and if a worker abruptly
+terminates, all of the dead worker's jobs are re-enqueued into the global
+pending queue.
+
+## Requirements Language
+
+[NOTE]: # ( Do not alter the section below. Follow its instructions. )
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+document are to be interpreted as described in
+[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
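The abstract's liveness protocol (each worker refreshes a health timestamp, and any worker re-enqueues the jobs of peers that miss their deadline) can be sketched with a small, pure-Python simulation. This is not CouchDB code: the plain dicts stand in for the FoundationDB keyspace, and the helper names (`report_health`, `check_peers`) are hypothetical.

```python
import time

# Illustrative sketch, not the actual implementation: each worker
# periodically writes a timestamp under its health entry, and any peer
# scanning those entries re-enqueues the jobs of workers whose timestamp
# is older than their declared timeout.

health = {}   # worker_id -> (last_seen_timestamp, timeout_seconds)
active = {}   # worker_id -> list of job ids currently running
pending = []  # global pending queue of job ids

def report_health(worker_id, timeout=30):
    # Worker heartbeat: refresh its "alive" timestamp.
    health[worker_id] = (time.time(), timeout)

def check_peers(now=None):
    # Re-enqueue all jobs belonging to workers that missed their timeout.
    now = time.time() if now is None else now
    for worker_id, (last_seen, timeout) in list(health.items()):
        if now - last_seen > timeout:
            pending.extend(active.pop(worker_id, []))
            del health[worker_id]

report_health("w1", timeout=30)
active["w1"] = ["job-1", "job-2"]
check_peers(now=time.time() + 60)  # w1 missed its deadline
print(pending)                     # job-1 and job-2 are pending again
```

In the real data model the timestamp lives under the `"health"` key described below, alongside a `Versionstamp` and the worker's own timeout, so each worker chooses how long its peers wait before declaring it dead.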
+
+## Terminology
+
+---
+
+`Job`: A unit of work, identified by a `JobId` and also having a `JobType`.
+
+`Job table`: A subspace holding the list of jobs indexed by `JobId`.
+
+`Pending queue`: A queue of jobs which are ready to run. Workers may pick jobs
+from this queue and start running them.
+
+`Active jobs`: A subspace holding the list of jobs currently run by a
+particular worker.
+
+`Worker`: A job execution unit. Workers could be individual processes or groups
+of processes running on remote nodes.
+
+`Health key`: A key that a worker periodically updates with a timestamp to
+indicate that it is "alive" and ready to process jobs.
+
+`Versionstamp`: A 12-byte, unique, monotonically (but not sequentially)
+increasing value for each committed transaction.
+
+---
+
+# Detailed Description
+
+## Data Model
+
+The main job table:
+ - `("couch_workers", "jobs", JobType, JobId) = (JobState, WorkerId, Priority, CancelReq, JobInfo, JobOpts)`
+
+Pending queue:
+ - `("couch_workers", "pending", JobType, Priority, JobId) = ""`
+
+Active queue:
+ - `("couch_workers", "active", WorkerType, WorkerId, JobId) = (JobState, JobInfo, JobOpts)`
+
+Worker registration and health:
+ - `("couch_workers", WorkerType, "workers_vs") = Versionstamp`
+ - `("couch_workers", WorkerType, "workers", WorkerId) = WorkerOpts`
+ - `("couch_workers", WorkerType, "health", WorkerId) = (Versionstamp, Timestamp, WorkerTimeout)`
+
+### Data model fields
+
+- `JobType`: The job type. It MAY be `replication` or `indexing`; however,
+  specific types are out of scope for this document.
+
+- `JobId`: MUST uniquely identify a job in a cluster. It MAY be a `UUID` or
+  derived from some other job parameters.
+
+- `JobState`: MUST be one of `pending`, `running`, or `completed`.
+
+- `Priority`: MAY be any key which allows sorting jobs, such as a timestamp
+  or a tag.
+
+- `WorkerId`: MUST uniquely identify a worker in a cluster.
It MAY be
+  generated each time the worker is restarted, or it could be persisted across
+  worker restarts, as long as it is unique across the whole cluster.
+
+- `CancelReq`: A boolean flag indicating a request to the worker to cancel the
+  job.
+
+- `JobOpts`: Type-specific job options, represented as a JSON object. This MAY
+  contain fields like `"source"`, `"target"`, `"dbname"`, etc.
+
+- `JobInfo`: A per-job info object, represented as JSON. This object will
+  contain details about a job's current state. It MAY have fields such as
+  `"update_seq"`, `"error_count"`, etc.
+
+
+## Job Lifecycle
+
+New jobs are posted to the main jobs table along with an entry in the pending
+queue. Priority key assignment is type specific. `WorkerId` is set to `nil`
+initially.
+
+Workers will monitor the pending queue with their matching `JobType`. That is,
+`"replication"` workers will monitor the `"replication"` pending queue,
+indexing workers the `"indexing"` queue, etc. If there are jobs ready to run,
+they will attempt to grab one or more jobs from the pending queue and assign
+them to themselves for execution. They do that by setting the `WorkerId` in the
+jobs table entry to their worker ID, removing the job from the pending queue,
+and creating a new entry in their own `"active"` jobs area.
+
+When a job finishes running, either because it was a successful completion or a
+terminal failure, the worker MUST remove the job from its active queue and
+clear its `WorkerId` field. Then, based on type-specific behavior, it MUST do
+one of the following:
+
+ - Update the `JobState` to `"completed"` and leave the job in the jobs table.
+   It would be up to the job creation logic to inspect and remove it.
+ - Delete the job from the system altogether. This may be useful when running
+   `_replicate` jobs, for example.
+
+
+If a user decides to cancel a job that is running, they MUST toggle `CancelReq`
+to `true`. Then they MUST wait for the job to stop running.
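The enqueue, dequeue, and completion steps described in the lifecycle above can be sketched as a pure-Python simulation. A real implementation would perform each step inside a single FoundationDB transaction; here a plain dict stands in for the database, key tuples follow the subspace layout from the data model, and the function names (`enqueue`, `dequeue`, `finish`) are illustrative, not part of the RFC.

```python
# Illustrative sketch of the job lifecycle over an ordered key-value
# store. Keys are tuples mirroring the RFC's subspaces; values are
# simplified to Python dicts.

db = {}  # key tuple -> value

def enqueue(job_type, job_id, priority, job_opts):
    # New job: an entry in the jobs table (WorkerId = None, state
    # "pending") plus an entry in the pending queue, keyed by priority.
    db[("couch_workers", "jobs", job_type, job_id)] = {
        "state": "pending", "worker": None, "priority": priority,
        "cancel": False, "info": {}, "opts": job_opts}
    db[("couch_workers", "pending", job_type, priority, job_id)] = ""

def dequeue(job_type, worker_id):
    # Grab the pending job with the lowest priority key: assign it to
    # this worker, remove it from the pending queue, and record it in
    # the worker's own "active" subspace.
    keys = sorted(k for k in db
                  if k[:3] == ("couch_workers", "pending", job_type))
    if not keys:
        return None
    _, _, _, priority, job_id = keys[0]
    del db[("couch_workers", "pending", job_type, priority, job_id)]
    job = db[("couch_workers", "jobs", job_type, job_id)]
    job["state"], job["worker"] = "running", worker_id
    db[("couch_workers", "active", job_type, worker_id, job_id)] = job
    return job_id

def finish(job_type, worker_id, job_id, keep=True):
    # On completion or terminal failure: remove the active entry, clear
    # WorkerId, then either mark the job "completed" or delete it.
    del db[("couch_workers", "active", job_type, worker_id, job_id)]
    job = db[("couch_workers", "jobs", job_type, job_id)]
    job["worker"] = None
    if keep:
        job["state"] = "completed"
    else:
        del db[("couch_workers", "jobs", job_type, job_id)]

enqueue("replication", "job-a", 2, {"source": "s1", "target": "t1"})
enqueue("replication", "job-b", 1, {"source": "s2", "target": "t2"})
assert dequeue("replication", "w1") == "job-b"  # lowest priority key wins
finish("replication", "w1", "job-b")
```

The `keep=False` branch corresponds to the second completion option above (deleting the job outright, as might be done for `_replicate` jobs).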
If the job is not
+running, they may directly remove the job from the pending queue and the main
+jobs table.

Review comment:
   Exactly. Yeah, in the case of replications something like that might be needed, and we might preserve some of the previous scheduler features, like you pointed out:

   ```
   If there is a pending backlog:
       Stop `$max_churn` number of continuous jobs
       Put them back on the queue
       Pick up `$max_churn` number of jobs from the queue and run those
   ```

   Good idea, I'll add this to the RFC.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

With regards,
Apache Git Services
