http://lwn.net/Articles/23634/

Driver porting: the workqueue interface.

This article is part of the LWN Porting Drivers to 2.6 series.
The longstanding task queue interface was removed in 2.5.41; in its place is a new "workqueue" mechanism. Workqueues are very similar to task queues, but there are some important differences. Among other things, each workqueue has one or more dedicated worker threads (one per CPU, by default) associated with it. So all tasks running out of workqueues have a process context, and can thus sleep. Note that access to user space is not possible from code running out of a workqueue; there simply is no user space to access. Drivers can create their own work queues - with their own worker threads - but there is a default queue (for each processor) provided by the kernel that will work in most situations.

Workqueues are created with one of:

    struct workqueue_struct *create_workqueue(const char *name);
    struct workqueue_struct *create_singlethread_workqueue(const char *name);
A workqueue created with create_workqueue() will have one worker thread for each CPU on the system; create_singlethread_workqueue(), instead, creates a workqueue with a single worker process. The name of the queue is limited to ten characters; it is only used for generating the "command" for the kernel thread(s) (which can be seen in ps or top).
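
As a minimal sketch using the pre-2.6.20 prototypes described in this article (the queue pointer, the "mydrv" name, and the error handling are illustrative assumptions, not taken from the article), creating a queue at module initialization time might look like:

    #include <linux/module.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *mydrv_wq;    /* hypothetical driver-private queue */

    static int __init mydrv_init(void)
    {
        /* One worker thread per CPU; "mydrv" is what shows up in ps/top. */
        mydrv_wq = create_workqueue("mydrv");
        if (!mydrv_wq)
            return -ENOMEM;
        return 0;
    }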

Tasks to be run out of a workqueue need to be packaged in a struct work_struct structure. This structure may be declared and initialized at compile time as follows:

    DECLARE_WORK(name, void (*function)(void *), void *data);
Here, name is the name of the resulting work_struct structure, function is the function to call to execute the work, and data is a pointer to pass to that function.
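
For example (the handler and data names here are hypothetical), a compile-time declaration could read:

    static int mydrv_counter;                    /* hypothetical data for the handler */

    static void mydrv_event(void *data)          /* hypothetical work function */
    {
        int *count = data;
        ++*count;                                /* the real work would go here */
    }

    DECLARE_WORK(mydrv_work, mydrv_event, &mydrv_counter);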

To set up a work_struct structure at run time, instead, use the following two macros:

    INIT_WORK(struct work_struct *work, 
              void (*function)(void *), void *data);
    PREPARE_WORK(struct work_struct *work, 
                 void (*function)(void *), void *data);
The difference between the two is that INIT_WORK initializes the linked list pointers within the work_struct structure, while PREPARE_WORK changes only the function and data pointers. INIT_WORK must be used at least once before queueing the work_struct structure, but should not be used if the work_struct might already be in a workqueue.
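
A run-time sketch, assuming a hypothetical per-device structure that embeds the work_struct:

    struct mydrv_device {
        struct work_struct work;
        /* ... other per-device state ... */
    };

    static void mydrv_do_work(void *data)
    {
        struct mydrv_device *dev = data;

        /* Runs in process context, so sleeping is allowed here. */
        printk(KERN_DEBUG "mydrv: deferred work for device %p\n", dev);
    }

    static void mydrv_setup(struct mydrv_device *dev)
    {
        /* Call this once, before the work_struct is ever queued. */
        INIT_WORK(&dev->work, mydrv_do_work, dev);
    }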

Actually queueing a job to be executed is simple:

    int queue_work(struct workqueue_struct *queue, 
                   struct work_struct *work);
    int queue_delayed_work(struct workqueue_struct *queue, 
	                   struct work_struct *work,
                           unsigned long delay);
The second form of the call ensures that a minimum delay (in jiffies) passes before the work is actually executed. The return value from both functions is nonzero if the work_struct was actually added to queue (otherwise, it may have already been there and will not be added a second time).
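
Continuing the sketch (mydrv_wq and struct mydrv_device are the hypothetical names from the examples above):

    static void mydrv_kick(struct mydrv_device *dev)
    {
        /* Execute dev->work as soon as its worker thread is scheduled. */
        if (!queue_work(mydrv_wq, &dev->work))
            printk(KERN_DEBUG "mydrv: work was already queued\n");
    }

    static void mydrv_kick_later(struct mydrv_device *dev)
    {
        /* Execute dev->work no sooner than one second (HZ jiffies) from now. */
        queue_delayed_work(mydrv_wq, &dev->work, HZ);
    }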

Entries in workqueues are executed at some undefined time in the future, when the associated worker thread is scheduled to run (and after the delay period, if any, has passed). If it is necessary to cancel a delayed task, you can do so with:

    int cancel_delayed_work(struct work_struct *work);
Note that this workqueue entry could actually be executing when cancel_delayed_work() returns; all this function will do is keep it from starting after the call.

To ensure that none of your workqueue entries are running, call:

    void flush_workqueue(struct workqueue_struct *queue);
This would be a good thing to do, for example, in a device driver shutdown routine. Note that if the queue contains work with long delays this call could take a long time to complete. This function will not (as of 2.5.68) wait for any work entries submitted after the call was first made; you should ensure that, for example, any outstanding work queue entries will not resubmit themselves. You should also cancel any delayed entries (with cancel_delayed_work()) first if need be.

Work queues can be destroyed with:

    void destroy_workqueue(struct workqueue_struct *queue);
This operation will flush the queue, then delete it.
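
Putting the shutdown pieces together, a driver's exit path might look like the sketch below; mydrv_dev is assumed to be a static instance of the hypothetical structure from the earlier examples.

    static struct mydrv_device mydrv_dev;        /* hypothetical device instance */

    static void __exit mydrv_exit(void)
    {
        /* Keep any delayed submission from starting later on. */
        cancel_delayed_work(&mydrv_dev.work);
        /* Wait for anything already queued (or running) to finish. */
        flush_workqueue(mydrv_wq);
        /* destroy_workqueue() flushes once more, then deletes the queue. */
        destroy_workqueue(mydrv_wq);
    }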

Finally, for tasks that do not justify their own workqueue, a "default" work queue (called "events") is defined. work_struct structures can be added to this queue with:

    int schedule_work(struct work_struct *work);
    int schedule_delayed_work(struct work_struct *work, unsigned long delay);
Most users of workqueues can probably use the predefined queue, but one should bear in mind that it is a shared resource. Long delays in the worker function will slow down other users of the queue, and should be avoided. There is a flush_scheduled_work() function which will wait for everything on this queue to be executed. If your module uses the default queue, it should almost certainly call flush_scheduled_work() before allowing itself to be unloaded.
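
A sketch of a module that relies only on the shared "events" queue, again with made-up names and the pre-2.6.20 prototypes:

    static void mydefault_handler(void *data)
    {
        /* Keep this short; the "events" queue is shared with the whole kernel. */
    }

    DECLARE_WORK(mydefault_work, mydefault_handler, NULL);

    /* Called from an interrupt handler or a timer, for instance: */
    static void mydefault_poke(void)
    {
        schedule_work(&mydefault_work);
    }

    static void __exit mydefault_exit(void)
    {
        /* Be sure the handler is not running when the module text goes away. */
        flush_scheduled_work();
    }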

One final note: schedule_work(), schedule_delayed_work() and flush_scheduled_work() are exported to any modules which wish to use them. The other functions (for working with separate workqueues) are exported to GPL-licensed modules only.



Driver porting: the workqueue interface.

Posted Jun 17, 2003 21:45 UTC (Tue) by btl (guest, #12097) [Link]

I'm trying to play with workqueues... I'm having trouble compiling
a little kernel module.

What include files do i need beside linux/workqueue.h?

I can't find where struct workqueue_struct is defined ;(

Driver porting: the workqueue interface.

Posted Dec 9, 2003 19:17 UTC (Tue) by bighead (guest, #17582) [Link]

It's work_struct, not workqueue_struct (if I get what you are asking).

Second, you might need the usual header files: linux/init.h, linux/module.h, etc. Take a look at http://www.xml.com/ldd/chapter/book/. The book is mainly 2.4-based. You can then see http://lwn.net/Articles/driver-porting/ for the differences between 2.4 and 2.6.

Cheers!
Archit

Correction

Posted Apr 22, 2004 16:29 UTC (Thu) by proski (subscriber, #104) [Link]

Among other things, each workqueue has one or more dedicated worker threads (one per CPU) associated with it. So all tasks running out of workqueues have a process context, and can thus sleep.
This incorrectly implies that the task queue in 2.4 kernels does not run in process context and cannot sleep. That is not true: the task queue is run in the context of the "keventd" process.

I believe the only significant difference is that the workqueue is processed in per-CPU threads, whereas the task queue is processed by a single kernel thread.

Correction

Posted Apr 22, 2004 16:41 UTC (Thu) by corbet (editor, #1) [Link]

Actually, 2.4 had several task queues, most of which did not run in process context. The scheduler queue, in later 2.4 kernels, runs out of keventd, but it is a single, shared thread. Workqueues have multiple, per-CPU threads which are dedicated to the queue. (When Rusty's patch goes in, single-thread workqueues will also be possible, though each queue will still have its own thread).

Workqueues also have nice features like a "flush" operation, which can guarantee that your tasks are not running and will not run, and a "delayed work" capability.

Driver porting: the workqueue interface.

Posted Aug 16, 2004 1:16 UTC (Mon) by baisa (guest, #24024) [Link]

How do I schedule work tasks (if indeed this is the object I should be using) to execute every timer tick? In the book, chapter 6 ("Flow of Time") lists at least three kernel task queues, one of which was a timer queue that dispatched its tasks each tick. But the entire "linux/tqueue.h" functionality was deleted in 2.6, and now I can't figure out how to dispatch some code every timer tick.

Thanks! Brad

Driver porting: the workqueue interface.

Posted Sep 15, 2004 7:05 UTC (Wed) by Albert (guest, #24733) [Link]

I ran into the same problem. We ran a stepper motor from the timer interrupt; in 2.6 I used the RTC instead. Here is some code for a minimal module that uses the RTC to generate IRQ 8 at 1024 Hz and also registers an interrupt handler for IRQ 8 (via the RTC module).

/*
 * stepper.c - Stepper motor driver using RTC.
 */
#include <linux/module.h> /* Needed by all modules */
#include <linux/kernel.h> /* Needed for KERN_ALERT */
#include <linux/init.h>   /* Needed for the macros */
#include <linux/ioctl.h>

#if defined(CONFIG_RTC) || defined(CONFIG_RTC_MODULE)

#include <linux/rtc.h>

#define DRIVER_AUTHOR "John Doe <[EMAIL PROTECTED]>"
#define DRIVER_DESC   "Stepper motor driver using RTC"

static rtc_task_t rtc_task;
static atomic_t rtc_inc = ATOMIC_INIT(0);
static int stepper_freq = 1024; /* Frequency Hz */

static void stepper_interrupt(void *private_data)
{
    int ticks;

    atomic_inc(&rtc_inc);
    ticks = atomic_read(&rtc_inc);

    /* Add some code here to drive the stepper motor */
    if (ticks % stepper_freq == 0) {
        printk(KERN_WARNING "stepper_interrupt: %d\n", ticks);
    }
}

static int __init stepper_init(void)
{
    int err;

    printk(KERN_ALERT "stepper_init: registering task\n");

    rtc_task.func = stepper_interrupt;
    err = rtc_register(&rtc_task);
    if (err < 0)
        return err;
    rtc_control(&rtc_task, RTC_IRQP_SET, stepper_freq);
    rtc_control(&rtc_task, RTC_PIE_ON, 0);
    atomic_set(&rtc_inc, 0);
    return 0;
}

static void __exit stepper_exit(void)
{
    rtc_control(&rtc_task, RTC_PIE_OFF, 0);
    rtc_unregister(&rtc_task);
    printk(KERN_ALERT "stepper_exit: rtc_task unregistered\n");
}

module_init(stepper_init);
module_exit(stepper_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);

#endif /* CONFIG_RTC || CONFIG_RTC_MODULE */

Driver porting: the workqueue interface.

Posted Jan 20, 2005 6:17 UTC (Thu) by amit2030 (guest, #27378) [Link]

Hi,

I wanted to know whether work on a workqueue gets scheduled before a kernel thread created by kthread_run(). I have a function that runs in a thread created by kthread_run(); if I use the workqueue interface instead, will it be scheduled before the other threads created by kthread_run()?

Regards,

Driver porting: the workqueue interface.

Posted May 16, 2005 0:44 UTC (Mon) by Freiberg (guest, #29960) [Link]

Could you please tell me more about workqueue synchronization?
Are queued works cumulative? If I schedule the same work several times inside
one ISR handler, or from different ISR handlers, will all of them run, and in which sequence?

Thanks,
Vlad

Driver porting: the workqueue interface.

Posted Sep 3, 2008 21:43 UTC (Wed) by peetee3000 (guest, #53726) [Link]

Note that the workqueue interface has changed slightly in v2.6.20:

http://www.linuxhq.com/kernel/v2.6/20/include/linux/workq...

I.e., INIT_WORK now takes only two parameters.

