http://www.cc.gatech.edu/classes/AY2002/cs3210_spring/p2.html

Project: Modifications to the current scheduler (kernel 2.4.16)

 

CS 3210 Spring 2002

 

Out: Thursday April 11, 2002

Due: Tuesday April 23, 2002

 

Introduction

 

For this project we are going to play around with the scheduler. First, you will modify the current scheduler to act as a fair-share scheduler. The general idea is that each user of the system can consume only an equal share of the system's resources relative to the other users logged in. This finds obvious use in a multi-user system. We will try to give all users a fair share of the CPU time and ignore other resources such as memory usage and disk space.

 

The second part is to modify the implementation of the standard 2.4 scheduler to maintain the run queue as a priority queue on the basis of the goodness values of the runnable processes. This is similar to the 2.5 scheduler optimizations mentioned in class.

 

You are required to turn in a short write-up for both parts detailing your modifications.

 

Designing a Fair Share Scheduler

 

There are a variety of ways to design a fair share scheduler. We want to leave the implementation details up to your group. We think your design should strive for simplicity, like most of the Linux kernel, but be as elegant as you can devise. We will outline the design criteria and then give you some pointers in the right direction.

 

Problem Statement

 

There are two parts to the fair share scheduler. The first is to implement your fair share scheduler and the second is to create a userland program to show that it works. We assume you are familiar with Bovet & Cesati Chapter 10. You should be especially familiar with the schedule() and goodness() functions in the file sched.c in the kernel directory.

 

Part I: The Scheduler

 

The standard Linux (2.4) scheduler is already "fair share", but from a per-process perspective: all SCHED_OTHER processes get a fair share of the CPU with respect to each other, weighted by priority, and real-time processes are always given preference over SCHED_OTHER processes. Your fair-share scheduler should instead share CPU time among sets of processes, with each set representing the processes belonging to a particular user (uid).

For example, suppose there are 3 users named A, B, and C. A is running 1 process, B has 3 processes, and C has 6. The default scheduler would see 10 processes and allocate 10% of the CPU to each. A fair-share scheduler would see 3 groups of processes and allocate 33% of the CPU to each group. Thus, A's process would get 33% of the CPU, B's 3 processes would each get 11%, and C's 6 processes would each get about 5.5%. Furthermore, if C started 10 more processes, each of his processes would get about 2% of the CPU, while A's and B's processes would continue to get the same amount as before.
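To make the arithmetic in the example concrete, here is a tiny sketch (the function name is ours, not anything from the kernel) of the per-process share each scheduler variant implies:

```c
/* Per-process CPU share, in tenths of a percent, under fair-share
 * scheduling: the CPU is first split evenly among n_users users, and
 * each user's slice is then split among that user's n_procs runnable
 * processes.  Illustrative helper only -- not a kernel function. */
static int fair_share_tenths(int n_users, int n_procs)
{
    return 1000 / n_users / n_procs;
}
```

With 3 users, A's single process gets 333 tenths (33.3%), each of B's 3 processes gets 111 (11.1%), and each of C's 6 gets 55 (5.5%); after C adds 10 more, each of his 16 gets 20 (2%).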

 

Implementation Details

 

Your fair-share scheduler should come into play only when there are no runnable real-time processes or runnable processes owned by root; those processes should be scheduled as in the current scheduler. It is important that you choose a quantum that is not too small. For example, a quantum of two ticks would severely degrade system throughput (read Bovet & Cesati Chapter 10 for details on quantum size selection).

Your scheduler should provide all processes owned by a particular user an opportunity to run eventually. In other words, your scheduler should be fair between users as well as among processes owned by a single user. Do not worry about implementing priority among processes owned by a user (you may do this for extra credit).

You will probably need to add a field or two to the task_struct to get this working.
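As a starting point, the extra bookkeeping might look something like the following. The field names here are purely illustrative assumptions — nothing like this exists in the real include/linux/sched.h, and your design may need different state entirely:

```c
/* Hypothetical fields one might fold into struct task_struct
 * (include/linux/sched.h).  Names are illustrative only. */
struct fair_share_fields {
    unsigned int uid_runnable;   /* runnable tasks owned by this task's uid */
    int          cached_goodness; /* last computed goodness (handy in Part II) */
};
```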

 

Show Us That It Works: utop

 

Write a userland program to show that each user is consuming just his fair share of CPU time. utop, or whatever you want to call it, will be a userland program showing that each logged-in user is consuming only their fair share of resources as outlined in Part I. You could look in /proc, use system calls, or create a special system call, since we're pros at that now. Looking at the man page for 'ps' might help!

When you are testing, make sure you test a variety of process groups, both CPU-bound and I/O-bound.

One way to create some processes for a single user is to start them in the background with & after the command name, e.g. bash$ eat_cpu &.

Another way is to create a little program that calls fork(2).
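One building block for a /proc-based utop is parsing per-process CPU time. The sketch below (function name is ours) extracts the utime and stime fields, the 14th and 15th, from a /proc/&lt;pid&gt;/stat line; a full utop would stat() each /proc/&lt;pid&gt; directory to find the owning uid and accumulate these jiffies per user:

```c
#include <stdio.h>
#include <string.h>

/* Return utime + stime (in jiffies) from one /proc/<pid>/stat line,
 * or -1 on a malformed line.  Illustrative helper for a utop-style
 * tool, not part of any standard library. */
static long stat_cpu_jiffies(const char *stat_line)
{
    /* Field 2 (comm) may itself contain spaces and parentheses, so
     * scan from the LAST ')' rather than splitting on whitespace. */
    const char *p = strrchr(stat_line, ')');
    long utime = 0, stime = 0;

    if (!p)
        return -1;
    /* After ')': state, ppid, pgrp, session, tty_nr, tpgid, flags,
     * minflt, cminflt, majflt, cmajflt, then utime and stime. */
    if (sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %ld %ld",
               &utime, &stime) != 2)
        return -1;
    return utime + stime;
}
```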

 

Links to more documentation on this subject

 

a) Rik van Riel scheduler patches.

            Very informative, very pertinent!

 

b) http://resourcemanagement.unixsolutions.hp.com/WaRM/schedpolicy.html

            A good overview of why the above changes are required and how they went about making them.

 

c) Kernel Documentation : sched.c is really very useful. :-)

 

d) www.google.com - For those who feel they need to know more!     

 

Part II: The priority queue scheduler

 

As in the earlier part, the implementation details are left to your whims. We have some pointers below that you may find useful.

 

Problem Statement

 

Modify the current Linux scheduler (2.4.16) to maintain its run queue as a priority queue, with the processes sorted according to their goodness values. When the last process on the run queue has finished running, the epoch ends and the time quanta are replenished.

 

Pointers

 

The present implementation doesn't remember goodness values, so you may have to add a variable to the task_struct. Also, there are a number of ways in which a process may enter or leave the run queue; you will have to locate the appropriate entry and exit points and handle those events appropriately.

For further reading, you can check out the O(1) scheduler patch for the 2.5 kernel released by Ingo Molnar. Careful perusal of the sched.c file will also pay dividends!
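The core operation at every entry point is a sorted insertion. Here is a simplified userland model (names are ours; the real kernel keeps the run queue on list_head links in sched.c): tasks stay sorted by descending goodness, so schedule() can take the head instead of scanning the whole queue.

```c
/* Simplified model of a goodness-ordered run queue.  Not kernel code:
 * the struct and function names are illustrative only. */
struct rq_task {
    int goodness;
    struct rq_task *next;
};

/* Insert t so the list stays sorted by descending goodness.  Ties go
 * behind existing tasks, giving equal-goodness tasks FIFO order. */
static void rq_insert(struct rq_task **head, struct rq_task *t)
{
    struct rq_task **p = head;

    while (*p && (*p)->goodness >= t->goodness)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}
```

Every path that wakes a task or puts one back on the queue would funnel through an insertion like this, and removal paths would unlink the node wherever it sits.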

 

Implementation Details

 

All processes, including processes owned by root, will now be affected by your scheduling policy. You can treat real-time processes specially, or simply queue them at the front of the priority queue (because of their goodness boost of 1000) and run them until the end of their time quantum.
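The boost of 1000 comes from goodness() itself. A simplified model of the 2.4 computation (SMP and mm-affinity bonuses omitted) shows why front-queueing real-time tasks is safe: no SCHED_OTHER weight can reach 1000.

```c
/* Simplified model of 2.4's goodness() in kernel/sched.c (the real
 * function also adds SMP and mm-affinity bonuses, omitted here). */
#define SCHED_OTHER 0
#define SCHED_FIFO  1

static int goodness_model(int policy, int counter, int nice, int rt_priority)
{
    if (policy != SCHED_OTHER)
        return 1000 + rt_priority;  /* always sorts ahead of SCHED_OTHER */
    if (counter == 0)
        return 0;                   /* quantum used up: wait for next epoch */
    return counter + 20 - nice;     /* remaining quantum plus static priority */
}
```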

The new scheduler in the 2.5 Linux kernel uses a similar scheme, so you can borrow some pointers (literally!) from it.

 

Demonstration

 

We will be happy to see that your kernel doesn't crash after you have made the modifications to achieve the above end! We would be happier if you demonstrate that your kernel has organized its run queue as a priority queue. You can print out the run queue at the start of each epoch to show this.

 

Credit Distribution

 

Fair Share scheduler implementation

            - Make the scheduler fair from the user perspective : 20 pts

            - Show that the users are receiving a fair share of CPU time : 15 pts

The first part is essentially the grade for the changes you made to the code and for your kernel not crashing when you boot your modified scheduler. The second part is for your demo program.

 

Run queue as a Priority Queue scheduler

            - Implementation : 20 pts

            - Showing the TA that it actually works : 15 pts

Once again, the implementation grade is for your source code and the 'non-panicking' operation of the kernel.

 

Description of implementation of the modified schedulers : 10 pts

Questions during Demos : 20 pts

 

Extra credits

 

The TAs will be armed with extra credit, which will be awarded for slick pieces of code or jazzy demonstrations, or to teams that have put in a lot of visible effort and emerged successful!

 

Demos

 

You can sign up on the swiki for a time by which you think you will have completed the project. The TAs will put up demo times on the swiki by this weekend.

So get your jobs in order! Happy scheduling :-)

 

 

 

