* Prasad Joshi <[email protected]> wrote:
> I am not sure how to induce the delay you mentioned. [...]
In the simplest version just add:
	if (debug__io_delay)
		udelay(1000);
to the code that does a read() from the disk image. This will
introduce a 1 msec delay - that should be plenty to simulate
most effects of IO delays.
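For concreteness, something like this in the image read path (untested
sketch - disk_image__read() and the --debug-io-delay switch are made-up
names, and since the tool runs in userspace, usleep() stands in for
udelay()):

	#include <unistd.h>

	static int debug__io_delay;	/* set to 1 by a --debug-io-delay option */

	static ssize_t disk_image__read(int fd, void *dst, size_t len, off_t offset)
	{
		ssize_t ret;

		ret = pread(fd, dst, len, offset);

		if (debug__io_delay)
			usleep(1000);	/* ~1 msec, simulating disk latency */

		return ret;
	}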
Later on you could complicate it with some disk geometry
approximation:
delay = read_size_in_kb; /* 1 usec per KB read */
delay += seek_distance_in_kb; /* 1 usec per KB disk-head movement */
udelay(delay);
Where 'read_size_in_kb' is the size of the current read() in KB, while
'seek_distance_in_kb' is the distance, in KB, from the last byte read
by the previous read() call to the first byte of the current read().
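Spelled out with the bookkeeping for the previous position (again just
an untested sketch, prev_end and debug_io_delay() are made-up names):

	#include <unistd.h>
	#include <sys/types.h>

	static off_t prev_end;	/* offset just past the previous read() */

	static void debug_io_delay(size_t len, off_t offset)
	{
		unsigned long delay;
		off_t seek_distance;

		seek_distance = offset > prev_end ? offset - prev_end
						  : prev_end - offset;

		delay  = len / 1024;		/* 1 usec per KB read */
		delay += seek_distance / 1024;	/* 1 usec per KB of head movement */

		usleep(delay);

		prev_end = offset + len;
	}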
( Also, instead of copying the disk image to /dev/shm you could add a
debug switch to mmap() and mlock() it directly. Assuming there's enough
RAM in the box. )
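Roughly like this (untested sketch, error handling trimmed and the
helper name is made up):

	#include <stddef.h>
	#include <sys/mman.h>
	#include <sys/stat.h>

	static void *map_and_lock_image(int fd)
	{
		struct stat st;
		void *p;

		if (fstat(fd, &st) < 0)
			return NULL;

		p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return NULL;

		/* pin the whole image in RAM so reads never go to disk */
		if (mlock(p, st.st_size) < 0) {
			munmap(p, st.st_size);
			return NULL;
		}

		return p;
	}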
But i'd strongly suggest keeping the initial version simple.
Thanks,
Ingo