> With which kernel and arch have you run your tests?

The tests were conducted on Debian 6 and Debian 7 machines. The kernel on Debian 6 was from backports, 3.2.46-1~bpo60+1; on Debian 7 it was 3.2.60-1+deb7u3. The architecture was amd64.
> And how did a small read-ahead request ended up taking five *minutes* on
> your system? Is it that badly IO-starved?

Although the devices continuously do I/O, the systems are not actually I/O-starved; they remain fully responsive. An example iostat from one of the machines:

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              42.70         0.00      2350.40          0      23504

There might be something about the drive make/model, but it is surely not a drive failure (the problem affected almost the whole datacenter). The drives are mostly Intel SSDs, though I remember at least one machine that encountered this on a regular HDD.

> Because it was never supposed to take 5 minutes in the first place...

Yes.

> I don't mind the patch, but it would be nice to understand the problem
> better, first. Maybe we can disable the fadvise selectively...

Well, as per the linked linux.kernel mailing thread, posix_fadvise calls "are not guaranteed to be non-blocking". If the idea of using the call is to let the kernel know we will use the data so it can prefetch it, then waiting for the kernel to return with *something* makes no sense at all. At least on the machines in question, the kernel turned out to return the data faster than it returned from the fadvise call. Why? Is it because prefetching has a lower I/O priority in the queue than a regular read? I can't say.

By the way, I think the call is the same in 1.16.0, so it would affect all insserv versions out there; however, I have no way to verify that.

Best regards,
Bolesław Tokarski
_______________________________________________
Pkg-sysvinit-devel mailing list
[email protected]
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-sysvinit-devel

