On Mar 8, 2011, at 5:08 PM, William H. Magill wrote:

> On Mar 8, 2011, at 8:25 AM, Milo Velimirović wrote:
>> On Mar 7, 2011, at 11:03 PM, Chris Murphy wrote:
>>> On Mar 7, 2011, at 8:25 PM, Milo Velimirović wrote:
>>>> On Mar 7, 2011, at 7:06 PM, Chris Murphy wrote:
>>>>> On Mar 7, 2011, at 6:02 PM, Markus Hitter wrote:
>>>>>> On Mar 7, 2011, at 8:22 PM, Chris Murphy wrote:
>>>>>>
>>>>>>> I don't know the difference with XNU between block level and raw device
>>>>>>
>>>>>> For reading the whole thing, the result should be the same for both. A
>>>>>> "cmp", "diff" or two "md5" should tell you whether I'm speaking the
>>>>>> truth ;-)
>>>>>
>>>>> I agree. It should be the same for both with respect to performance as
>>>>> well, but it isn't: 6x slower for block level compared to raw.
>>>>
>>>> Yes, the resulting files should be the same; no, the copy should not be
>>>> the same speed! It's hard for me to believe that this confusion about
>>>> block vs. character/raw devices still exists when UNIX is more than 40
>>>> years old.
>>>
>>> OK well, then it should be simple for you to explain why /dev/disk0
>>> results in a 17 MB/s read, /dev/rdisk0 results in a 107 MB/s read, and on
>>> Linux /dev/sda results in a 107 MB/s read -- which is the block-level
>>> device, and no raw device is created anymore.
>>
>> I can't speak for Linux, though it would appear, based on your performance
>> numbers, that the block and raw devices were unified using the raw model.
>> Historically, in the UNIX world, block devices had I/O done (surprise!) a
>> block at a time into a set of kernel-maintained buffers, and data was
>> copied to/from the disk buffer cache from/to user space. Raw access
>> bypassed this disk caching/buffer mechanism.
>
> In the Unix (tm) world, for a very long time, block I/O was limited to 256
> bytes at a time -- as some very deep kernel I/O code only had 256-byte
> buffers.
I don't believe it was ever 256-byte buffers. pdp11 UNIX and its descendants
used 512-byte disk blocks and buffers; consult Lions' commentary or the source
code itself. 256 words == 512 bytes on the pdp11.

> There were quite a few hands wrung in the beginning of OSF/1. People could
> not figure out why I/O was so slow on the new high-capacity (at the time)
> disk drives.
>
> Why? Because it took four 256-byte reads to read a single 1024-byte track.
> It took a hardware engineer hand-wiring scope probes into a box to discover
> that the "instrumentation" built into the kernel was simply measuring all
> the wrong things, and at much too high a level... and the I/O code was
> discovered to have been one of the few pieces of code that nobody looked at
> ... simply because it was so well written that it "worked" all the time with
> anything. :)

This may or may not be true, with allowances for block size and tracks. It
seems like an apocryphal story, because it occurs too late in the timeline of
the history of UNIX -- or it may just be about some UNIX other than OSF/1. One
should really look at the work done by Marshall Kirk McKusick et al. to
address the problem of I/O performance. This was done while he was at the CSRG
at Berkeley and resulted in the Berkeley Fast File System. The FFS work was
published in the early to mid 1980s, while OSF/1 didn't appear until after
1988, when computer manufacturers came together to form the Open Software
Foundation -- the OSF. To suggest that nobody looked at a section of UNIX code
because it worked so well seems odd to me. Links to the papers describing the
Fast File System are included in this Wikipedia page:
http://en.wikipedia.org/wiki/Unix_File_System

> By the time Linus "invented" Linux, that problem had been solved. ... those
> were interesting times.

"invented" == built upon Andrew Tanenbaum's Minix.
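For anyone who wants to reproduce the check Markus suggested upthread -- that
both read paths return identical bytes regardless of speed -- here is a rough
sketch. It uses a scratch file so it runs unprivileged; against real hardware
you would point dd at /dev/disk0 (buffered) and /dev/rdisk0 (raw) on Mac OS X
and run it as root. The file name and sizes are arbitrary assumptions.

```shell
#!/bin/sh
# Sketch only: verify two read paths return identical bytes.
# Substitute /dev/disk0 vs /dev/rdisk0 (as root) to test real devices.
SRC=/tmp/dd_identity_test.img            # hypothetical scratch file

# Create 8 MB of test data.
dd if=/dev/zero of="$SRC" bs=1048576 count=8 2>/dev/null

# Read it back with a small and a large transfer size -- loosely analogous
# to the buffered (block-at-a-time) vs raw (large direct transfer) paths.
SUM_SMALL=$(dd if="$SRC" bs=512 2>/dev/null | cksum)
SUM_LARGE=$(dd if="$SRC" bs=1048576 2>/dev/null | cksum)

# Identical checksums mean identical bytes, whatever the speed difference.
if [ "$SUM_SMALL" = "$SUM_LARGE" ]; then
    echo "identical"
else
    echo "DIFFER"
fi
rm -f "$SRC"
```

Prefixing each dd with `time` is what yields the MB/s figures quoted above;
the checksum comparison only shows the two data paths are equivalent.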
 - Milo

_______________________________________________
MacOSX-admin mailing list
[email protected]
http://www.omnigroup.com/mailman/listinfo/macosx-admin
