> mount -w -oloop /tmp/blah.iso /mnt/cdrom

Sorry, the iso9660 filesystem is not read/write, it's read-only, and so
is the Linux driver for it. The reason is that in the iso9660 filesystem
the table of contents and the file data are carefully placed to minimise
head movement during file lookups and data loading (remember, this was
all developed almost 20 years ago, when CD-ROM drives were single-speed
at best). That is why you use a sophisticated program (mkisofs) to lay
everything out neatly before it gets written. Random writes would wreak
havoc on that orderliness.
UDF was designed with writing in mind, but its seek performance is
hugely inferior, on top of the fact that the Linux implementation of it
sucks hamsters with straws.

In theory, you can use any of the Linux filesystems on CD or DVD (yes,
including reiser). You create the filesystem image on harddisk and
loopmount it, then unmount and burn it. I use ext2. Speed tests I did a
while back on kernel 2.4.20/SuSE 8.2 indicated that an ordered ext2
does not perform worse than Linux's UDF, but it makes a reliable
backup, which UDF does not (there was corruption on the UDF
filesystem). Depending on the access pattern, both UDF and ext2 can
fail dismally (i.e. be so slow as to be seriously unusable in
practice), but for accessing single (or a few) files ext2 is OK.

The fastest way to create a big empty file for mkfs is the truncate()
system call; there's a frontend to it in my scriptutils. The "ordered"
ext2 is created by copying all files into a freshly mkfs'ed filesystem
in alphabetical order. The scripts cp-ext2 and cp-sort will help with
this.

If you're thinking of using the random-access write facilities on CD
and DVD (packet writing), you can pretty safely forget about it in
practice:

a) You need kernel patches which may or may not work reliably.
b) The speed you get out of it is unusable.

There is an upper limit for speed: block-read the disk to harddisk,
loopmount read/write, modify, burn the whole image back. At least for
CDs and kernel 2.4.20, packet writing is slower than even this upper
limit, i.e. it's faster not to bother with random-access writes and to
perform the steps as outlined above. The fact that it has been working
well for M$ for 5 years or so doesn't mean there's a usable
implementation of it for Linux, unfortunately.

HTH,

Volker

--
Volker Kuhlmann is possibly list0570 with the domain in header
http://volker.dnsalias.net/
Please do not CC list postings to me.
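P.S. The create-on-harddisk-then-burn workflow can be sketched roughly
like this (paths, the 650 MB size, and the burner device are examples;
the steps that need root are shown as comments). The dd invocation has
the same effect as a truncate() frontend: it writes no data, just sets
the file size, producing a sparse file.

```shell
# Create a sparse 650 MB image file instantly (no blocks written).
dd if=/dev/zero of=/tmp/backup.img bs=1M count=0 seek=650

# Put an ext2 filesystem on it (-F: it's a plain file, not a device).
mkfs.ext2 -F -q /tmp/backup.img

# The remaining steps need root -- loopmount, fill, unmount, burn:
#   mount -o loop /tmp/backup.img /mnt/img
#   cp -a /data/to/backup/. /mnt/img/   # or use cp-ext2/cp-sort
#   umount /mnt/img
#   cdrecord dev=0,0,0 /tmp/backup.img
```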
