On Wednesday 21 November 2001 17:20, Stefan Doehla wrote:
> Hi!
>
> Der Herr Hofrat wrote:
> >>With ext2fs I sometimes lost very important source code files
> >>- this shouldn't happen with ReiserFS (and ext3fs) ...
> >
> > That can happen with reiserfs as well - reiserfs will not fuss at
> > you at bootup, but if you have a large filesystem you will see
> > random 0-size files appear after a few hard lockups (not RTL
> > related; you can get this from "normal" kernel lockups or power
> > failure as well) - the bad thing about reiserfs is that you will
> > not notice it until you happen to stumble over the file when you
> > need it.
>
> That happened to me with ext2 ...
> When you have to answer 'delete inode' with (y) some 100 times, you
> end up pressing and holding the enter key ... - now I have no
> screensaver, ...
> ;-)
Don't you get a bad feeling in your stomach or something when you do
that kind of thing? ;-)
(OTOH, by the time your machine dies, the damage is already done for
all practical purposes - fsck just cleans up so the fs won't crash
your system when accessed.)
> But a journaling filesystem at least gives me the chance to save my
> data and start the realtime program afterwards.
Perhaps you should sync as well... (Just to make sure your last changes
aren't thrown away as "incomplete journal entries".)
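(In code, a minimal sketch - assuming POSIX, and the helper name is
made up:)

    #include <unistd.h>

    /* Don't trust write() alone - push the record through the page
     * cache and the journal to the disk before reporting success. */
    int log_record(int fd, const void *buf, size_t len)
    {
        if (write(fd, buf, len) != (ssize_t)len)
            return -1;      /* error or short write */
        return fsync(fd);   /* 0 on success, -1 on failure */
    }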
> > This is quite well reproducible under heavy fs traffic - mount a
> > system via NFS (100 Mbit) and cp -a /NFS_SRC /NEW_LOCAL_DIR, lock
> > up the system, and reiserfs will not be able to balance the trees
> > any more -> files are corrupt or lost.
> >
> > The other problem with reiserfs (haven't tried ext3) is that during
> > tree balancing the fs stalls, so if you are doing heavy data logging
> > you will lose data. That is a problem that also bothers other heavy
> > logging apps on reiserfs, like network loggers (argus, etc.)
>
> Good data logging isn't reiser's playground.
I guess that easily happens with any journaling fs - just like disk
caching optimized for the average case gets in the way when streaming
multiple files to/from disk "at once".
[...]
> Is there one filesystem that's "best for realtime"?
I wouldn't think so. Maybe there are some good compromises that work in
most cases, but there can be no single "perfect" solution, as there are
so many different ways a real time application can use a file system.
Just for starters:
* Safe logging:
- Must survive system crash/power loss at any time,
without critical damage.
- No entries may be dropped. (That is, if
write() returns "ok", the data *must* be safe.
See the sketch just below this list.)
- Moderate bandwidth requirements. (Or a very, very
fast SCSI disk will be required! :-)
* Fast logging:
- No crash safety needed.
- Entries may be buffered for performance.
- Bandwidth close to the disk subsystem's
theoretical sustained rate can be achieved.
* Single channel streaming:
- No "instant start/seek" requirement.
- Heavy buffering can be used ==> high bandwidth.
- Rather "trivial" - no special APIs needed.
* Multiple channel streaming:
- No "instant start/seek" requirement.
- Heavy per stream buffering *required*!
- Cannot cooperate nicely with standard buffer caches.
- "QoS" API required for clean and safe control.
> I guess realtime always means: the filesystem (and disk writes/reads)
> is the Linux part -> lowest priority ...
Yeah... And even if the drivers were under RTL, there wouldn't be much
difference, as hard drives are non-deterministic by design. Indeed, there
is a "worst case" access time (approximately the time it takes to seek
across all tracks + the duration of one disk revolution), but that's not
all there is to it. Write attempts may fail and require any number of
retries. (Until the disk gives up and reports an error, which should
generally be interpreted as "Help! I'm dying!") Modern disks also use
"spare" tracks as replacements for bad sectors (practically all high
density disks have bad sectors!), which means that you'll occasionally
end up with extra seeks ==> the "worst case" seek time as defined above
is in fact bogus.
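(To put rough numbers on that "worst case": assuming, say, a 7200 RPM
drive, one revolution takes 60/7200 s = ~8.3 ms, and a full stroke seek
is somewhere in the 15-20 ms range - so the "theoretical" worst case
would be some 25-30 ms per access. Retries and remapped sectors can
blow way past that, which is the point.)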
> If there's a heavy load realtime thing -> no filesystem operations.
That, OTOH, is an error condition in the real time system, rather than an
OS design or file system problem. If you burn more power than you've got,
you're not going to get the job done properly no matter what.
Just give Linux the time it needs, and disk I/O will eventually be
performed. With realistic deadlines, you might actually get away with
treating the disk subsystem as a hard real time service! So what if the
worst case response time is *10 seconds* - if you buffer accordingly,
your system will work.
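To illustrate the "buffer accordingly" part, a rough sketch - assuming
POSIX threads, with all names made up and memory barriers glossed over:
the real time side drops data into a ring buffer and never touches the
disk, while a low priority thread drains the buffer with plain write()
calls. Size the buffer to cover the worst case I/O latency and you're
fine.

    #include <pthread.h>
    #include <unistd.h>

    #define RING_SIZE (1U << 20)    /* must cover worst case disk stall */

    static char ring[RING_SIZE];
    static volatile unsigned head, tail;    /* free-running counters */

    /* Real time side: never blocks. Returns -1 on overrun. */
    int rt_put(const char *data, unsigned len)
    {
        unsigned i;
        if (RING_SIZE - (head - tail) < len)
            return -1;
        for (i = 0; i < len; i++)
            ring[(head + i) % RING_SIZE] = data[i];
        head += len;
        return 0;
    }

    /* Low priority side; start it with pthread_create(&tid, NULL,
     * disk_writer, &fd) and let it grind away at whatever pace the
     * disk allows. */
    void *disk_writer(void *arg)
    {
        int fd = *(int *)arg;
        for (;;) {
            unsigned avail = head - tail;
            unsigned pos = tail % RING_SIZE;
            if (!avail) {
                usleep(10000);              /* nothing buffered yet */
                continue;
            }
            if (avail > RING_SIZE - pos)
                avail = RING_SIZE - pos;    /* don't wrap mid-write */
            ssize_t n = write(fd, ring + pos, avail);
            if (n > 0)
                tail += n;
        }
    }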
Even Windoze can play gigabytes of sampled instrument sounds from disk as
an "instant" response to MIDI events (*), so these kinds of systems are
far from theoretical possibilities. (No, it's not based on a time
travelling device. It's just a neat pre-caching trick, combined with
"ordinary" multitrack hard disk playback. See "EVO" for a Free/Open
Source implementation.)
(*) In fact, the disk subsystem is a minor problem in that case - the
real time mixing engine is where it gets *really* painful. It is
doable, but hey, if I'm going to hack in kernel space, I'd rather
do it under Linux. But I don't even have to do that, as there are
now ways to get better performance in Linux user space than you
can ever get *anywhere* in a Windows environment.
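(For the curious: stripped to the bone, the pre-caching trick amounts
to something like this - my own sketch, not EVO's actual code. Keep
the attack of every sample resident in RAM so note-on can be served
instantly, and stream the tail from disk while the attack plays:)

    #include <stdio.h>
    #include <stdlib.h>

    #define ATTACK_FRAMES 32768 /* must cover worst case disk latency */

    struct sample {
        short *attack;      /* first frames, always in RAM */
        long total_frames;
        FILE *file;         /* the tail is streamed from here */
    };

    /* Load the attack into RAM (raw 16 bit data, no header, for
     * simplicity); leave the file open for streaming. On note-on:
     * play s->attack right away and tell the disk thread to start
     * reading at frame ATTACK_FRAMES - by the time the attack runs
     * out, the stream has caught up. */
    struct sample *load_sample(const char *path, long total_frames)
    {
        struct sample *s = malloc(sizeof *s);
        if (!s || !(s->file = fopen(path, "rb"))) {
            free(s);
            return NULL;
        }
        s->total_frames = total_frames;
        s->attack = malloc(ATTACK_FRAMES * sizeof *s->attack);
        if (s->attack)
            fread(s->attack, sizeof *s->attack, ATTACK_FRAMES, s->file);
        return s;
    }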
//David Olofson --- Programmer, Reologica Instruments AB
.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'