Hi Woody,

Unfortunately, using the internal IDE is out of the question, since
the whole process in that company is to take the stick after the
sales tour, go into the office and plug it into a transfer PC, where
some other software reads in all the sales data, then updates the
stick with new tour data and maybe app and FreeDOS updates.
Well, you probably have a lot of control over the software,
so you could modify it to use the USB stick only as a vessel
to transport sales data, tour data and updates, while the
software installation resides on the internal disk. The
sticks could boot into a tool which updates the app and the
tours, checks the filesystem and creates a flag/lock file
when done. Booting from the stick again would instead update
the sales data on the stick. In the office, people could have
a tool (maybe a simple batch script) which gathers the sales
data from the stick, puts new updates and tour data on it
and clears the flag/lock file again for the next tour, etc.
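
Just to make the flag/lock idea concrete, here is a rough sketch
of the stick-side boot tool in C (16-bit DOS compiler style). The
flag file name, the drive letters and the helper scripts
UPDAPP.BAT / GETSALES.BAT are all made up for illustration:

  /* stickboot.c - sketch: decide the boot mode via a flag file.
     Assumption: the stick boots as C: and the internal disk is D:. */
  #include <stdio.h>
  #include <stdlib.h>

  #define FLAGFILE "C:\\TOURDONE.FLG"   /* invented name, on the stick */

  static int file_exists(const char *name)
  {
      FILE *f = fopen(name, "rb");
      if (f) { fclose(f); return 1; }
      return 0;
  }

  int main(void)
  {
      if (!file_exists(FLAGFILE)) {
          /* freshly prepared stick: install app and tour data on the
             internal disk, check the filesystem, then create the flag
             so that the NEXT boot exports sales data instead */
          FILE *flag;
          system("UPDAPP.BAT");         /* invented update helper */
          system("CHKDSK D:");          /* or whatever check you prefer */
          flag = fopen(FLAGFILE, "wb");
          if (flag) fclose(flag);
      } else {
          /* flag present: tour is over, copy sales data onto the stick */
          system("GETSALES.BAT");       /* invented export helper */
      }
      return 0;
  }

The office-side batch script would then just copy the sales files
off the stick, put new tour data and updates on it and delete the
flag file again.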

Still pondering non-USB and non-stick solutions:

You could also install "extension cords" so people can plug in
CF cards as IDE storage. My PC has a removable drive bay
(Wechselrahmen) which lets me connect a CF card through a purely
mechanical adapter. I stopped using it years ago, but you get
the idea. There are also adapters with a controller, for CF on SATA.

Think about the Delock 91687 or 91624, which only need a free
slot bracket. Unfortunately, the T510 does not have any space
for that; even a SATA cable would need a custom hole.

Neither CF as IDE nor USB sticks are hot-pluggable in the
DOS and BIOS scenario you are using, so there is no hot-plug
ability to lose, but getting rid of the USB complexity
still feels like a good thing to try out.

Another possibility would be to replace the USB sticks with
something which is less sensitive to power outages. Server
SSDs with supercaps come to mind, connected via SATA or via
a simple USB enclosure if you must stick to using USB. Or
you could improve the power supply hardware infrastructure.

That 4 MB XMS limit was just because FoxPro doesn't need more.

There are a few apps which get confused when they get too
much XMS (for example more than 32 MB of XMS 2.0, or more
than 2 GB of XMS 3.0), but I would not manually set any
limit unless you really have to, to "protect" old apps.
Windows 3 also does not like too much RAM, by the way.

As said, I suggest that you do not use EMM386-style
drivers unless you actually want EMS or UMB. If not,
it reduces complexity to use only a HIMEM-style driver.
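
For example, a minimal FDCONFIG.SYS along those lines could look
like this (the path is just the usual FreeDOS default, adjust as
needed):

  REM XMS only - no JEMM386/EMM386 line at all
  DEVICE=C:\FDOS\BIN\HIMEMX.EXE
  DOS=HIGH
  FILES=40
  BUFFERS=20

If some old app really needs a cap on the amount of XMS, HIMEMX
has an option for that (/MAX= if I remember correctly, check its
built-in help), but as said you normally should not need it.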

Regarding the medium types for USB boot: Floppy, Zip and Harddisk.

Those are basically predefined sets of CHS geometry.
Floppy goes up to 2.88 MB, Zip is more like a harddisk.
You usually stick to Harddisk and hope that BIOS and
OS will use LBA instead of CHS anyway, to avoid any
confusion about which geometry would be the best.
You can also boot read-only CD/DVD media or their images.

Most FreeDOS boot sectors (installed by SYS) and the kernel
autodetect LBA support of the BIOS, but for the FAT32 boot
sector, a fixed choice is made when you run SYS; there is no
autodetection at boot time.

The issue of caches, crashes and power outages:

You do not have to worry about the read caches available
in DOS. You should be safe if you close files AND call the
DOS disk reset and cache flush functions after that, as long
as crashes only happen between flushing and the beginning
of the next write. You can also try to postpone writes during
periods when crashes (engine restarts) are likely. This
will protect you from getting half-written, broken data.

As said, DOS itself is barely able to cache anything in
the sense of delayed or pooled WRITES. You can actually
improve performance using READ caches such as LBACACHE,
which should not affect your data corruption problem.
Only a WRITE cache would make your problem larger.

Unfortunately, basically ALL storage media apart from
floppy disks have intrinsic write "caching" in the sense
that your data gets converted and sent to the actual
disk or flash chip AFTER the computer has sent it.

So if your computer crashes while working on your files,
unsaved or partially saved data gets lost or, worse,
corrupted, for obvious reasons. But if POWER to your
drive is lost, MORE data can get lost. Some drives
use backup energy (a supercap or, for harddisks, their
mechanical energy) to mitigate that extra loss.
USB sticks, however, are not in that category. Neither
are CF cards. Their advantage is just having fewer
interface and data conversion layers in the pipeline.

Drives also tend to be configurable concerning whether
they are allowed to pool or cache written data. Other
people here may be able to recommend tools for this,
but I would say it is hard to control those settings
from DOS for USB drives and more feasible for SATA or
IDE drives. In Linux, you would use HDPARM for this.
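
For reference, from a Linux live system that would look roughly
like this (the device name is just an example; some drives forget
such settings after a power cycle, so re-check after booting):

  hdparm -W /dev/sda      # show whether the drive write cache is on
  hdparm -W 0 /dev/sda    # switch the drive write cache off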

Some drives (in particular some flash models; both SSDs
and USB sticks can suffer from this) also corrupt more
data than necessary, or even get completely bricked, if
a sudden power loss interrupts their internal bookkeeping.
This depends on brand and model.

Software recommendations:

Either way, I recommend updating your app so that database
updates carefully avoid periods where crashes (engine
restarts) are likely. Given that you already took care
to shorten the periods with files open for writing, I think
adding explicit disk reset and flush calls at the end of
each write process can also reduce the risk of data loss.
Sort of "write fencing", if you like.

An old example from my own apps: I collected measurement
data in RAM until a non-time-critical period was reached,
then opened the DOS logfiles to write data. The result
was that measurement timing was barely disturbed by the
unpredictable work of DOS dealing with disk contents.
A different motivation, but a similar strategy.

Given that FAT is not a journaling file system, it will
react badly to metadata writes which get interrupted
halfway. Interrupted file content writes will instead result
in half-updated files, without leaving any warning signals.

If you have control over that, you could make sure that
database file sizes never change (pad them to a fixed size?)
and that no files are created or deleted. Those are the
types of metadata changes which are most likely to damage
the file system. But as said, corrupted file contents can
still be a pain even with that metadata "protection".
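
A small sketch of the "fixed size, update in place" idea in C; the
size, file name and record layout are invented examples:

  /* fixsize.c - create the file at its final size ONCE, afterwards
     only overwrite records in place, so the FAT allocation and the
     file size never change again */
  #include <stdio.h>

  #define FIXED_SIZE 262144L          /* example: 256 KiB */

  static int preallocate(const char *path)
  {
      static const char block[512];   /* zero-filled sector */
      long left = FIXED_SIZE;
      FILE *f = fopen(path, "wb");
      if (f == NULL) return -1;
      while (left > 0) {
          size_t n = left > 512 ? 512 : (size_t) left;
          if (fwrite(block, 1, n, f) != n) {
              fclose(f);
              return -1;
          }
          left -= (long) n;
      }
      fclose(f);
      return 0;
  }

  /* later updates only seek and overwrite, never append/truncate */
  static int update_in_place(const char *path, long offset,
                             const void *rec, unsigned len)
  {
      FILE *f = fopen(path, "r+b");
      if (f == NULL) return -1;
      if (fseek(f, offset, SEEK_SET) != 0 ||
          fwrite(rec, 1, len, f) != len) {
          fclose(f);
          return -1;
      }
      fclose(f);
      return 0;
  }

  int main(void)
  {
      char rec[32] = "EXAMPLE RECORD";
      if (preallocate("SALES.DAT") != 0) return 1;
      return update_in_place("SALES.DAT", 0L, rec, sizeof rec)
          ? 1 : 0;
  }

Combined with the flush calls above, an interrupted update can then
only hit the sectors of the record being rewritten; the FAT chain
itself never has to change.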

Given that the hardware has plenty of space and speed, you
could also add some sort of redundancy, rotating backups,
extra sanity checks and so on to the software. You cannot
easily give DOS a journaling filesystem, but you could keep
multiple copies on several partitions and hope that only
some of them break. Or you could copy the database to a
RAMDISK and export it back to rotating copies on internal
or USB storage, carefully choosing moments for the copy
process where power outages are unlikely. That way, you
get well-defined losses when a crash happens, and lower
the risk of ending up with "half-working" files on disk.
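
A rough sketch of that rotating-copy export in C; the drive letters
(R: for the RAM disk, D: for internal storage), file names and the
two-generation scheme are all assumptions for illustration:

  /* rotbak.c - copy the database from the RAM disk to one of two
     generations on the internal disk, then rewrite a tiny marker
     file LAST. If power dies during the copy, the marker still
     names the previous good copy. */
  #include <stdio.h>

  static int copy_file(const char *src, const char *dst)
  {
      static char buf[4096];
      size_t n;
      FILE *in = fopen(src, "rb");
      FILE *out = in ? fopen(dst, "wb") : NULL;
      if (out == NULL) {
          if (in) fclose(in);
          return -1;
      }
      while ((n = fread(buf, 1, sizeof buf, in)) > 0)
          if (fwrite(buf, 1, n, out) != n) {
              fclose(in);
              fclose(out);
              return -1;
          }
      fclose(in);
      return fclose(out) == 0 ? 0 : -1;
  }

  int main(void)
  {
      char gen = '1';
      char dst[20];
      FILE *m = fopen("D:\\BACKUP.GEN", "rb"); /* newest copy?      */
      if (m) {
          int c = fgetc(m);
          fclose(m);
          if (c == '1') gen = '2';             /* use the OTHER one */
      }
      sprintf(dst, "D:\\SALES%c.DBF", gen);
      if (copy_file("R:\\SALES.DBF", dst) != 0)
          return 1;                            /* marker untouched  */
      m = fopen("D:\\BACKUP.GEN", "wb");       /* update marker LAST */
      if (m == NULL) return 1;
      fputc(gen, m);
      return fclose(m) == 0 ? 0 : 1;
  }

The only small risky moment left is the rewrite of the tiny marker
file itself; you could also add the disk reset / commit calls from
the write-fencing sketch after each step.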

Regards, Eric




