Below is a message I received from the DataPerfect discussion list.



Marcos Fávero Florence de Barros
Campinas, Brazil

Date: Tue, 19 Jun 2012 13:54:57 +1000
From: "Brian Hancock" <brian.hanc...@brileigh.com>
To: <datap...@dataperfect.nl>
Subject: Re: [Dataperf] Networking DP

Hi Marcos,

MS-DOS, and presumably FreeDOS, does not natively provide a framework for
safe file sharing and file locking; this needs to come from the server
software working in combination with requests from the application via the
networking client. Normally, when a simple DOS application opens a file it
opens it in a specific mode, e.g. read or read/write, and when a file is
opened in write mode it is opened exclusively, so any other process that
tries to open the file is given an "access denied" error.
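As a rough illustration of that "exclusive open" behaviour, here is a minimal sketch using POSIX advisory locks in Python. This is an analogy, not the DOS mechanism: flock() is Unix-only and purely cooperative, whereas DOS enforced the mode at open time for every process.

```python
# Sketch: emulating an exclusive ("deny all") write open with an
# advisory lock. A second opener is refused while the writer holds it.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.db")
open(path, "wb").close()

writer = open(path, "r+b")
# Take an exclusive, non-blocking lock: the "open for write" case.
fcntl.flock(writer, fcntl.LOCK_EX | fcntl.LOCK_NB)

reader = open(path, "rb")
try:
    # A second open file description tries to share the file...
    fcntl.flock(reader, fcntl.LOCK_SH | fcntl.LOCK_NB)
    denied = False
except BlockingIOError:
    denied = True  # ...and is denied, as under an exclusive write open
print(denied)
```

The key point is the same as in DOS: while one opener holds the file for writing, every other attempt to get at it fails rather than silently sharing.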

When task-switching applications, multitasking and networking came about, a
DOS program, share.exe (later share.com and vshare.exe/vshare.vxd), wedged
itself into the operating system as a TSR and allowed five sharing modes:
deny none, deny all, deny read, deny write, and a compatibility mode for
programs that did not understand how share.exe worked. Share.exe maintains
two tables in memory: one for each file that is opened, and another
recording which portions of each file are locked and in what mode.
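The two-table idea can be sketched as a toy in-memory model. The names, mode set and conflict rules below are illustrative only; the real SHARE.EXE sharing-mode matrix is considerably more involved.

```python
# Hypothetical toy model of SHARE.EXE's bookkeeping: one table of open
# files (with the sharing mode each opener requested) and one table of
# locked byte regions per file. Not the real TSR's layout or rules.
DENY_NONE, DENY_READ, DENY_WRITE, DENY_ALL = range(4)

open_files = {}    # path -> list of sharing modes of current openers
region_locks = {}  # path -> list of (start, end, mode) locked regions

def try_open(path, mode):
    """Refuse the open if any current opener conflicts (simplified)."""
    for existing in open_files.get(path, []):
        if existing == DENY_ALL or mode == DENY_ALL:
            return False
    open_files.setdefault(path, []).append(mode)
    return True

def lock_region(path, start, length, mode):
    """Refuse a lock that overlaps an already-locked byte range."""
    for (s, e, m) in region_locks.get(path, []):
        if start < e and s < start + length:
            return False
    region_locks.setdefault(path, []).append((start, start + length, mode))
    return True

print(try_open("db.str", DENY_NONE))              # first opener succeeds
print(try_open("db.str", DENY_ALL))               # exclusive opener refused
print(lock_region("db.str", 0, 128, DENY_WRITE))  # lock bytes 0-127
print(lock_region("db.str", 64, 64, DENY_WRITE))  # overlaps, refused
```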

File-server operating systems such as Novell NetWare, Unix, Windows Server
etc. have their own layers which provide this file-locking mechanism.
Early Windows used a variant of share.exe for its locking.

So when a client application, which can be running on the same machine or
on a different networked machine, requests that a file be opened, it
communicates the request through the networking client, so that the entire
file is not locked when someone opens it, and only a specific portion of
the file is loaded into memory and locked.
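That "lock only the portion you need" idea is record-level locking. A minimal POSIX sketch, again as an analogy for the DOS/NetWare mechanism (DOS exposed byte-range locks via INT 21h function 5Ch; lockf() stands in here), locks just one fixed-size record and leaves the rest of the file free:

```python
# Sketch: lock and update a single fixed-size record, not the whole file.
import fcntl
import os
import tempfile

RECORD_SIZE = 32
path = os.path.join(tempfile.mkdtemp(), "records.dat")
with open(path, "wb") as init:
    init.write(b"\0" * RECORD_SIZE * 10)  # ten fixed-size records

f = open(path, "r+b")
record_no = 3
f.seek(record_no * RECORD_SIZE)
# Lock only RECORD_SIZE bytes starting at the current position.
fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB, RECORD_SIZE)
f.write(b"X" * RECORD_SIZE)               # safe: we hold this region
f.seek(record_no * RECORD_SIZE)
fcntl.lockf(f, fcntl.LOCK_UN, RECORD_SIZE)
f.close()

with open(path, "rb") as check:
    check.seek(record_no * RECORD_SIZE)
    readback = check.read(RECORD_SIZE)
print(readback == b"X" * RECORD_SIZE)
```

Other processes could read or lock the remaining nine records the whole time; only record 3 was ever held.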

For advanced server applications the server can cache the reading and
writing of files. Caching can be dangerous, because a process that has
opened part of a file and cached it in memory may not be aware that another
process has also tried to modify the same segment of data; however, in an
advanced file-server operating system this conflict is resolved within the
operating system itself, which manages the contention and gives the best
result it can.

A bigger problem, however, is where the client application, or rather the
client networking software, decides it can increase performance by caching
file reads and writes, so instead of immediately asking the server to write
the data to disk it waits a moment until it has more time, when the CPU's
utilisation is a little lower. In many applications this can speed things
up; in a database application it is almost a death wish.
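The danger is easy to demonstrate with ordinary buffered I/O, which behaves like a small client-side write cache: until the cache is flushed, nobody else sees the write. A minimal sketch:

```python
# Sketch: a write sitting in a cache is invisible to other readers
# until it is flushed, which is exactly what corrupts shared databases.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "journal.dat")

f = open(path, "wb")                         # buffered writer
f.write(b"record-42")                        # lands in the cache only
seen_before_flush = open(path, "rb").read()  # another opener looks now

f.flush()
os.fsync(f.fileno())                         # force the data to disk
seen_after_flush = open(path, "rb").read()
f.close()

print(seen_before_flush)  # empty: the write is still only in the cache
print(seen_after_flush)
```

A second process making decisions based on the empty "before" view, while the first believes its record is safely written, is the same class of failure as the NetWare client caching described below.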

I have run DP over many different server and networking-client
combinations, with up to about 40 simultaneous connections, and never had a
problem unless I was running share.exe on the client or the client
networking software was set to enable network caching. For a short while
one of the Novell NetWare clients for Windows 3.x had client caching of
network reads and writes turned on by default, and this was the only time
I have ever had DP data corruption. The major corruption over that period
was with the STR file, which keeps track of the consecutive numbering of
records (I kept getting duplicates), but I also got other random corruption
problems with the TXX and index files. The other application that I know of
which suffered badly with this setting was Lotus Notes. Once the culprit
was found and caching turned off, everything worked fine.

Although this might not be your problem, it might help you in isolating
where some of the problems lie.


Freedos-user mailing list
