> Hmm, one for doing my own locking, one against it. As this seems to be
> an obvious issue in any network, I wonder why the network developers
> have not appropriately addressed this issue. 

They have.  In the early 80's when network filesystems were invented they were 
incredibly slow.  So incredibly slow that it was often more effective to use 
speedy memos, interoffice pneumatic post, and secretaries (the people now 
called administrative assistants).  It was realized, however, that in fact 
there were "business rules" which precluded multiple people from updating the 
same things at the same time (this applied primarily to files on the network, 
at least initially).

By taking advantage of this knowledge, one could make the filesystem 
"optimistic" in its operations -- it could "optimistically" assume that a given 
filesystem object was only ever going to be updated by one person at a time, 
and therefore provide better performance in this case.  If the "optimism" was 
wrong and all hell broke loose, it was blamed on the application pedlars' 
failure to understand the design of the system.  In order to keep foolish 
people from doing things that simply would not work, the concept of a "shared 
file" came to mean a default of "exclusive" use.  Application writers then got 
around this by using "shared read" (which is not a problem) together with local 
working files.  Only the "first opener" got "exclusive write" access to the 
file.  Most applications still work that way to this day.
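
As a concrete illustration of that "first opener" pattern, here is a minimal 
sketch (the file name and the use of POSIX advisory locks are my own 
assumptions for illustration, not a description of any particular product): 
the first process to take an exclusive lock gets write access, and everyone 
else falls back to read-only (and, in the old pattern, a local working copy).  
Note that advisory locks like this are exactly the sort of thing that 
misbehaves over network filesystems, which is the point of the above.

    import fcntl
    import os

    def open_first_writer_else_reader(path):
        """First opener gets exclusive write; later openers get shared read."""
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
        try:
            # Try to become the "first opener" by taking a non-blocking
            # exclusive advisory lock on the file.
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return os.fdopen(fd, "r+"), "exclusive-write"
        except BlockingIOError:
            # Someone else already holds the write lock: fall back to
            # read-only access and (per the old pattern) a local working copy.
            os.close(fd)
            return open(path, "r"), "shared-read"

    if __name__ == "__main__":
        handle, mode = open_first_writer_else_reader("shared.dat")
        print("opened in", mode, "mode")
        handle.close()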

Then in the 90's (and subsequently) the snake-oil salesmen got involved and 
developed all sorts of snake-oil "WAN Optimizers" and other technology which 
improved things for the "single updater" but did nothing for "multiple 
updaters".  The snake-oil got thicker and thicker and it became impossible to 
turn it off.

Then in the 2000's some snake-oil vendors decided that they would allow "shared 
write" and attempted to implement robust methods to make this work, none of 
which works worth a pinch (or big pile) of male bovine excrement.

This is the situation which exists today.  Only today there is yet more male 
bovine excrement in the form of various snake-oil filesystem acceleration 
technologies which make even "legacy" style (exclusive access) not work 
properly.

Through all this, "local" access to a filesystem object (File) "exclusively" 
worked properly.  This is why there are client/server applications.  The 
"server" part of the application exclusively talks to the local filesystem 
object, and the "client" applications talk exclusively to the "server" portion 
of the application, which makes the desired changes to its local filesystem 
object.  This is how "network applications" are built, and it is the only way 
of addressing the issue which actually works.

So, if you want to have a shared database accessed from machines other than the 
machine on which the database file resides, you need to use a Client/Server 
implementation.  Doing it any other way is impossible (or at least very very 
very very difficult, with the cost far exceeding the cost of simply buying a 
client/server database system designed for the purpose), and attempting it is 
inviting disaster (unless you know precisely and exactly what you are doing).
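
To make the shape of that concrete, here is a minimal sketch of the 
client/server pattern (the port, file name, and one-line "protocol" are purely 
my own assumptions for illustration): a single server process is the only 
thing that ever opens the local file, it handles requests one at a time, and 
so every change goes through that one exclusive local opener.

    import socketserver

    DB_PATH = "app.db"            # hypothetical local file, opened only by this process
    HOST, PORT = "0.0.0.0", 9099  # hypothetical listening address

    class WriteHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # One request per connection: read a line from the client and
            # append it to the local file on the client's behalf.
            line = self.rfile.readline().decode("utf-8", "replace").rstrip("\n")
            with open(DB_PATH, "a") as f:
                f.write(line + "\n")
            self.wfile.write(b"OK\n")

    if __name__ == "__main__":
        # The plain (single-threaded) TCPServer handles one request at a
        # time, so writes to the local file are naturally serialized.
        with socketserver.TCPServer((HOST, PORT), WriteHandler) as server:
            server.serve_forever()

A client never touches the file at all; it only talks to the server process:

    import socket

    # "db-server.example" is a hypothetical host name for the machine
    # where the file lives and the server process runs.
    with socket.create_connection(("db-server.example", 9099)) as conn:
        conn.sendall(b"record to append\n")
        print(conn.recv(16))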

> Looks like I need to
> research this problem more before implementing. I dislike probability
> games of designs that will work most  of the time, but have a potential
> collision scenario. Such is why so many applications have the occasional
> crash or corruption.

Often that is the result of believing glossy brochures without having the 
underlying fundamental engineering knowledge of how things are implemented in 
the real world. 


