Sure, it's absolutely possible to encounter a race condition if you write
to a shared resource (IP arrays in your case) without a locking mechanism.
Its likelihood increases in compiled mode and as you add more
participants. Even if you can't predict how often you'll encounter a race
condition, you already know that the chance is greater than zero.
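
For what it's worth, here's a sketch of where the race actually bites,
assuming an interprocess text array <>atIPs (the names here are
invented for illustration). The classic size-then-insert append is not
atomic, and a semaphore is the usual home-grown fix:

  ` Unprotected append -- two processes can run this at the same time
  C_TEXT($newIP)
  C_LONGINT($n)
  $newIP:="192.168.1.10"
  $n:=Size of array(<>atIPs)+1   ` both may read the same size...
  INSERT IN ARRAY(<>atIPs;$n)
  <>atIPs{$n}:=$newIP            ` ...so one write clobbers the other

  ` Semaphore-guarded version: the three steps can no longer interleave
  While (Semaphore("$IPArrayLock";300))  ` spin until we own the lock
  End while
  $n:=Size of array(<>atIPs)+1
  INSERT IN ARRAY(<>atIPs;$n)
  <>atIPs{$n}:=$newIP
  CLEAR SEMAPHORE("$IPArrayLock")  ` always release, or everyone spins

The $ prefix makes the semaphore local to the machine, which is all
you need here, since interprocess arrays don't cross machines anyway.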

A few more points:

* What are you risking? If your data is written incorrectly because of
unexpected sequencing, what happens? Do you crash? Do you get
screwed-up calculations? If you don't care about the errors, and they
don't harm you...perhaps it doesn't matter. For example, if you're
writing to an event log and lose a few entries every 10,000, perhaps it
makes no real difference. On the other hand, if an error will cost you,
why risk those 2 out of 10,000 (or whatever) errors? Keep in mind that
*they will be very hard or impossible to detect or correct after the
fact.* I don't see the upside.

* Race conditions are, by nature, a bit unpredictable and *very* hard to
diagnose after the fact.

* Hoping that it won't happen isn't a sound strategy. Maybe it won't, maybe
it will. It definitely might.

* If you don't feel like writing your own locking scheme for IP
variables, think about using records instead of arrays. 4D already has
an easy-to-use locking system for records. (There's a sketch after
this list.)

* IP arrays are often used for tasks that would be better served by a
queue. If so, write a real queue using NTK IPC channels, or some other
scheme. (There are several options; one sketch follows this list.)

* If locking your IP arrays is a performance bottleneck, you probably need
an alternative design, not a low-quality implementation of shared resource
management.
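
Since the records bullet above promises a sketch: here's a minimal
example of letting 4D's built-in record locking do the work, assuming
a hypothetical [Counters] table with Name and Value fields:

  READ WRITE([Counters])
  QUERY([Counters];[Counters]Name="nextID")
  While (Locked([Counters]))          ` another process has the record
    DELAY PROCESS(Current process;10) ` wait a few ticks...
    LOAD RECORD([Counters])           ` ...and try to grab it again
  End while
  [Counters]Value:=[Counters]Value+1
  SAVE RECORD([Counters])
  UNLOAD RECORD([Counters])           ` release the lock for others

You trade a little speed for locking semantics that are already
debugged for you.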
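
And for the queue bullet: I won't try to reproduce the NTK IPC calls
from memory, but as one example of "some other scheme," here's a
hedged sketch of a FIFO pop built on a semaphore-guarded IP array
(<>atQueue and the semaphore name are invented):

  C_TEXT($item)
  $item:=""
  While (Semaphore("$QueueLock";300))
  End while
  If (Size of array(<>atQueue)>0)
    $item:=<>atQueue{1}             ` take the oldest entry
    DELETE FROM ARRAY(<>atQueue;1)  ` and remove it from the queue
  End if
  CLEAR SEMAPHORE("$QueueLock")

The producer side appends under the same semaphore, so readers and
writers never interleave mid-operation.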