> David, Great points - but, respectfully, does it ever actually happen?

Unless 4D prevents it from happening, then it can happen. So, unless 4D groups multiple lines of code together as an atomic block, you can get different processes interleaving in overall execution order. Last I heard, 4D can switch execution between processes between lines. (I'm using "lines" a bit loosely here, since what we write as 1 or 10 lines of code may translate into something else once compiled.) That's precisely why you would need a lock. Particularly in compiled mode.
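To make that concrete, here's roughly the classic 4D pattern I have in mind: a named semaphore guarding an interprocess array. The semaphore and array names are made up for illustration; the point is that the size check and the append happen as one unit, so another process can't slip in between them.

    ` Hypothetical: several processes on one machine append to <>atLog,
    ` an interprocess text array declared at startup with ARRAY TEXT(<>atLog;0).
    While (Semaphore("sem_atLog"))  ` True means another process already holds it
        DELAY PROCESS(Current process;1)
    End while

    ` --- critical section: only one cooperating process at a time gets here ---
    $size:=Size of array(<>atLog)+1
    INSERT IN ARRAY(<>atLog;$size)
    <>atLog{$size}:="whatever we're appending"
    ` --- end critical section ---

    CLEAR SEMAPHORE("sem_atLog")

Of course, this only works if every process that touches <>atLog honors the same semaphore, which is exactly the discipline problem we're arguing about.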
> Can you write an example db that does this?

Probably - I've seen it done in the past. Would I now? No, I wouldn't bother trying. Unless 4D *guarantees* the non-standard behavior you're hoping to exploit, it's risky at best. They don't offer any such reassurance. Also, I pretty rarely used IP arrays for shared objects for much of anything...I just don't have a lot of situations where that's the best solution.

> And bear in mind I'm talking about conditions on a client - not the server.

Doesn't matter. It's a multi-process machine either way.

> And I'm not talking about situations where an external client might be
> attempting to set variables either.

You mean SET PROCESS VARIABLE, etc.? I forgot about those. I never use them. While they can be used without trouble if used correctly (and very carefully), they demonstrate a pretty profound violation of any sensible concept of scope. I mean, it's bad enough that we don't have private variables and functions without making it *worse* by giving people a stick to poke inside other processes with. Left up to me, there wouldn't even be public variables; everything would be set through functions. (Can you tell that I'm reading Bertrand Meyer again?) Those commands just plain suck. They're like bad globals on steroids. Thank goodness V16 is offering a very nice alternative for what people (mis)used SET PROCESS VARIABLE for in the past. (There's a quick sketch of that at the end of this message.)

I understand the temptation to think that race conditions won't happen to you. It's pretty easy to think that way. But since they can, chances are that they eventually will. That brings you back to what to do about it. Again, sometimes the situation is harmless (you lose some low-quality/low-value data); other times it's a big deal (you crash writing to an element that doesn't exist and/or you scramble key tracking data). Like any risk, the first question is "what harm is done if things go wrong?" It's not worth much effort to prevent a problem that causes no harm. On the other hand, if the outcome is going to be bad, is it worth the risk? And if The Bad Thing happens, how will you even know? How will you recover? Can you even detect it or recover? Again, it depends on what you're working with.

And, for the record, I've seen tons of 4D code that loads records for writing without checking that they're writable, either because of record locks or the table being in read-only. That's easy to understand and, in some ways, a less dire problem...but would anyone suggest it's a strong design or a reliable approach? I hope not. (Again, unless it's data that doesn't really need to be 100% correct or complete. There's plenty of that out there in the world.) There's a small example of that check at the end as well.
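Since I brought up the v16 alternative, here's a very rough sketch of what I mean: a shared object kept in Storage and modified inside Use...End use, instead of reaching into another process's variables with SET PROCESS VARIABLE. The property names are invented for the example.

    ` At startup, one process publishes a shared object:
    Use (Storage)
        Storage.counters:=New shared object("importCount";0)
    End use

    ` Later, from any process on this machine. The Use block serializes the
    ` read-modify-write, so two processes can't stomp on each other:
    Use (Storage.counters)
        Storage.counters.importCount:=Storage.counters.importCount+1
    End use

It's still per-machine, like interprocess variables, but the locking is explicit and 4D actually enforces it: modifying a shared object outside a Use block is an error rather than a silent race.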
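And for completeness, the record-level version of the same discipline I was just complaining about people skipping. Table and field names are made up; the point is simply to check Locked before trusting the save:

    READ WRITE([Orders])
    QUERY([Orders];[Orders]ID=$orderID)
    If (Records in selection([Orders])=1)
        If (Locked([Orders]))
            ` Another process or user has the record, or it loaded read-only:
            ` wait and retry, log it, or tell the user - anything but saving blindly.
        Else
            [Orders]Status:="Shipped"
            SAVE RECORD([Orders])
        End if
        UNLOAD RECORD([Orders])
    End if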

