On Sun, Dec 4, 2016 at 1:36 PM, Kirk Brooks <[email protected]> wrote:
>
> Hi Steve,
> Thanks very much for posting this. It's exactly the kind of real-world
> experience I was hoping to hear.
>
> A couple of questions that come to mind:
> So for the exporting it sounds like you're using the basic export record
> function.
>

On the whole, no. Export Record is better suited to new records than to
modified records, and not as malleable if I want to embed some sort of
logic based on an internal field's content. So mostly I use a special
"Export_Import" table FORM with every field on it in the order I want,
copy that form to the satellite, and "export text" using that form and
"import text" using the same form on the other end. Strangely, I often
find that operation faster than using 4D's Export/Import Record. PLUS it
gives me a human-readable file if there are some "mysteries" to be
solved. "Sometimes low tech is better tech." Nothing to be ashamed of.
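The "same form on both ends" idea can be sketched outside of 4D as a fixed field order shared by exporter and importer, written as plain tab-delimited text so the file stays human-readable. This is a minimal Python analogue, assuming illustrative names (`FIELD_ORDER`, `export_text`, `import_text`); it is not the 4D EXPORT TEXT command itself.

```python
import csv
import io

# Same field order on master and satellite, playing the role of the
# shared "Export_Import" form. Field names here are made up for the sketch.
FIELD_ORDER = ["id", "name", "modified"]

def export_text(records):
    """Write each record's fields, in FIELD_ORDER, as tab-delimited text."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    for record in records:
        writer.writerow(record[field] for field in FIELD_ORDER)
    return buf.getvalue()

def import_text(text):
    """Read the same tab-delimited text back using the same field order."""
    rows = csv.reader(io.StringIO(text), delimiter="\t")
    return [dict(zip(FIELD_ORDER, row)) for row in rows]
```

Because the file is just tabs and newlines, it can be opened in any text editor when a "mystery" needs solving.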


> Have you upgraded any of these to v15 with the new journaling features?
>

"Journaling" is not meaningful in our master/satellite environment, the
way we implement it. Or at least not within my understanding of
journaling's capability; I like to have more control than that suggests.

> How are you resolving unique record ID issues and related records?
>

That was always a biggie that had to be planned out with caution. Over
time I found that I could put "starting" record IDs for each location in
ranges separated by "millions", sometimes 5M or 10M, or more. Obviously
not foolproof, but in the "real" world, over 35 years none of those
original master/satellite database pairs has overlapped. (And frankly,
none of the remaining ones still running ever will!) The key was having a
central place where those starting numbers were originated and kept track
of. These days I don't worry about it; I just use 4D's auto-generated
UUID field as the record ID, regardless of its location of origin.
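For readers outside 4D, the "starting IDs separated by millions" scheme can be sketched like this. `SiteRegistry` and `SPAN` are illustrative names for the sketch, not part of any 4D tool; the UUID line at the end shows the modern alternative mentioned above.

```python
import uuid

SPAN = 5_000_000  # gap between locations' starting IDs (5M; could be 10M or more)

class SiteRegistry:
    """The central place where each location's starting record ID is
    originated and kept track of."""
    def __init__(self, span=SPAN):
        self.span = span
        self.starts = {}      # location name -> starting record ID
        self.next_start = 1

    def register(self, site):
        start = self.next_start
        self.starts[site] = start
        self.next_start += self.span  # next location begins millions later
        return start

registry = SiteRegistry()
master_start = registry.register("master")        # gets 1
satellite_start = registry.register("satellite")  # gets 5_000_001

# The UUID alternative: unique regardless of location of origin,
# so no central registry is needed at all.
record_id = str(uuid.uuid4())
```

With big enough spans, a location would have to create millions of records before colliding with its neighbor's range.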


> One thought that comes to mind for keeping track of the changed records is
> to use the On saving an existing record trigger to either add a record to a
> tracking table or simply write a table/record reference to a log file and
> then use that log file to prepare the next export batch.
>

Nah. Much simpler. Just update a "DateTimeStamp" field (in every table,
every record) using a trigger on new and modified records, and keep a
separate list of "deleted" records via a delete trigger. When the sync
wakes up, it searches for every record whose DateTimeStamp is newer than
the last Updater stamp but older than "right now", moves the Updater
stamp to "right now", and exports the found records. Records in
transition (those in the process of being created or modified by current
users) get a new DateTimeStamp when they are saved, and therefore show up
in the next sync (anything equal to or 1 millisecond after "right now").
The current deletes list is added to each "updater" and simultaneously
zeroed out locally, ready to start the next sync operation. On the
receiving end, each "deleted" record is deleted if it exists. (Retain
that deleted list for a while. It is the weak link in this sort of thing,
since the original record will no longer be found unless you go the more
complex "tag deleted records but don't really delete them" route.)
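The updater cycle above can be sketched in Python rather than 4D. The record and timestamp structures here are illustrative assumptions, not 4D commands; the point is the window logic and the deletes list.

```python
def run_sync(records, deleted_ids, last_sync, now):
    """Collect records stamped since the last sync but before "right now",
    plus the current deletes list; clear the local deletes list."""
    batch = [r for r in records if last_sync <= r["stamp"] < now]
    deletes = list(deleted_ids)
    deleted_ids.clear()         # zeroed out locally, ready for the next cycle
    return batch, deletes, now  # 'now' becomes the next cycle's last_sync

def apply_sync(target, batch, deletes):
    """Receiving end: take in each exported record, then delete each
    listed ID if it exists."""
    for r in batch:
        target[r["id"]] = dict(r)
    for rid in deletes:
        target.pop(rid, None)   # no-op if the record was never there
```

A record saved at exactly "right now" (or a millisecond later) is simply left for the next cycle, which starts where this one ended.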

The central key here is that this is not a "journaling" sync operation,
or a "mirror" operation. It's a simple "updater" operation, with
unlimited logical flexibility at each end depending on the specific
project and its own unique master/satellite update needs, sometimes in a
two-way manner. I'm a fan of "low tech" for these operations.
------------------------------
Steve Simpson
Cimarron software
**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:[email protected]
**********************************************************************