[EMAIL PROTECTED] wrote:
> Good stuff Adrian.  I've pretty much decided on UniData triggers to
> figure out what changed and write to a queue file, and then have some
> program pulling from that queue to flush to MySQL.  But I was hoping
> that I could do a lot of this in UniData, and I'm fearing I'm gonna
> have to write something on AIX that pushes to MySQL.  Not that it's
> all that difficult, but damn, I've been spoiled by UniData.

Kevin,

As Adrian says, one of the issues you need to handle is that while you'd
like to pump your changes straight over from UniData to MySQL from your
triggers in order to minimise the latency between the systems, you have to
allow for the possibility of the MySQL database being unavailable, or the
network being down.

Adrian has mentioned MQ (or WebSphere MQ, or whatever IBM are calling it
today).  If that were an option, it would give you guaranteed delivery,
and your triggers could just offload the data with no worries about
interim storage.

Otherwise, my experience is that the best thing to do is to use triggers to
log the change locally in another UniData file, or even to normalise it into
a number of UniData files, and then have a separate process constantly
running, trying to move data across from UniData into MySQL.  Such a process
could either be a UniData phantom using UCI and some sort of Unix ODBC
bridge to push the data into MySQL, or, if you create some normalised
UniData tables to store your deltas in the interim, a Windows-based process
reading out of UniData via ODBC and updating MySQL.  A sketch of both
pieces follows.
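
To make that concrete, here's a minimal UniBasic sketch of the two pieces -
a subroutine the triggers could call to queue a delta, and the phantom loop
that drains the queue.  The file name REPL.QUEUE, the record layout, and
the SEND.TO.MYSQL subroutine (which would wrap the UCI or ODBC push) are
all assumptions for illustration:

    SUBROUTINE LOG.DELTA(FILE.NAME, REC.ID, RECORD)
    * Queue one change for later replication.  REPL.QUEUE is an
    * assumed local UniData file; the key embeds a timestamp so a
    * sorted select can replay changes in rough time order (pad the
    * time if strict ordering matters).
       OPEN 'REPL.QUEUE' TO F.QUEUE ELSE RETURN
       QKEY = DATE():'*':TIME():'*':FILE.NAME:'*':REC.ID
       DELTA = ''
       DELTA<1> = FILE.NAME
       DELTA<2> = REC.ID
       DELTA<3> = LOWER(RECORD)  ;* whole record in one attribute; RAISE() to restore
       WRITE DELTA ON F.QUEUE, QKEY
       RETURN
    END

    * Phantom: drain REPL.QUEUE, pushing each delta across.
    * SEND.TO.MYSQL is hypothetical - it would wrap the UCI/ODBC
    * call.  A delta is only deleted once its push has succeeded.
    * (Plain SELECT gives no particular order; EXECUTE an SSELECT
    * instead if strict replay order matters.)
       OPEN 'REPL.QUEUE' TO F.QUEUE ELSE STOP
       LOOP
          SELECT F.QUEUE
          LOOP
             READNEXT QKEY ELSE EXIT
             READ DELTA FROM F.QUEUE, QKEY ELSE CONTINUE
             OK = 0
             CALL SEND.TO.MYSQL(DELTA, OK)
             IF OK THEN DELETE F.QUEUE, QKEY ELSE EXIT
          REPEAT
          SLEEP 5  ;* queue empty or MySQL down: wait, then poll again
       REPEAT
    END

If MySQL is unreachable, the deltas simply accumulate in REPL.QUEUE and get
replayed once the phantom can get through again.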

Cheers,

Ken

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Adrian
> Merrall

>> I have been assigned a unique project and have been given some pretty
>> stringent requirements.  Basically the project involves a subset
>> replication of a UniData database into MySQL.  As certain records
>> change in UniData (6.0), each such record is to be flushed to a separate
>> server running MySQL.  Off-hours batch updates are not an option at
>> this point; inconsistency between the systems is intended to be
>> momentary at most.
>
> This sort of stuff keeps life interesting.
>
>>
>> I can handle the conversion and flattening of the data; that's
>> certainly no deal breaker, but keeping the MySQL server updated on a
>> near-real-time basis has me a bit freaked out.  Has anyone handled
>> this kind of thing?  Is there a connector to MySQL that can be
>> addressed from within UniData?  I figure I could certainly do
>> something with ODBC, but with UniData running on AIX I'm not sure how
>> exactly I'd move the data to the MySQL server (running on a Win box)
>> directly.
>>
>> Other options off the top of my head include:
>>
>> * ...using an HTTP server to update the MySQL instance and using the
>> CallHTTP interface, or...
>
> If your web server is down, you then need to cache locally.
>
>> * ...writing a TCP listener to do the updating of the MySQL instance
>> and using UniData sockets to move the data.
>
> Similar to the above - if the listener is down you lose your data.
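
For what it's worth, both options boil down to "try a direct push, cache
locally if it fails".  Hedged UniBasic sketches of each, reusing the
LOG.DELTA queue subroutine from above as the fallback - the function names
come from the UniData 6 CallHTTP and socket APIs, but the exact argument
lists and mode flags should be checked against the docs, and the host
names, port and endpoint are made up:

    * Option 1: POST the delta to a web server fronting MySQL.
       ST = createRequest('http://webhost/replicate', 'POST', REQ)
       IF ST = 0 THEN
          ST = submitRequest(REQ, 30, DELTA, R.HDRS, R.DATA, HTTP.STATUS)
       END
       * (Also check HTTP.STATUS for a 2xx code before trusting ST.)
       IF ST # 0 THEN
          CALL LOG.DELTA(FILE.NAME, REC.ID, RECORD)  ;* server down: queue it
       END

    * Option 2: push the delta straight to a custom TCP listener.
       ST = openSocket('listenhost', 4000, '', 10, SCK)
       IF ST = 0 THEN
          ST = writeSocket(SCK, DELTA:CHAR(10), 10, 1, BYTES)
          JUNK = closeSocket(SCK)
       END
       IF ST # 0 THEN
          CALL LOG.DELTA(FILE.NAME, REC.ID, RECORD)  ;* listener down: queue it
       END

Either way the local queue does the real work; the direct send is just an
optimisation on top of it.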
>
>>
>> However, these will necessitate a bit of code that I'd prefer to
>> avoid.  What would you do?
>
> We use something similar to triggers.  Instead of writing directly to
> a file, our file writes go via a subroutine.  This subroutine writes
> to the file and also writes what we call replication records to another
> file.  We actually write a header and a data record (for a delete
> there is only the header record).  If I were doing it again I would
> look closely at UD triggers.
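
A rough picture of that wrapper, with the file names, the header/data
record layout and the key scheme all invented for illustration:

    SUBROUTINE REPL.WRITE(F.DATA, FILE.NAME, REC.ID, RECORD, EVENT)
    * Write wrapper in the style described above: do the real update,
    * then log a header record - and, except for deletes, a data
    * record - to a replication file.
       OPEN 'REPL.OUT' TO F.REPL ELSE RETURN
       SEQ = DATE():'*':TIME():'*':REC.ID
       IF EVENT = 'DELETE' THEN
          DELETE F.DATA, REC.ID
       END ELSE
          WRITE RECORD ON F.DATA, REC.ID
       END
       HDR = EVENT:@AM:FILE.NAME:@AM:REC.ID
       WRITE HDR ON F.REPL, 'H*':SEQ
       IF EVENT # 'DELETE' THEN WRITE RECORD ON F.REPL, 'D*':SEQ
       RETURN
    END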
>
> Then another process runs almost all the time, polling this file and
> sending the records.  If you wanted to cut down the lag you could do
> both: attempt a direct send and, if that fails, cache locally for a
> delayed send.  You would have to be careful with versioning to ensure
> a subsequent direct send didn't get clobbered by a delayed, cached
> message update.
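
Something like this, perhaps - the version stamp in attribute 4 and the
SEND.TO.MYSQL wrapper are assumptions:

    * Direct send with a cached fallback.  The version stamp lets the
    * receiver refuse a delayed replay that is older than the last
    * update it applied for that record ID.
       DELTA<4> = DATE():'.':TIME()  ;* crude version stamp
       OK = 0
       CALL SEND.TO.MYSQL(DELTA, OK)
       IF NOT(OK) THEN
          WRITE DELTA ON F.QUEUE, QKEY  ;* cache for the delayed sender
       END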
>
> Gotchas.
> If the destination box or the transfer process is down, your local
> message cache can build up really quickly - make sure the stop, clear
> and recovery processes are well understood.  It's bad news when
> replication to another box takes down your production server.
>
> Lost updates.  You may need some kind of validation/recovery process.
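
One cheap form of that, assuming the same queue plumbing as above:
periodically sweep the source file and stream a checksum per record, so the
receiving side can spot records it has lost and request a resend.  The file
name and message layout are made up, and CHECKSUM() availability should be
confirmed for your UniData release:

    * Audit sweep over an assumed source file.
       OPEN 'CUSTOMER' TO F.SRC ELSE STOP
       OPEN 'REPL.QUEUE' TO F.QUEUE ELSE STOP
       SELECT F.SRC
       LOOP
          READNEXT ID ELSE EXIT
          READ REC FROM F.SRC, ID ELSE CONTINUE
          MSG = 'AUDIT':@AM:'CUSTOMER':@AM:ID:@AM:CHECKSUM(REC)
          WRITE MSG ON F.QUEUE, 'A*':DATE():'*':TIME():'*':ID
       REPEAT
    END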
>
> We currently move the messages with scripts at the OS level (Linux),
> and it's a bit clumsy, but I'm in the early stages of looking into
> using JMS and the Apache ActiveMQ software.