Good stuff Adrian.  I've pretty much decided on Unidata triggers to
figure out what changed and write to a queue file and then have some
program pulling from that queue to flush to MySQL.  But I was hoping
that I could do a lot of this in Unidata and I'm fearing I'm gonna
have to write something in AIX that pushes to MySQL.  Not that it's
all that difficult, but damn I've been spoiled by Unidata.

-Kevin
[EMAIL PROTECTED]
http://www.PrecisOnline.com
 
** Check out scheduled Connect! training courses at
http://www.PrecisOnline.com/train.html.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Adrian
Merrall
Sent: Friday, July 28, 2006 3:54 PM
To: [email protected]
Subject: Re: [U2] Replication Between Unidata and MySQL

Kevin,

> I have been assigned a unique project and have been given some pretty
> stringent requirements.  Basically the project involves a subset
> replication of a Unidata database into MySQL.  As certain records
> change in Unidata (6.0) that record is to be flushed to a separate
> server running MySQL.  Off-hours batch updates are not an option at
> this point; inconsistency between systems is intended to be momentary
> at best.

This sort of stuff keeps life interesting.

>
> I can handle the conversion and flattening of the data; that's
> certainly no deal breaker, but keeping the MySQL server updated on a
> near-realtime basis has me a bit freaked out.  Has anyone handled this
> kind of thing?  Is there a connector to MySQL that can be addressed
> from within Unidata?  I could certainly do something with ODBC, I would
> figure, but with Unidata running on AIX I'm not sure how exactly I'd
> move the data to the MySQL server (running on a Win box) directly.
>
> Other options off the top of my head include
>
> * ...using a http server to update the MySQL instance and using the 
> callHttp interface or...

If your webserver is down you then need to cache locally.

> * ...writing a TCP listener to do the updating of the MySQL instance
> and using Unidata sockets to move the data.

Similar to the above - if the listener is down you lose your data.

>
> However, these will necessitate a bit of code that I'd prefer to 
> avoid.  What would you do?

We use something similar to triggers.  Instead of writing directly to
a file, our file writes go through a subroutine.  This subroutine writes
to the file and also writes what we call replication records to another
file.  We actually write a header and a data record (for a delete
there is only the header record).  If I were doing it again I would
look closely at UD triggers.
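To make the pattern concrete, here is a minimal sketch of the write-via-subroutine idea in Python (as a stand-in for the UniBasic subroutine - the dicts and the `write_record`/`delete_record` names are illustrative, not our actual code): every write goes through one routine that updates the data file and queues a header record plus a data record, while a delete queues only the header.

```python
import json

DATA = {}          # stands in for the Unidata data file
REPL_QUEUE = []    # stands in for the replication file

def write_record(record_id, record):
    """Write the record, then queue a header + data pair for replication."""
    DATA[record_id] = record
    REPL_QUEUE.append({"id": record_id, "action": "write"})   # header record
    REPL_QUEUE.append({"id": record_id, "data": record})      # data record

def delete_record(record_id):
    """For a delete, only the header record is queued."""
    DATA.pop(record_id, None)
    REPL_QUEUE.append({"id": record_id, "action": "delete"})

write_record("CUST*100", {"name": "Acme"})
delete_record("CUST*100")
print(json.dumps(REPL_QUEUE))
```

The sending process then only ever reads the replication queue, so the application code never needs to know whether MySQL is reachable.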

Then another process runs almost all the time polling this file and
sending the records.  If you wanted to cut down the lag you could do
both, attempt a direct send and if this fails, cache locally for
delayed send.  You would have to be careful with versioning to ensure
a subsequent direct send didn't get clobbered by a delayed cached
message update.
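The direct-send-with-fallback plus version check could be sketched like this (again a hedged Python stand-in - `deliver`, `apply_message`, and the in-memory `cache` are hypothetical names, not the real transport): each message carries a version, and a delivery is applied only if it is newer than what was last applied, so a delayed cached message can't clobber a later direct send.

```python
applied_versions = {}   # last version applied per record id on the MySQL side
cache = []              # locally cached messages awaiting delayed send

def deliver(msg):
    """Apply a message, skipping anything older than what's already applied."""
    if msg["version"] > applied_versions.get(msg["id"], 0):
        applied_versions[msg["id"]] = msg["version"]

def apply_message(msg, transport_up):
    """Attempt a direct send; on failure, cache locally for delayed send."""
    if transport_up:
        deliver(msg)
    else:
        cache.append(msg)

def flush_cache():
    """Polling pass: drain the cache; the version check drops stale messages."""
    while cache:
        deliver(cache.pop(0))

apply_message({"id": "A", "version": 1}, transport_up=False)  # cached
apply_message({"id": "A", "version": 2}, transport_up=True)   # direct send
flush_cache()  # stale version 1 is discarded, version 2 survives
```

The key design point is that the version check lives in the apply step, not the send step, so it protects you regardless of which path a message arrived by.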

Gotchas:
If the destination box or the transfer process is down, your local
message cache can build up really quickly - make sure the stop, clear
and recovery processes are well understood.  It's bad news when
replication to another box takes down your production server.

Lost updates.  You may need some kind of validation/recovery process.

We currently move the messages with scripts at the OS level (Linux)
and it's a bit clumsy, but I'm in the early stages of looking into using
JMS and the Apache ActiveMQ software.

HTH

Adrian
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/
-------