I would suggest a trigger that updates a log file, and that is all it
does.

The log file, or better a pool of log files (to allow for performance,
broken files, etc.), could contain the standard trigger information
plus environment details, call stack, etc., which can then also be used
for auditing.
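
To make that concrete, here is a rough sketch of what one log entry
could carry. The real trigger would of course be UniBasic on the
Unidata side; this Python is purely illustrative and every field name
is made up:

    import json, time, getpass

    # One line per update: standard trigger data plus audit context.
    # All field names here are hypothetical.
    def log_entry(file_name, record_id, action, record):
        return json.dumps({
            "file": file_name,          # Unidata file that was written
            "id": record_id,            # record key
            "op": action,               # e.g. WRITE or DELETE
            "ts": time.time(),          # when the trigger fired
            "user": getpass.getuser(),  # who did it (for auditing)
            "record": record,           # after-image of the record
        })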

Have a background process which filters/transforms the data and updates
the remote database within a transaction. Flag or delete records from
the log only once they are committed to the remote database; otherwise
reprocess them. I am using XML/web services to do this to MS SQL.
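
A minimal sketch of that background process, assuming newline-delimited
JSON entries like the ones above and the mysql-connector-python driver
(table and column names are made up; MySQLdb or ODBC would do just as
well):

    import json
    import mysql.connector  # assumed driver, not the only option

    def process_log(log_path):
        conn = mysql.connector.connect(
            host="mysql-host", user="repl",
            password="secret", database="mirror")
        cur = conn.cursor()
        done = []
        with open(log_path) as log:
            for lineno, line in enumerate(log):
                entry = json.loads(line)
                # Filter/transform (flatten) the record here as needed.
                cur.execute(
                    "REPLACE INTO mirror_table (id, data) VALUES (%s, %s)",
                    (entry["id"], json.dumps(entry["record"])))
                done.append(lineno)
        conn.commit()  # only now is it safe to flag/delete log entries
        conn.close()
        return done    # caller marks these as committed, else reprocesses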

Process the log sequentially; then you are able to say where the
process is at and how far behind it is. It is better to use this
disjointed mechanism, as it will not slow down normal user processing
too much: the trigger is just doing one more write, not waiting for a
commit to a remote database. The disjoint process also allows for any
part to be down.
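
Keeping a checkpoint of the last committed entry makes both of those
easy to report; a sketch, assuming each log entry carries a sequence
number:

    import json, os

    def save_position(path, seq):
        # Persist the last committed sequence so a restart resumes here.
        with open(path, "w") as f:
            json.dump({"last_committed": seq}, f)

    def load_position(path):
        if not os.path.exists(path):
            return 0
        with open(path) as f:
            return json.load(f)["last_committed"]

    def lag(latest_logged_seq, last_committed_seq):
        # How far behind the replicator is, in log entries.
        return latest_logged_seq - last_committed_seq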

Multiple log files, with one current, allow for broken files, as if you
are capturing every write the log file will get large very quickly.
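
A rough sketch of that rotation (the size threshold and naming scheme
are arbitrary):

    import os, time

    MAX_BYTES = 64 * 1024 * 1024  # start a new file before this one is huge

    def current_log(dir_path):
        # Closed files are never written again, so a broken file only
        # costs one segment, and old segments can be archived.
        logs = sorted(f for f in os.listdir(dir_path) if f.endswith(".log"))
        if logs:
            latest = os.path.join(dir_path, logs[-1])
            if os.path.getsize(latest) < MAX_BYTES:
                return latest
        name = time.strftime("%Y%m%d%H%M%S") + ".log"
        path = os.path.join(dir_path, name)
        open(path, "a").close()  # create the new current file
        return path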

Cheers,

Phil.



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mike Randall
Sent: Saturday, 29 July 2006 2:01 p.m.
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Replication Between Unidata and MySQL

First thing that comes to mind is an update trigger on the Unidata side
that captures every write attempt.  The trigger could compare the before
and after versions of the record and execute a call to a SQL update
process that you come up with.  The trigger could normalize your data
(or whatever you needed done) and could be added with no impact to your
application.

Mike 

>From: "Kevin King" <[EMAIL PROTECTED]>
>Reply-To: u2-users@listserver.u2ug.org
>To: <u2-users@listserver.u2ug.org>
>Subject: [U2] Replication Between Unidata and MySQL
>Date: Fri, 28 Jul 2006 14:30:56 -0700
>
>I have been assigned a unique project and have been given some pretty
>stringent requirements.  Basically the project involves a subset
>replication of a Unidata database into MySQL.  As certain records
>change in Unidata (6.0) that record is to be flushed to a separate
>server running MySQL.  Off-hours batch updates are not an option at
>this point, inconsistency between systems is intended to be momentary
>at best.
>
>I can handle the conversion and flattening of the data; that's
>certainly no deal breaker but keeping the MySQL server updated on a
>near-realtime basis has me a bit freaked out.  Has anyone handled this
>kind of thing?  Is there a connector to MySQL that can be addressed
>from within Unidata?  I could certainly do something with ODBC I would
>figure but with Unidata running on AIX I'm not sure how exactly I'd
>move the data to the MySQL server (running on a Win box) directly.
>
>Other options off the top of my head include
>
>* ...using a http server to update the MySQL instance and using the 
>callHttp interface or...
>* ...writing a TCP listener to do the updating of the MySQL instance 
>and using Unidata sockets to move the data.
>
>However, these will necessitate a bit of code that I'd prefer to avoid.
>What would you do?
-------
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/
-------
