Tony, I wish I had some say about architecture but at this point
everything is being dictated to me, language, database, everything BUT
transport because - I fear - the people making the mandates are more
clueless than the national average.  Definitely something to think
about if I can sneak it in under the radar.  Let's talk inside the
next couple of weeks.

-Kevin
[EMAIL PROTECTED]
http://www.PrecisOnline.com
 
** Check out scheduled Connect! training courses at
http://www.PrecisOnline.com/train.html.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tony Gravagno
Sent: Friday, July 28, 2006 8:33 PM
To: [email protected]
Subject: RE: [U2] Replication Between Unidata and MySQL [ad]

Kevin, you can use triggers for the first part as Adrian suggests.  As
always I'll recommend mv.NET to do the second part.  When you put your
data into a queue file, you can simultaneously log an action item into
a queue for mv.NET.  This will tell a new external routine what to
pick up from the queue file and what to do with it.  In this case,
read data from the queue file, update MySQL, and on confirmed update
remove the items from the UD queues, so you don't need to worry about
lost updates.
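
To make that concrete, here is roughly the shape of one queued action,
sketched in Java.  The field names are made up for illustration - this
is not mv.NET's API or a prescribed layout:

// Hypothetical shape of one queued change, logged by the trigger and
// consumed by the external routine.  Names are illustrative only.
public class QueueItem {
    public String fileName;  // UniData file that changed
    public String recordId;  // @ID of the changed record
    public String action;    // "WRITE" or "DELETE"
    public String data;      // dynamic-array data; null for a delete
}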

Yes, some code is required, but such is the price we pay for
sophisticated data manipulation between environments.

The MVExec freeware on my website requires mv.NET to communicate with
the DBMS, but here is how it can be used in this case:
- You have a program running on Windows in any language of your
choosing: Perl, PHP, VB, Java, etc.
- You use MVExec to query for data and pull a single record over if
there is anything scheduled to go to MySQL.
- You query MySQL and post the update.
- You use MVExec again to remove the trigger item.
- Loop as required (see the sketch below).
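
A rough Java sketch of that loop, using the QueueItem shape above.  The
two queue helpers are hypothetical stand-ins for the MVExec/mv.NET
calls, not the real API; the JDBC side is standard, though the table
and column names are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class QueuePump {
    public static void main(String[] args) throws Exception {
        Connection mysql = DriverManager.getConnection(
            "jdbc:mysql://localhost/mydb", "user", "pass");
        QueueItem item;
        while ((item = readNextQueueItem()) != null) {
            // Post the update to MySQL first...
            PreparedStatement ps = mysql.prepareStatement(
                "REPLACE INTO mirror (id, data) VALUES (?, ?)");
            ps.setString(1, item.recordId);
            ps.setString(2, item.data);
            ps.executeUpdate();
            // ...then remove the trigger item, so a crash between the
            // two steps re-sends a row rather than losing an update.
            removeQueueItem(item);
        }
        mysql.close();
    }

    // Stand-ins for the real MVExec calls - hypothetical, shown only
    // to make the sketch compile.
    static QueueItem readNextQueueItem() { throw new UnsupportedOperationException(); }
    static void removeQueueItem(QueueItem item) { throw new UnsupportedOperationException(); }
}
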
This process can be streamlined to pull the entire queue from Unidata
in the form of a large SQL INSERT or UPDATE query.  Then all you need
to do is execute the query.  On success, just delete the UD items.
Similarly, you can loop on the server data, build one long query on the
Windows side, then just execute it.
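
Building on the sketch above, the batched variant might look like this
(again, the table name and queue helpers are made up):

    // Batched variant: drain the queue, push one JDBC batch, and clear
    // the UD items only after the whole batch commits.
    static void pumpBatch(Connection mysql) throws Exception {
        java.util.List<QueueItem> batch = new java.util.ArrayList<>();
        QueueItem item;
        while ((item = readNextQueueItem()) != null) batch.add(item);
        if (batch.isEmpty()) return;

        mysql.setAutoCommit(false);
        PreparedStatement ps = mysql.prepareStatement(
            "REPLACE INTO mirror (id, data) VALUES (?, ?)");
        for (QueueItem q : batch) {
            ps.setString(1, q.recordId);
            ps.setString(2, q.data);
            ps.addBatch();
        }
        ps.executeBatch();
        mysql.commit();                                // all-or-nothing on MySQL
        for (QueueItem q : batch) removeQueueItem(q);  // now safe to clear UD
    }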

As you can see, there are a number of ways to implement this.  The
sticking point in the current thinking is the idea that Unidata needs
to communicate directly with the remote server.  If you leave the work
to a middle tier, which is where the MySQL environment is running
anyway, then all of the problems go away.

Personal comment: In a weird sense I'm getting as tired of
recommending mv.NET as some of you people are probably tired of seeing
the recommendations, and I often just don't jump in to offer related
solutions for just this reason.  But doesn't it tell us something that
there is a consistent answer for so many of these communications
problems?  Please remember that I came to this software as a user
because I didn't find anything else in our market that answered all
the questions.  Obviously after all of these years people are still
asking all of the same questions!
So after investigating this software and becoming comfortable with its
"depth" I decided to sell it and related services.  I did not just
jump into this market as a vendor, and my goal is not to just sell
software.  I try to share solutions that I've found to common problems
and I hope some people here will benefit.

Kevin, I'd be honored to work with you to make this happen.

Tony
TG@ removethisNebula-RnD.com

Kevin King wrote:
> Good stuff Adrian.  I've pretty much decided on Unidata triggers to
> figure out what changed and write to a queue file and then have some
> program pulling from that queue to flush to MySQL.  But I was hoping
> that I could do a lot of this in Unidata and I'm fearing I'm gonna
> have to write something in AIX that pushes to MySQL.  Not that it's
> all that difficult, but damn I've been spoiled by Unidata.

Adrian wrote:
>> However, these will necessitate a bit of code that I'd prefer to 
>> avoid.  What would you do?
 
> We use something similar to triggers.  Instead of writing directly to
> a file, our file writes are via a subroutine.  This subroutine writes
> to the file and writes what we call replication records to another
> file.  We actually write a header and a data record (for a delete
> there is only the header record).  If I was doing it again I would
> look closely at UD triggers.
> 
> Then another process runs almost all the time polling this file and
> sending the records.  If you wanted to cut down the lag you could do
> both: attempt a direct send and if this fails, cache locally for
> delayed send.  You would have to be careful with versioning to ensure
> a subsequent direct send didn't get clobbered by a delayed cached
> message update.
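>
> One way to do that version guard on the MySQL side - the table and
> column names here are made up, just to show the idea:
>
> // Apply an update only if it is newer than what is already there, so
> // a delayed cached message can't clobber a later direct send.
> PreparedStatement ps = mysql.prepareStatement(
>     "UPDATE mirror SET data = ?, version = ? " +
>     "WHERE id = ? AND version < ?");
> ps.setString(1, data);
> ps.setLong(2, version);
> ps.setString(3, id);
> ps.setLong(4, version);
> if (ps.executeUpdate() == 0) {
>     // Row is already at this version or newer - drop the stale message.
> }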
> 
> Gotchas.
> If the destination box or the transfer process are down, your local
> message cache can build up really quickly - make sure the stop, clear
> and recovery processes are well understood.  It's bad news when
> replication to another box takes down your production server.
> 
> Lost updates.  You may need some kind of validation/recovery process.
> 
> We currently move the messages with scripts at the OS level (Linux)
> and it's a bit clumsy, but I'm in the early stages of looking into
> using JMS and the Apache ActiveMQ software.
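>
> From what I've seen so far, the producer side of ActiveMQ is small -
> roughly like this (the queue name and broker URL are made up):
>
> import javax.jms.*;
> import org.apache.activemq.ActiveMQConnectionFactory;
>
> ConnectionFactory factory =
>     new ActiveMQConnectionFactory("tcp://localhost:61616");
> Connection conn = factory.createConnection();
> conn.start();
> Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
> MessageProducer producer =
>     session.createProducer(session.createQueue("replication"));
> // Each replication record goes out as one text message.
> producer.send(session.createTextMessage(payload));
> conn.close();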
> 
> HTH
> 
> Adrian
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/