RE: [U2] Replication Between Unidata and MySQL

2006-07-31 Thread Brian Leach
Kevin

I'd go for a 'keep it simple' approach.

First, create three data files on UniData: SYNC_TARGETS, SYNC_ACTIONS and
SYNC_ERRORS.

- The SYNC_TARGETS file defines a table or entity to update, including the
insert and delete commands (update = delete + insert) and a list of column
positions to substitute. This may seem like overkill, but it lets you format
the SQL the way you want, and possibly call stored procedures (AFAIK MySQL 5
has stored procedures but I haven't used them yet).

- The SYNC_ACTIONS file holds the actions, each specifying a target, an action
type (update, insert or delete) and a list of fields that match the column
positions specified in the target.

- The SYNC_ERRORS file logs errors.

How you populate the actions file is then up to you: using triggers, virtual
secondary index fields or whatever...

Then, write a simple VB/Delphi/.NET/VBScript app with a UO connection to:

- load up the targets for reference
- select the actions (on a timer loop) sorted by target (so you get
advantages of query preparation and caching - it is worth not making the
timer interval too short)
- prepare insert and delete ADODB commands on each change of target
- substitute the parameters and fire the delete and/or insert commands as
required

It means having a Windows box in the middle, but the route is well known.
Any failure just means the actions accumulate until drained.
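
(A minimal sketch of that middle-tier poller, in Java/JDBC rather than VB/ADODB,
just to make the shape concrete. The JDBC URL, SQL templates and the
UniObjects-side helpers fetchPendingActions()/markDone() are illustrative
placeholders, not working UO code:)

import java.sql.*;
import java.util.*;

// One queued entry from SYNC_ACTIONS (assumed shape).
class SyncAction {
    String target;            // which SYNC_TARGETS entry applies
    String type;              // "insert", "delete" or "update"
    String key;               // key value for the MySQL row
    List<String> fields;      // values in the column positions defined by the target
}

public class SyncPoller {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");               // Connector/J of the era
        try (Connection my = DriverManager.getConnection(
                "jdbc:mysql://winbox/replica", "user", "pw")) {
            while (true) {
                // Actions come back sorted by target so prepared statements get reused.
                List<SyncAction> actions = fetchPendingActions();    // hypothetical UO read
                String current = null;
                PreparedStatement del = null, ins = null;
                for (SyncAction a : actions) {
                    if (!a.target.equals(current)) {          // change of target: re-prepare
                        if (del != null) del.close();
                        if (ins != null) ins.close();
                        current = a.target;
                        del = my.prepareStatement(deleteSqlFor(current));  // from SYNC_TARGETS
                        ins = my.prepareStatement(insertSqlFor(current));
                    }
                    if (!a.type.equals("insert")) {           // delete, or update = delete+insert
                        del.setString(1, a.key);
                        del.executeUpdate();
                    }
                    if (!a.type.equals("delete")) {           // insert, or update = delete+insert
                        for (int i = 0; i < a.fields.size(); i++)
                            ins.setString(i + 1, a.fields.get(i));
                        ins.executeUpdate();
                    }
                    markDone(a);                              // hypothetical: clear the SYNC_ACTIONS entry
                }
                Thread.sleep(30000);                          // don't make the timer interval too short
            }
        }
    }

    // Stubs standing in for the UniObjects reads and the SQL templates held in SYNC_TARGETS.
    static List<SyncAction> fetchPendingActions() { return new ArrayList<SyncAction>(); }
    static void markDone(SyncAction a) { }
    static String deleteSqlFor(String target) { return "DELETE FROM " + target + " WHERE id = ?"; }
    static String insertSqlFor(String target) { return "INSERT INTO " + target + " VALUES (?, ?, ?)"; }
}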

I have a working example I can dig out and send you.

Brian



 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Kevin King
 Sent: 29 July 2006 03:12
 To: u2-users@listserver.u2ug.org
 Subject: RE: [U2] Replication Between Unidata and MySQL
 
 From: Of Mike Randall
 Sent: Friday, July 28, 2006 5:46 PM
 ...The trigger could normalize your data (or whatever you 
 needed done) 
 and could be added with no impact to your application.
 
 I wouldn't say no impact per se but I think the trigger 
 idea has merit overall as a minimally invasive change 
 watcher.  But then what?
 What's the best way to get the deltas into the MySQL instance?
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between UniData and MySQL [ad]

2006-07-30 Thread Stephen O'Neal
Proposing three alternatives:
1) UniData Triggers that call a UniData EDA (Extended Data Architecture)
interface to write to MySQL.  The only thing that has to be developed is
the MySQL interface for EDA.  The burden here is adding triggers for the
UDT database. Hummm
2) UDT Triggers that feed IBM's DataStage to perform the ETL (Extraction of
the data, Transformation of the data (ABC on the UDT system = 123 on
MySQL), and Load). The burden here is adding triggers for the UDT database.
Hummm
3) Using the WRITE subroutine methodology that updates both the UniData DB
and MySQL via EDA.  The burden here is changing all of the writes to call
subroutines. Hummm

A lot of work no matter what the alternative is.

Rephrasing Tony...
U2 Lab Services would be honored to work with you to provide a complete
solution.

   Steve

   Stephen M. O'Neal
   Lab Services Sales for U2
   IBM SWG Information Management Lab Services
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


Re: [U2] Replication Between Unidata and MySQL

2006-07-30 Thread Symeon Breen
BTW - I have successfully used the MySQL ODBC drivers on Unix to write to
MySQL on Windows and Unix machines, using the UniData BCI calls from my
DataBasic - it works very well.

One point to note - there was a bug in version 4 of the MyODBC drivers that
go with MySQL 4. The bug was to do with the destroy function, and there was
a workaround. However, you do not have that level of control in BCI, so
sometimes processes would crash out in the disconnect stage. I always use
the MyODBC 2.x drivers, which are very stable for most operations on MySQL
2, 3 and 4. Having said all this, the bug may well be fixed now anyway...
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between UniData and MySQL [ad]

2006-07-30 Thread [EMAIL PROTECTED]
Steve O'Neal wrote:
 Proposing three alternatives:
 1) UniData Triggers that call a UniData EDA (Extended Data Architecture)
 interface to write to MySQL.  The only thing that has to be developed is
 the MySQL interface for EDA.  The burden here is adding triggers for the
 UDT database. Hummm

Am I right in thinking EDA is OFS++?  I don't think Kevin
will have access to that in UniData 6.0, and I can't see him
wanting to write and link in an OFS driver for MySQL.  What
about IBM Lab Services - would they write an OFS driver when
the technology is *very* deprecated?

 2) UDT Triggers that feed IBM's DataStage to perform the ETL (Extraction
 of the data, Transformation of the data (ABC on the UDT system = 123 on
 MySQL), and Load). The burden here is adding triggers for the UDT
 database. Hummm

Sledgehammer, nut? ...

 3) using the WRITE Subroutine methodology that updates both the UniData
 DB and the MySQL via EDA.  The burden here is changing all of the writes
 to call subroutines. Hummm

Again, I'm having trouble with the juxtaposition of the
components EDA and UniData 6.0 in this solution.

 A lot of work no matter what the alternative is.

 Rephrasing Tony...
 U2 Lab Services would be honored to work with you to provide a complete
 solution.

Adding triggers for the appropriate files shouldn't be a big
problem once you've worked out what needs to go across.
Normalising and massaging that data on its way across could
be fun though.

I still think the option of pushing the data into MySQL
through ODBC and BCI (not UCI, which is the term I mentioned
the other day), or pulling it out from the Windows end via
an ODBC schema on your interim tables, makes for fairly simple,
understandable and testable solution options.

I also suspect IBM Lab Services would help you with these
too, if you wanted them to! ;^)

Cheers,

Ken
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between UniData and MySQL [ad]

2006-07-30 Thread Stephen O'Neal
EDA is not OFS.  It is a new portion of UDT 7.x.  In U2 Lab Services, we 
try to push new technologies rather than driving the car with our hands on 
the rear view mirror!

Man, stuck both feet in my mouth.  OK.  My Bad...

First foot... UDT 6.x does not have EDA.  It was included in 7.x.  Sorry.
But it may be worth the upgrade.  Another possibility with 7.x is XML.

If 6.x, then the ODBC/BCI route is possible.  But EDA/XML is much more
straightforward.

Second foot...  DataStage only works with UniVerse.  But the question of 
changing data on the target machine can be a real problem, even past the 
issue of normalization.

Make no bones about it - normalization can be horrible!  A good
portion of the U2 engineering resources was focused on the issues around
normalization.  I think this is the reason most of us agree that
normalization is not real world, and is not Normal!

My apologies for my bad breath! :-0

Going to brush my teeth now...
   Steve

   Stephen M. O'Neal
   Lab Services Sales for U2
   IBM SWG Information Management Lab Services
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between UniData and MySQL [ad]

2006-07-30 Thread David Jordan
EDA is not OFS.  It is a new portion of UDT 7.x.  In U2 Lab Services, we 
try to push new technologies rather than driving the car with our hands on 
the rear view mirror!


Just a thought.  The new version of DB2 has a free version to compete with
and convert MySQL users.  If you converted the MySQL application to DB2,
then it would be easier to use the EDA tool, which already has the driver
written for DB2.

Regards

David Jordan
Managing Consultant
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


Re: [U2] Replication Between UniData and MySQL [ad]

2006-07-30 Thread Clifton Oliver
DataStage Server edition does work with UniData. There is a UniData
passive stage in the same group as the UniVerse stage for access to
these files. Now, if you are planning on using the Hashed File stage:
1) it only works with UniVerse files; 2) it is a very bad idea, since
there is no guarantee IBM/Ascential will continue UV file and Hashed
File compatibility; and 3) it does not support the unnesting extensions
to normalize multi-values.


DataStage Enterprise edition (AKA PX or Parallel Extender, AKA  
Torrent) doesn't have either UV or UD stages and thus does not handle  
multi-values.



--

Regards,

Clif

~~~
W. Clifton Oliver, CCP
CLIFTON OLIVER & ASSOCIATES
Tel: +1 619 460 5678    Web: www.oliver.com
~~~


On Jul 30, 2006, at 6:45 PM, Stephen O'Neal wrote:


Second foot...  DataStage only works with UniVerse.

---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between UniData and MySQL [ad]

2006-07-30 Thread phil walker
If someone is looking at MySQL I hardly think they would contemplate
buying DataStage... at least the last time I saw any pricing... might be
different now that IBM owns it. But I suspect it financed quite a few
executives'/shareholders' retirements ;-). Only 1 or 2 sites left in this
part of the world - too expensive for the market size.


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Clifton Oliver
Sent: Monday, 31 July 2006 3:01 p.m.
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] Replication Between UniData and MySQL [ad]

DataStage Server edition does work with UniData. There is a UniData
passive stage in the same group with the UniVerse stage for access to
these files. Now, if you are planning on using the Hashed File stage,
1) that only works with UniVerse files; 2) is a very bad idea since
there is no guarantee IBM/Ascential will continue UV file and Hashed
File compatibility; 3) does not support the unnesting extensions to
normalize multi-values.

DataStage Enterprise edition (AKA PX or Parallel Extender, AKA
Torrent) doesn't have either UV or UD stages and thus does not handle
multi-values.


-- 

Regards,

Clif

~~~
W. Clifton Oliver, CCP
CLIFTON OLIVER  ASSOCIATES
Tel: +1 619 460 5678Web: www.oliver.com
~~~


On Jul 30, 2006, at 6:45 PM, Stephen O'Neal wrote:

 Second foot...  DataStage only works with UniVerse.
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL

2006-07-29 Thread Ken Wallis
[EMAIL PROTECTED] wrote:
 Good stuff Adrian.  I've pretty much decided on Unidata triggers to
 figure out what changed and write to a queue file and then have some
 program pulling from that queue to flush to MySQL.  But I was hoping
 that I could do a lot of this in Unidata and I'm fearing I'm gonna
 have to write something in AIX that pushes to MySQL.  Not that it's
 all that difficult, but damn I've been spoiled by Unidata.

Kevin,

As Adrian says, one of the issues you need to deal with is that while you'd
like to pump your changes straight over from UniData to MySQL from your
triggers in order to minimise the latency between the systems, you need to
deal with the possibility of the MySQL database being unavailable, or the
network being down.

Adrian has mentioned MQ (or WebSphere, or whatever IBM are calling it
today).  If that were an option, then it would give you guaranteed delivery
and your triggers could just offload the data with no worries about interim
storage.

Otherwise, my experience is that the best thing to do is to use triggers to
log the change locally in another UniData file, or even to normalise it into
a number of UniData files, and then have a separate process constantly
running, trying to move data across from UniData into MySQL.  Such a process
could either be a UniData phantom using UCI and some sort of Unix ODBC
bridge to push the data into MySQL, or, if you create some normalised UniData
tables to store your deltas in the interim, it could also be a Windows-based
process reading out of UniData via ODBC and updating MySQL.
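
(A minimal sketch of that second, Windows-based option in Java/JDBC. The DSN,
driver classes, and the DELTA_QUEUE/customer tables and columns are
illustrative assumptions, and it assumes the interim delta file is exposed as
an updatable table through the UniData ODBC schema - if it isn't, the purge
step would have to be done from the UniData side instead:)

import java.sql.*;
import java.util.*;

public class DeltaDrain {
    public static void main(String[] args) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");        // circa-2006 JDBC-ODBC bridge
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection ud = DriverManager.getConnection("jdbc:odbc:UNIDATA_DELTAS");
             Connection my = DriverManager.getConnection(
                     "jdbc:mysql://winbox/replica", "user", "pw")) {
            PreparedStatement upsert = my.prepareStatement(
                    "REPLACE INTO customer (id, name, balance) VALUES (?, ?, ?)");  // delete+insert in one
            List<Long> done = new ArrayList<Long>();
            try (Statement s = ud.createStatement();
                 ResultSet rs = s.executeQuery(
                         "SELECT SEQ, ID, NAME, BALANCE FROM DELTA_QUEUE ORDER BY SEQ")) {
                while (rs.next()) {
                    upsert.setString(1, rs.getString("ID"));
                    upsert.setString(2, rs.getString("NAME"));
                    upsert.setBigDecimal(3, rs.getBigDecimal("BALANCE"));
                    upsert.executeUpdate();                   // apply to MySQL first
                    done.add(rs.getLong("SEQ"));
                }
            }
            // Only clear deltas that made it into MySQL; anything else stays queued.
            PreparedStatement purge = ud.prepareStatement("DELETE FROM DELTA_QUEUE WHERE SEQ = ?");
            for (Long seq : done) {
                purge.setLong(1, seq);
                purge.executeUpdate();
            }
        }
    }
}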

Cheers,

Ken

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Adrian
 Merrall

 I have been assigned a unique project and have been given some pretty
 stringent requirements.  Basically the project involves a subset
 replication of a Unidata database into MySQL.  As certain records
 change in Unidata (6.0) that record is to be flushed to a separate
 server running MySQL.  Off-hours batch updates are not an option at
 this point, inconsistency between systems is intended to be momentary
 at best.

 This sort of stuff keeps life interesting.


 I can handle the conversion and flattening of the data; that's
 certainly no deal breaker but keeping the MySQL server updated on a
 near-realtime basis has me a bit freaked out.  Has anyone handled
 this kind of thing?  Is there a connector to MySQL that can be
 addressed from within Unidata?  I could certainly do something with
 ODBC I would figure but with Unidata running on AIX I'm not sure how
 exactly I'd move the data to the MySQL server (running on a Win box)
 directly.

 Other options off the top of my head include

 * ...using a http server to update the MySQL instance and using the
 callHttp interface or...

 If your webserver is down you then need to cache locally.

 * ...writing a TCP listener to do the updating of the MySQL instance

 and using Unidata sockets to move the data.

 Similar to the above - if the listener is down you lose your data.


 However, these will necessitate a bit of code that I'd prefer to
 avoid.  What would you do?

 We use something similar to triggers.  Instead of writing directly to
 a file, our file writes are via a subroutine.  This subroutine writes
 to the file and writes what we call replication records to another
 file.  We actually write a header and a data record (for a delete
 there is only the header record).  If I was doing it again I would
 look closely at UD triggers.

 Then another process runs almost all the time polling this file and
 sending the records.  If you wanted to cut down the lag you could do
 both, attempt a direct send and if this fails, cache locally for
 delayed send.  You would have to be careful with versioning to ensure
 a subsequent direct send didn't get clobbered by a delayed cached
 message update.

 Gotchas.
 If the destination box or the transfer process are down, your local
 message cache can build up really quick - make sure the stop, clear
 and recovery process are well understood.  Its bad news when
 replication to another box takes down your production server.

 Lost updates.  You may need some kind of validation/recovery process.

 We currently move the messages with scripts at the os level (linux)
 and its a bit clumsy but I'm in the early stages of looking into using
 jms and the apache activemq software.
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL [ad]

2006-07-29 Thread Kevin King
Tony, I wish I had some say about architecture but at this point
everything is being dictated to me, language, database, everything BUT
transport because - I fear - the people making the mandates are more
clueless than the national average.  Definitely something to think
about if I can sneak it in under the radar.  Let talk inside of the
next couple of weeks.

-Kevin
[EMAIL PROTECTED]
http://www.PrecisOnline.com
 
** Check out scheduled Connect! training courses at
http://www.PrecisOnline.com/train.html.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tony Gravagno
Sent: Friday, July 28, 2006 8:33 PM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Replication Between Unidata and MySQL [ad]

Kevin, you can use triggers for the first part as Adrian suggests.  As
always I'll recommend mv.NET to do the second part.  When you put your
data into a queue file, you can simultaneously log an action item into
a queue for mv.NET.  This will tell a new external routine what to
pick up from the queue file and what to do with it.  In this case,
read data from the queue file, update MySQL, and on confirmed update
remove the items from the UD queues - you don't need to worry about
lost updates.

Yes, some code is required, but such is the price we pay for
sophisticated data manipulation between environments.

The MVExec freeware on my website requires mv.NET to communicate with
the DBMS but here is how it can be used in this case:
- You have a program running over Windows in any language of your
choosing, Perl, PHP, VB, Java, etc.
- You use MVExec to query for data and pull a single record over if
there is anything scheduled to go to MySQL.
- You query MySQL and post the update.
- You use MVExec again to remove the trigger item.
Loop as required.
This process can be streamlined to pull the entire queue from unidata
in the form of a large SQL INSERT or UPDATE query.  Then all you need
to do is execute the query.  On success just delete the UD items.
Similarly you can loop on the server data, build one long query on the
windows side, then just execute it.

As you can see, there are a number of ways to implement this.  The
stopgap in the current thinking is the idea that Unidata needs to
communicate with the remote server.  If you leave the work to a middle
tier, which is where the MySQL environment is running anyway, then all
of the problems go away.

Personal comment: In a weird sense I'm getting as tired of
recommending mv.NET as some of you people are probably tired of seeing
the recommendations, and I often just don't jump in to offer related
solutions for just this reason.  But doesn't it tell us something that
there is a consistent answer for so many of these communications
problems?  Please remember that I came to this software as a user
because I didn't find anything else in our market that answered all
the questions.  Obviously after all of these years people are still
asking all of the same questions!
So after investigating this software and becoming comfortable with its
depth I decided to sell it and related services.  I did not just
jump into this market as a vendor, and my goal is not to just sell
software.  I try to share solutions that I've found to common problems
and I hope some people here will benefit.

Kevin, I'd be honored to work with you to make this happen.

Tony
TG@ removethisNebula-RnD.com

Kevin King wrote:
 Good stuff Adrian.  I've pretty much decided on Unidata triggers to
 figure out what changed and write to a queue file and then have some
 program pulling from that queue to flush to MySQL.  But I was hoping
 that I could do a lot of this in Unidata and I'm fearing I'm gonna
 have to write something in AIX that pushes to MySQL.  Not that it's
 all that difficult, but damn I've been spoiled by Unidata.


Adrian wrote:
 However, these will necessitate a bit of code that I'd prefer to
 avoid.  What would you do?

 We use something similar to triggers.  Instead of writing directly to
 a file, our file writes are via a subroutine.  This subroutine writes
 to the file and writes what we call replication records to another
 file.  We actually write a header and a data record (for a delete
 there is only the header record).  If I was doing it again I would
 look closely at UD triggers.

 Then another process runs almost all the time polling this file and
 sending the records.  If you wanted to cut down the lag you could do
 both, attempt a direct send and if this fails, cache locally for
 delayed send.  You would have to be careful with versioning to ensure
 a subsequent direct send didn't get clobbered by a delayed cached
 message update.

 Gotchas.
 If the destination box or the transfer process are down, your local
 message cache can build up really quick - make sure the stop, clear
 and recovery process are well understood.  Its bad news when
 replication to another box takes down your production server.

 Lost updates.  You may need some kind of validation/recovery process.

RE: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread Dan Fitzgerald
Could you use transaction logging & process the log file as soon as it
fills? You'd have to make very small log files in order to keep latency at a
minimum, but even then you might be looking at lag times on the order of 1
or 2 dozen minutes. Perhaps throttle that a bit by artificially filling the
log by writing a lot of zeroes to a file (created for that purpose &
logged) after x seconds, then ignore those zeroes at conversion...




From: Kevin King [EMAIL PROTECTED]
Reply-To: u2-users@listserver.u2ug.org
To: u2-users@listserver.u2ug.org
Subject: [U2] Replication Between Unidata and MySQL
Date: Fri, 28 Jul 2006 14:30:56 -0700

I have been assigned a unique project and have been given some pretty
stringent requirements.  Basically the project involves a subset
replication of a Unidata database into MySQL.  As certain records
change in Unidata (6.0) that record is to be flushed to a separate
server running MySQL.  Off-hours batch updates are not an option at
this point, inconsistency between systems is intended to be momentary
at best.

I can handle the conversion and flattening of the data; that's
certainly no deal breaker but keeping the MySQL server updated on a
near-realtime basis has me a bit freaked out.  Has anyone handled this
kind of thing?  Is there a connector to MySQL that can be addressed
from within Unidata?  I could certainly do something with ODBC I would
figure but with Unidata running on AIX I'm not sure how exactly I'd
move the data to the MySQL server (running on a Win box) directly.

Other options off the top of my head include

* ...using a http server to update the MySQL instance and using the
callHttp interface or...
* ...writing a TCP listener to do the updating of the MySQL instance
and using Unidata sockets to move the data.

However, these will necessitate a bit of code that I'd prefer to
avoid.  What would you do?

-Kevin
[EMAIL PROTECTED]
http://www.PrecisOnline.com http://www.precisonline.com/

** Check out scheduled Connect! training courses at
http://www.precisonline.com/train.html
http://www.PrecisOnline.com/train.html.

---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread Kevin King
Desired inconsistency is a second or less, the time it takes to push
bits on a wire.  Interesting ideas, of course, but not likely.

-Kevin
[EMAIL PROTECTED]
http://www.PrecisOnline.com
 
** Check out scheduled Connect! training courses at
http://www.PrecisOnline.com/train.html.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dan
Fitzgerald
Sent: Friday, July 28, 2006 3:09 PM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Replication Between Unidata and MySQL

Could you use transaction logging & process the log file as soon as it
fills? You'd have to make very small log files in order to keep
latency at a minimum, but even then you might be looking at lag times
on the order of 1 or 2 dozen minutes. Perhaps throttle that a bit by
artificially filling the log by writing a lot of zeroes to a file
(created for that purpose & logged) after x seconds, then ignore those
zeroes at conversion...


From: Kevin King [EMAIL PROTECTED]
Reply-To: u2-users@listserver.u2ug.org
To: u2-users@listserver.u2ug.org
Subject: [U2] Replication Between Unidata and MySQL
Date: Fri, 28 Jul 2006 14:30:56 -0700

I have been assigned a unique project and have been given some pretty
stringent requirements.  Basically the project involves a subset
replication of a Unidata database into MySQL.  As certain records
change in Unidata (6.0) that record is to be flushed to a separate
server running MySQL.  Off-hours batch updates are not an option at
this point, inconsistency between systems is intended to be momentary
at best.

I can handle the conversion and flattening of the data; that's
certainly no deal breaker but keeping the MySQL server updated on a
near-realtime basis has me a bit freaked out.  Has anyone handled this
kind of thing?  Is there a connector to MySQL that can be addressed
from within Unidata?  I could certainly do something with ODBC I would
figure but with Unidata running on AIX I'm not sure how exactly I'd
move the data to the MySQL server (running on a Win box) directly.

Other options off the top of my head include

* ...using a http server to update the MySQL instance and using the 
callHttp interface or...
* ...writing a TCP listener to do the updating of the MySQL instance 
and using Unidata sockets to move the data.

However, these will necessitate a bit of code that I'd prefer to
avoid.  What would you do?

-Kevin
[EMAIL PROTECTED]
http://www.PrecisOnline.com http://www.precisonline.com/

** Check out scheduled Connect! training courses at 
http://www.precisonline.com/train.html
http://www.PrecisOnline.com/train.html.
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


Re: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread Adrian Merrall

Kevin,


I have been assigned a unique project and have been given some pretty
stringent requirements.  Basically the project involves a subset
replication of a Unidata database into MySQL.  As certain records
change in Unidata (6.0) that record is to be flushed to a separate
server running MySQL.  Off-hours batch updates are not an option at
this point, inconsistency between systems is intended to be momentary
at best.


This sort of stuff keeps life interesting.



I can handle the conversion and flattening of the data; that's
certainly no deal breaker but keeping the MySQL server updated on a
near-realtime basis has me a bit freaked out.  Has anyone handled this
kind of thing?  Is there a connector to MySQL that can be addressed
from within Unidata?  I could certainly do something with ODBC I would
figure but with Unidata running on AIX I'm not sure how exactly I'd
move the data to the MySQL server (running on a Win box) directly.

Other options off the top of my head include

* ...using a http server to update the MySQL instance and using the
callHttp interface or...


If your webserver is down you then need to cache locally.


* ...writing a TCP listener to do the updating of the MySQL instance
and using Unidata sockets to move the data.


Similar to the above - if the listener is down you lose your data.
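
(For what it's worth, such a listener on the MySQL box needn't be much code.
Below is a hedged Java sketch; the port, the tab-delimited "table, key,
values..." line format and the JDBC details are all invented for illustration,
and the caveat above still applies - if this process is down, the UniData side
must cache or it loses the update:)

import java.io.*;
import java.net.*;
import java.sql.*;

public class DeltaListener {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection my = DriverManager.getConnection(
                "jdbc:mysql://localhost/replica", "user", "pw");   // assumed URL/credentials
        try (ServerSocket server = new ServerSocket(9321)) {        // arbitrary port
            while (true) {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream(), "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        apply(my, line.split("\t"));
                    }
                } catch (Exception e) {
                    e.printStackTrace();          // a real listener would log and keep going
                }
            }
        }
    }

    // f[0] = table name, f[1] = key, f[2..] = remaining column values (a trusted sender is assumed)
    static void apply(Connection my, String[] f) throws SQLException {
        StringBuilder sql = new StringBuilder("REPLACE INTO " + f[0] + " VALUES (?");
        for (int i = 2; i < f.length; i++) sql.append(", ?");
        sql.append(")");
        try (PreparedStatement ps = my.prepareStatement(sql.toString())) {
            ps.setString(1, f[1]);
            for (int i = 2; i < f.length; i++) ps.setString(i, f[i]);
            ps.executeUpdate();
        }
    }
}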



However, these will necessitate a bit of code that I'd prefer to
avoid.  What would you do?


We use something similar to triggers.  Instead of writing directly to
a file, our file writes are via a subroutine.  This subroutine writes
to the file and writes what we call replication records to another
file.  We actually write a header and a data record (for a delete
there is only the header record).  If I was doing it again I would
look closely at UD triggers.

Then another process runs almost all the time polling this file and
sending the records.  If you wanted to cut down the lag you could do
both, attempt a direct send and if this fails, cache locally for
delayed send.  You would have to be careful with versioning to ensure
a subsequent direct send didn't get clobbered by a delayed cached
message update.
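
(One hedged way to do that versioning: carry an increasing version number - or
update timestamp - with every replication record, and make the MySQL update
conditional on it, so a delayed cached message can never overwrite a row that a
later direct send has already refreshed.  The table and column names below are
assumptions for illustration:)

import java.sql.*;

public class VersionedApply {
    // Apply one replication record, but only if it is newer than what MySQL already holds.
    static void apply(Connection my, String id, String name, long version) throws SQLException {
        // Insert the row if it does not exist yet (IGNORE leaves an existing row alone).
        try (PreparedStatement ins = my.prepareStatement(
                "INSERT IGNORE INTO customer (id, name, version) VALUES (?, ?, ?)")) {
            ins.setString(1, id);
            ins.setString(2, name);
            ins.setLong(3, version);
            ins.executeUpdate();
        }
        // Otherwise update it only when this message carries a higher version.
        try (PreparedStatement upd = my.prepareStatement(
                "UPDATE customer SET name = ?, version = ? WHERE id = ? AND version < ?")) {
            upd.setString(1, name);
            upd.setLong(2, version);
            upd.setString(3, id);
            upd.setLong(4, version);
            int touched = upd.executeUpdate();
            // touched == 0 means a newer version was already there, so this stale message is ignored.
        }
    }
}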

Gotchas.
If the destination box or the transfer process are down, your local
message cache can build up really quickly - make sure the stop, clear
and recovery processes are well understood.  It's bad news when
replication to another box takes down your production server.

Lost updates.  You may need some kind of validation/recovery process.

We currently move the messages with scripts at the OS level (Linux)
and it's a bit clumsy, but I'm in the early stages of looking into
using JMS and the Apache ActiveMQ software.

HTH

Adrian
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


Re: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread Adrian Merrall

Kevin,

I forgot to mention - you could look into the IBM MQSeries support
added to UniData, but I don't think it was there at 6; you may need
to upgrade for this and hand IBM a lot of $ for the WebSphere MQ
software.

Adrian
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread Kevin King
Good stuff Adrian.  I've pretty much decided on Unidata triggers to
figure out what changed and write to a queue file and then have some
program pulling from that queue to flush to MySQL.  But I was hoping
that I could do a lot of this in Unidata and I'm fearing I'm gonna
have to write something in AIX that pushes to MySQL.  Not that it's
all that difficult, but damn I've been spoiled by Unidata.

-Kevin
[EMAIL PROTECTED]
http://www.PrecisOnline.com
 
** Check out scheduled Connect! training courses at
http://www.PrecisOnline.com/train.html.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Adrian
Merrall
Sent: Friday, July 28, 2006 3:54 PM
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] Replication Between Unidata and MySQL

Kevin,

 I have been assigned a unique project and have been given some pretty
 stringent requirements.  Basically the project involves a subset
 replication of a Unidata database into MySQL.  As certain records
 change in Unidata (6.0) that record is to be flushed to a separate
 server running MySQL.  Off-hours batch updates are not an option at
 this point, inconsistency between systems is intended to be momentary
 at best.

This sort of stuff keeps life interesting.


 I can handle the conversion and flattening of the data; that's
 certainly no deal breaker but keeping the MySQL server updated on a
 near-realtime basis has me a bit freaked out.  Has anyone handled this
 kind of thing?  Is there a connector to MySQL that can be addressed
 from within Unidata?  I could certainly do something with ODBC I would
 figure but with Unidata running on AIX I'm not sure how exactly I'd
 move the data to the MySQL server (running on a Win box) directly.

 Other options off the top of my head include

 * ...using a http server to update the MySQL instance and using the 
 callHttp interface or...

If your webserver is down you then need to cache locally.

 * ...writing a TCP listener to do the updating of the MySQL instance
 and using Unidata sockets to move the data.

Similar to the above - if the listener is down you lose your data.


 However, these will necessitate a bit of code that I'd prefer to 
 avoid.  What would you do?

We use something similar to triggers.  Instead of writing directly to
a file, our file writes are via a subroutine.  This subroutine writes
to the file and writes what we call replication records to another
file.  We actually write a header and a data record (for a delete
there is only the header record).  If I was doing it again I would
look closely at UD triggers.

Then another process runs almost all the time polling this file and
sending the records.  If you wanted to cut down the lag you could do
both, attempt a direct send and if this fails, cache locally for
delayed send.  You would have to be careful with versioning to ensure
a subsequent direct send didn't get clobbered by a delayed cached
message update.

Gotchas.
If the destination box or the transfer process are down, your local
message cache can build up really quick - make sure the stop, clear
and recovery process are well understood.  Its bad news when
replication to another box takes down your production server.

Lost updates.  You may need some kind of validation/recovery process.

We currently move the messages with scripts at the os level (linux)
and its a bit clumsy but I'm in the early stages of looking into using
jms and the apache activemq software.

HTH

Adrian
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread Mike Randall
First thing that comes to mind is an update trigger on the Unidata side that
captures every write attempt.  The trigger could compare the before and
after versions of the record and execute a call to a SQL update process that
you come up with.   The trigger could normalize your data (or whatever you
needed done) and could be added with no impact to your application.

Mike 

From: Kevin King [EMAIL PROTECTED]
Reply-To: u2-users@listserver.u2ug.org
To: u2-users@listserver.u2ug.org
Subject: [U2] Replication Between Unidata and MySQL
Date: Fri, 28 Jul 2006 14:30:56 -0700

I have been assigned a unique project and have been given some pretty
stringent requirements.  Basically the project involves a subset
replication of a Unidata database into MySQL.  As certain records
change in Unidata (6.0) that record is to be flushed to a separate
server running MySQL.  Off-hours batch updates are not an option at
this point, inconsistency between systems is intended to be momentary
at best.

I can handle the conversion and flattening of the data; that's
certainly no deal breaker but keeping the MySQL server updated on a
near-realtime basis has me a bit freaked out.  Has anyone handled this
kind of thing?  Is there a connector to MySQL that can be addressed
from within Unidata?  I could certainly do something with ODBC I would
figure but with Unidata running on AIX I'm not sure how exactly I'd
move the data to the MySQL server (running on a Win box) directly.

Other options off the top of my head include

* ...using a http server to update the MySQL instance and using the 
callHttp interface or...
* ...writing a TCP listener to do the updating of the MySQL instance 
and using Unidata sockets to move the data.

However, these will necessitate a bit of code that I'd prefer to
avoid.  
What would you do?
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread phil walker
I would suggest a trigger which updates a log file and that is all it
does.

The log file - or, better, a pool of log files, to allow for performance,
broken files, etc. - could contain the standard trigger information
plus environment details/call stack etc. (which can then be used for auditing).

Have a background process which filters/transforms the data and updates the
remote database within a transaction. Flag/delete records from the log only
when they are committed to the remote database, otherwise reprocess them. I am
using XML/web services to do this to MS SQL.

Process the log sequentially; then you are able to say where the process is at
and how far behind it is. Better to use this disjointed mechanism, as it
will not slow down normal user processing too much - the trigger is just
doing one more write, not waiting for a commit to a remote database. The
disjoint process also allows for any part to be down.

Multiple log files, with one current, allow for broken files, as if you are
capturing every write the log file will get large very quickly.
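
(A minimal sketch of that background process in Java/JDBC, assuming a
transactional storage engine such as InnoDB on the MySQL side.  Reading the
UniData log and flagging processed records are stubbed out as hypothetical
helpers, since they depend on how the log file is exposed:)

import java.sql.*;
import java.util.*;

public class LogDrain {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection my = DriverManager.getConnection(
                "jdbc:mysql://remotebox/replica", "user", "pw")) {
            my.setAutoCommit(false);                     // work in explicit transactions
            List<String[]> batch = readLogRecords();     // hypothetical: next chunk of the log
            try {
                for (String[] rec : batch) {
                    applyToMySQL(my, rec);               // the filtered/transformed update
                }
                my.commit();                             // only now is the remote copy durable
                flagProcessed(batch);                    // hypothetical: flag/delete the log records
            } catch (SQLException e) {
                my.rollback();                           // leave the log alone so it is reprocessed
            }
            // A real phantom/service would loop and sleep here rather than run once.
        }
    }

    static void applyToMySQL(Connection my, String[] rec) throws SQLException {
        try (PreparedStatement ps = my.prepareStatement(
                "REPLACE INTO customer (id, name) VALUES (?, ?)")) {
            ps.setString(1, rec[0]);
            ps.setString(2, rec[1]);
            ps.executeUpdate();
        }
    }

    // Stubs standing in for whatever reads and flags the UniData log file.
    static List<String[]> readLogRecords() { return new ArrayList<String[]>(); }
    static void flagProcessed(List<String[]> batch) { }
}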

Cheers,

Phil.



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mike Randall
Sent: Saturday, 29 July 2006 2:01 p.m.
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Replication Between Unidata and MySQL

First thing that comes to mind is an update trigger on the Unidata side
that captures every write attempt.  The trigger could compare the before
and after versions of the record and execute a call to a SQL update
process that you come up with.  The trigger could normalize your data
(or whatever you needed done) and could be added with no impact to your
application.

Mike 

From: Kevin King [EMAIL PROTECTED]
Reply-To: u2-users@listserver.u2ug.org
To: u2-users@listserver.u2ug.org
Subject: [U2] Replication Between Unidata and MySQL
Date: Fri, 28 Jul 2006 14:30:56 -0700

I have been assigned a unique project and have been given some pretty
stringent requirements.  Basically the project involves a subset
replication of a Unidata database into MySQL.  As certain records
change in Unidata (6.0) that record is to be flushed to a separate
server running MySQL.  Off-hours batch updates are not an option at
this point, inconsistency between systems is intended to be momentary
at best.

I can handle the conversion and flattening of the data; that's
certainly no deal breaker but keeping the MySQL server updated on a
near-realtime basis has me a bit freaked out.  Has anyone handled this
kind of thing?  Is there a connector to MySQL that can be addressed
from within Unidata?  I could certainly do something with ODBC I would
figure but with Unidata running on AIX I'm not sure how exactly I'd
move the data to the MySQL server (running on a Win box) directly.

Other options off the top of my head include

* ...using a http server to update the MySQL instance and using the 
callHttp interface or...
* ...writing a TCP listener to do the updating of the MySQL instance 
and using Unidata sockets to move the data.

However, these will necessitate a bit of code that I'd prefer to
avoid.  
What would you do?
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL

2006-07-28 Thread Kevin King
From: Of Mike Randall
Sent: Friday, July 28, 2006 5:46 PM
...The trigger could normalize your data (or whatever you
needed done) and could be added with no impact to your application.

I wouldn't say "no impact" per se, but I think the trigger idea has
merit overall as a minimally invasive change watcher.  But then what?
What's the best way to get the deltas into the MySQL instance?
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/


RE: [U2] Replication Between Unidata and MySQL [ad]

2006-07-28 Thread Tony Gravagno
Kevin, you can use triggers for the first part as Adrian suggests.  As
always I'll recommend mv.NET to do the second part.  When you put your data
into a queue file, you can simultaneously log an action item into a queue
for mv.NET.  This will tell a new external routine what to pick up from the
queue file and what to do with it.  In this case, read data from the queue
file, update MySQL, and on confirmed update remove the items from the UD
queues - you don't need to worry about lost updates.

Yes, some code is required, but such is the price we pay for sophisticated
data manipulation between environments.

The MVExec freeware on my website requires mv.NET to communicate with the
DBMS but here is how it can be used in this case:
- You have a program running over Windows in any language of your choosing,
Perl, PHP, VB, Java, etc.
- You use MVExec to query for data and pull a single record over if there
is anything scheduled to go to MySQL.
- You query MySQL and post the update.
- You use MVExec again to remove the trigger item.
Loop as required.
This process can be streamlined to pull the entire queue from unidata in
the form of a large SQL INSERT or UPDATE query.  Then all you need to do is
execute the query.  On success just delete the UD items.
Similarly you can loop on the server data, build one long query on the
windows side, then just execute it.
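
(A hedged sketch of that streamlining in Java/JDBC, using statement batching
rather than literally concatenating one giant query: every queued row goes into
a batch that is executed in a single round trip, and the UD queue items are
only deleted afterwards.  The column names and the queue-access helpers are
assumptions; the pull/delete steps stand in for whatever mechanism - mv.NET,
MVExec or otherwise - you use to reach UniData:)

import java.sql.*;
import java.util.*;

public class BatchedPush {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection my = DriverManager.getConnection(
                "jdbc:mysql://winbox/replica", "user", "pw")) {
            List<String[]> queue = pullQueueFromUniData();          // hypothetical queue read
            try (PreparedStatement ps = my.prepareStatement(
                    "INSERT INTO customer (id, name) VALUES (?, ?) "
                  + "ON DUPLICATE KEY UPDATE name = VALUES(name)")) {
                for (String[] row : queue) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();                                  // accumulate the whole queue
                }
                ps.executeBatch();                                  // one round trip to MySQL
            }
            deleteQueueItems(queue);                                // hypothetical: clear the UD queue on success
        }
    }

    // Stubs standing in for the UniData queue access.
    static List<String[]> pullQueueFromUniData() { return new ArrayList<String[]>(); }
    static void deleteQueueItems(List<String[]> queue) { }
}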

As you can see, there are a number of ways to implement this.  The stopgap
in the current thinking is the idea that Unidata needs to communicate with
the remote server.  If you leave the work to a middle tier, which is where
the MySQL environment is running anyway, then all of the problems go away.

Personal comment: In a weird sense I'm getting as tired of recommending
mv.NET as some of you people are probably tired of seeing the
recommendations, and I often just don't jump in to offer related solutions
for just this reason.  But doesn't it tell us something that there is a
consistent answer for so many of these communications problems?  Please
remember that I came to this software as a user because I didn't find
anything else in our market that answered all the questions.  Obviously
after all of these years people are still asking all of the same questions!
So after investigating this software and becoming comfortable with its
depth I decided to sell it and related services.  I did not just jump
into this market as a vendor, and my goal is not to just sell software.  I
try to share solutions that I've found to common problems and I hope some
people here will benefit.

Kevin, I'd be honored to work with you to make this happen.

Tony
TG@ removethisNebula-RnD.com

Kevin King wrote:
 Good stuff Adrian.  I've pretty much decided on Unidata triggers to
 figure out what changed and write to a queue file and then have some
 program pulling from that queue to flush to MySQL.  But I was hoping
 that I could do a lot of this in Unidata and I'm fearing I'm gonna
 have to write something in AIX that pushes to MySQL.  Not that it's
 all that difficult, but damn I've been spoiled by Unidata.


Adrian wrote:
 However, these will necessitate a bit of code that I'd prefer to
 avoid.  What would you do?
 
 We use something similar to triggers.  Instead of writing directly to
 a file, our file writes are via a subroutine.  This subroutine writes
 to the file and writes what we call replication records to another
 file.  We actually write a header and a data record (for a delete
 there is only the header record).  If I was doing it again I would
 look closely at UD triggers.
 
 Then another process runs almost all the time polling this file and
 sending the records.  If you wanted to cut down the lag you could do
 both, attempt a direct send and if this fails, cache locally for
 delayed send.  You would have to be careful with versioning to ensure
 a subsequent direct send didn't get clobbered by a delayed cached
 message update.
 
 Gotchas.
 If the destination box or the transfer process are down, your local
 message cache can build up really quick - make sure the stop, clear
 and recovery process are well understood.  Its bad news when
 replication to another box takes down your production server.
 
 Lost updates.  You may need some kind of validation/recovery process.
 
 We currently move the messages with scripts at the os level (linux)
 and its a bit clumsy but I'm in the early stages of looking into using
 jms and the apache activemq software.
 
 HTH
 
 Adrian
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/