I'm starting an interesting project for my company.  We have a large, complex 
proprietary legacy database with a considerable suite of backend software that 
maintains it.  The client applications are NT-based and access its flat files 
through Windows Networking or NetWare.

Our mission is to turn it into client/server software, supporting multiple tiers of 
servers over TCP/IP.  Also, we'd like to produce deliverables as we go, rather than 
making it one huge monolithic project.

What I've decided to do is create a server that encapsulates the legacy database 
(which in the short term will not change) and emulates a simplified PostgreSQL 
server.  This way we get immediate SQL connectivity for new client applications, 
and the ODBC/JDBC drivers are already done.  Initially this is happening under NT, 
but I have fantasies of porting to Linux.

I've already got quite a bit done, and have code acting as a "passthrough" to a real 
PG server, and displaying a line of tracing information for each message going back 
and forth.  It's fully multithreaded.  Next we'll start plugging in the legacy 
database and parsing the requests and delivering appropriate responses.  Of course 
this will never be a general-purpose SQL server, as it only needs to deal with a 
particular kind of database.  We'll eventually design some kind of protocol so that 
the server can replicate its data to another server at a lower tier.
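To make the passthrough idea concrete, here is a minimal sketch of that kind of tracing proxy in Python (my actual code is NT-native and multithreaded per connection; this is just an illustration of the framing). It assumes the v3 frontend/backend protocol, where every message after the untyped startup packet is a 1-byte type code followed by a big-endian int32 length that includes itself; it ignores SSL and cancel-request negotiation, which also use untyped packets. The port numbers and host are placeholders.

```python
import socket
import struct
import threading

def read_exact(sock, n):
    """Read exactly n bytes from sock, or raise if the peer closes."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def relay(src, dst, label, first_is_startup=False):
    """Forward protocol messages src -> dst, printing one trace line each."""
    try:
        if first_is_startup:
            # Startup packet: int32 length (includes itself), no type byte.
            # (SSLRequest/CancelRequest use the same untyped framing; not handled here.)
            head = read_exact(src, 4)
            (length,) = struct.unpack("!I", head)
            body = read_exact(src, length - 4)
            print(f"{label} Startup len={length}")
            dst.sendall(head + body)
        while True:
            # Regular message: 1-byte type code + int32 length (includes itself).
            head = read_exact(src, 5)
            mtype = head[:1].decode("ascii", "replace")
            (length,) = struct.unpack("!I", head[1:])
            body = read_exact(src, length - 4)
            print(f"{label} {mtype} len={length}")
            dst.sendall(head + body)
    except (ConnectionError, OSError):
        src.close()
        dst.close()

def serve(listen_port=5433, pg_host="localhost", pg_port=5432):
    """Accept clients and spawn a pair of relay threads per connection."""
    ls = socket.socket()
    ls.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    ls.bind(("", listen_port))
    ls.listen(5)
    while True:
        client, _ = ls.accept()
        server = socket.create_connection((pg_host, pg_port))
        threading.Thread(target=relay, args=(client, server, "C->S"),
                         kwargs={"first_is_startup": True}, daemon=True).start()
        threading.Thread(target=relay, args=(server, client, "S->C"),
                         daemon=True).start()
```

Replacing the real backend with the legacy-database engine then amounts to swapping the server socket for a request parser that fabricates the appropriate response messages.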

I don't know what to call this kind of approach... it makes a lot of sense to me, but 
has anyone done this before?  All comments are welcome.

Thanks!

-- Rod
