Jerry and All,
 
We did something similar to Brian.  All of our Basic code was redesigned to
have six arguments: PARAM1, PARAM2, PARAM3, PARAM4, RETURN.INFO, and METHOD.
That way we could automatically test any program from any other program,
knowing every program had the same six arguments.  For reports, PARAM2 was
where all of the data was passed in as a dynamic array, and the output used
RETURN.INFO.  Entry programs passed the updated data in on PARAM2 and passed
data out on RETURN.INFO using an open source format called JSON.  JSON is
not as verbose as XML, and JavaScript and Java handle the format natively by
converting it to an array.  We Basic programmers do like to work with arrays
of data.
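A skeleton along these lines (the subroutine name and field layout below are
just for illustration; only the six arguments are our real convention):

SUBROUTINE ORDER.REPORT(PARAM1, PARAM2, PARAM3, PARAM4, RETURN.INFO, METHOD)
* PARAM2 carries the input as a dynamic array, one value per attribute
CUST.ID = PARAM2<1>
START.DATE = PARAM2<2>
* ... select and format the data ...
* RETURN.INFO carries the output; for entry programs it is a JSON string
RETURN.INFO = '{"custId":"' : CUST.ID : '","status":"OK"}'
RETURN
END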
 
We used METHOD to tell the subroutine what it should be doing (e.g.
CreateReport, LoadDefaults, ReadData, WriteData, or BuildGrid).  That way
the programs became very similar in structure; the two or three programs
that supported each entry screen were all reduced to a single program.
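
Inside each subroutine the dispatch on METHOD is just a CASE construct,
something like this sketch (the GOSUB targets are illustrative):

SUBROUTINE ENTRY.SCREEN(PARAM1, PARAM2, PARAM3, PARAM4, RETURN.INFO, METHOD)
BEGIN CASE
   CASE METHOD = 'LoadDefaults'
      GOSUB LOAD.DEFAULTS
   CASE METHOD = 'ReadData'
      GOSUB READ.DATA
   CASE METHOD = 'WriteData'
      GOSUB WRITE.DATA
   CASE 1
      RETURN.INFO = 'Unknown METHOD: ' : METHOD
END CASE
RETURN

LOAD.DEFAULTS:
   * put default field values into RETURN.INFO
   RETURN
READ.DATA:
   * read the record keyed by PARAM2 and return it in RETURN.INFO
   RETURN
WRITE.DATA:
   * write the updated record passed in on PARAM2
   RETURN
END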
 
We had to learn how to debug the Basic code, the JavaScript, and the HTML
code.  You cannot use the interactive debugger on the Web.  We developed
what we call a logger: instead of using a DEBUG statement or a simple CRT
statement to see what the variables are and where you are in the code, you
call a subroutine that writes to a log that can be viewed either from Telnet
or from our Eclipse-based Basic editor.  Debugging JavaScript requires a
great open source tool called Firebug that runs on Firefox.
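
The logger itself is nothing fancy; a minimal version (the file and
subroutine names here are illustrative, not our actual ones) just appends a
timestamped line to a daily log record:

SUBROUTINE LOGGER(WHERE, MSG)
* Append a timestamped line to today's log record
OPEN 'DEBUG.LOG' TO F.LOG ELSE RETURN
LOG.ID = OCONV(DATE(), 'D4-')               ;* one record per day
READU REC FROM F.LOG, LOG.ID ELSE REC = ''
REC<-1> = OCONV(TIME(), 'MTS') : ' ' : WHERE : ' ' : MSG
WRITE REC ON F.LOG, LOG.ID
RETURN
END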
 
We had upwards of 2,000 programs running Accounting, CRM, Distribution,
Document Management, Payroll, Transportation, and Warehousing, all running
as "green screen" applications.  We stripped out all of the screen I/O,
which reduced the programs to about a third of their previous size.  We
changed all of the code from whatever it was to subroutines using our six
arguments.  Perhaps the most problematic part was that all of the reports
had to be switched to HTML format.  That process was the most time consuming
due to the amount of code that needed to be changed.  However, it also gave
us the opportunity to reduce the number of reports we produced: we
consolidated similar reports into a single report with multiple report
options.
 
Some programs took minutes, some took days, and some took weeks.  Some of
the code was over 25 years old and required a complete rewrite because of
the GOTOs and calls to various green screen subroutines that no longer
existed.  Since we have customers that run on both UniVerse and UniData, we
had to develop techniques to run the same code on both platforms.  We
develop all of our software on UniVerse and port it to UniData.  Our
Eclipse-based installer comments out the UniVerse-specific code and
uncomments the UniData-specific code, depending on the destination machine.
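
As a hypothetical illustration of the technique (our installer's actual
markers differ), platform-specific lines can carry a marker comment that the
installer toggles:

* UniVerse line active; the installer flips these when deploying to UniData
LOCATE ID IN ID.LIST<1> SETTING POS ELSE POS = 0   ;* UniVerse syntax
*UD LOCATE(ID, ID.LIST; POS) ELSE POS = 0          ;* UniData syntax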
 
I should note that our code was first written for RedBack running on IIS.
After a few years I decided that platform was not going in the direction we
needed to go, so we ported to our own middleware built on open source Apache
Tomcat.
 
We accomplished this over a period of a year with four programmers.  We now
have around 400 programs that run all of the applications listed above.  We
are constantly evolving our interface.  We just switched from a home-grown
cross-reference to an open source tool that lets us load 50,000 records in
under a second and has built-in filtering, column sorting, and paging.  The
Web is truly an amazing place where you can get open source software that
would have taken us weeks to months to write.
 
Regards,
Doug
www.u2logic.com

  _____  

From: [email protected]
[mailto:[email protected]] On Behalf Of jpb-u2ug
Sent: Thursday, June 11, 2009 7:23 AM
To: 'U2 Users List'
Subject: Re: [U2] UniVerse Unit Testing



Doug and Brian,

Could you give me some numbers on how long and how many people (man hours)
it took to do the changes? Approximately how many programs did you have to
convert to the new way and what did you end up with?

 

Jerry Banker

 

From: [email protected]
[mailto:[email protected]] On Behalf Of Perry Taylor
Sent: Thursday, June 11, 2009 7:55 AM
To: U2 Users List
Subject: Re: [U2] UniVerse Unit Testing

 

Brian,

 

You say that you "designed all our server code as subroutines such that all
of our subroutines had one of two calling interfaces".  This would seem to
mean that you built and maintained two different versions of every external
subroutine/function.  Is this correct or am I just missing something?

 

Thanks.

 

Perry

 

  _____  

From: [email protected]
[mailto:[email protected]] On Behalf Of Brian Leach
Sent: Thursday, June 11, 2009 3:19 AM
To: 'U2 Users List'
Subject: Re: [U2] UniVerse Unit Testing

Hi

 

 

At my last company, we spent a lot of effort building an automated test rig
for our software, because we had to support multiple platforms and all our
code required full regression testing.  It may be a slightly different
scenario from yours, since we were primarily building tools, and it was also
complicated by the fact that all of our software was client/server in some
way and usually involved several languages... but here is our experience for
what it's worth:

 

 

The bad news is that you really need to design these in from the start.

 

We designed all our server code as subroutines such that all of our
subroutines had one of two calling interfaces, either:

 

Subroutine name(InData, OutData, ErrText)

 

or

 

Subroutine name(Action, InData, OutData, ErrText)

 

That meant that we could generate a test rig that could feed the InData (and
Action) and then test for the OutData and log any ErrText values.
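
As a sketch, a driver for such a rig might look like this (the test-file
layout and names here are invented for illustration):

SUBROUTINE RUN.TESTS(TEST.FILE.NAME)
* Each test record names a target routine, its inputs and expected output
OPEN TEST.FILE.NAME TO F.TESTS ELSE STOP 'Cannot open ' : TEST.FILE.NAME
SELECT F.TESTS
LOOP
   READNEXT TEST.ID ELSE EXIT
   READ TEST FROM F.TESTS, TEST.ID ELSE CONTINUE
   TARGET = TEST<1>     ;* catalogued subroutine under test
   Action = TEST<2>
   InData = TEST<3>
   Expected = TEST<4>
   OutData = '' ; ErrText = ''
   CALL @TARGET(Action, InData, OutData, ErrText)
   IF OutData = Expected AND ErrText = '' THEN
      CRT 'pass ' : TEST.ID
   END ELSE
      CRT 'FAIL ' : TEST.ID : ' ' : ErrText
   END
REPEAT
RETURN
END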

For reports, we would capture the report text and do 'spot checks' on the
expected results.

 

 

We also version-stamped our routines, so we were certain we were testing the
right versions, and had build scripts to recompile everything.  Nothing was
left to manual operation, since that opens up the opportunity for something
to be forgotten: there is no point testing one build in QA and then doing
something different when you come to release!  Incidentally, since this was
client/server, the build scripts involved VBScript on the client end calling
paragraphs on the server side.
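
For example, one way to expose the stamp to the rig itself (just a sketch;
the names and version string are illustrative) is to reserve an Action for
it:

SUBROUTINE GET.CUSTOMER(Action, InData, OutData, ErrText)
EQU VERSION TO '2.4.1'
BEGIN CASE
   CASE Action = 'Version'
      OutData = VERSION   ;* rig can verify the build before testing
   CASE Action = 'Read'
      GOSUB READ.CUSTOMER
   CASE 1
      ErrText = 'Unknown action: ' : Action
END CASE
RETURN

READ.CUSTOMER:
   * ... the real work ...
   RETURN
END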

 

 

Because UniVerse code doesn't break down into simple blocks, your only
sensible option is to unit test at the subroutine/external function level -
unless you want to instrument your code, capture all your file I/O (which is
possible) and test against that.

 

 

The good news is that because UniVerse caches subroutines in memory, the
overhead of breaking out code is not as high as on systems that do not.  It
also means you end up with a more manageable system, better options for
reuse, and an easier migration if you adopt different client front ends.
You may also find that your code mass reduces as you split these out,
because there is less duplication (sorry if I'm stating the obvious here),
and so your testing domain is reduced as well.

 

 

If you want clean-room regression testing, I highly recommend Virtual PC, if
it will support your OS.  We kept clean images of all the platforms we
supported, which was a huge time saver.  One nice thing about VPC is that it
supports 'undo disks': you can snapshot the image at a particular point, and
any subsequent changes (e.g. brought on by software loads for testing) are
physically and transparently stored outside the virtual disk.  At the end
you choose whether or not to commit those changes, making it very easy to go
back if that version didn't pass.

 

 

Finally, having a predictable way to load routines from dev to QA and from
QA to live is a must - so I'll put in a very small [AD] for mvInstaller...

 

Regards

 

Brian

 

 

 

  _____  

From: [email protected]
[mailto:[email protected]] On Behalf Of Perry Taylor
Sent: 10 June 2009 20:33
To: [email protected]
Subject: [U2] UniVerse Unit Testing

The powers that be have been discussing the possibility of going to a unit
test model for QA.  As I understand the concept, portions of code are broken
down into smaller, manageable chunks against which a dedicated unit test may
be run.  This seems like a good idea in an object oriented world where the
methods of an object can be easily invoked.  It would seem less practical
with a procedural language like BASIC.

It feels like we would end up breaking out thousands of lines of code into
external subroutines which could then be run through a dedicated unit test.
This would introduce significant overhead with all the CALLs to hundreds
(thousands) of external subroutines.  Then there are complications such as
variables in named common, etc.

Is anyone out there in MV land employing serious unit testing?  If so, care
to share your experiences, concerns, success stories?

Thanks. 

Perry Taylor 
Zirmed, Inc. 

_______________________________________________
U2-Users mailing list
[email protected]
http://listserver.u2ug.org/mailman/listinfo/u2-users
