We do all our insurance math with C programs, and we have very good performance in general, but:
we don't use the decimal extension; all our numbers are ints (long or short) and doubles (floating point). The results are converted to decimal when we pass them to the surrounding insurance software, which is written in PL/1 and ASSEMBLER. I once saw the ASSEMBLER code generated for C decimals, and it was really horrible: for every addition and subtraction of decimals, a subroutine was called instead of using the appropriate machine instructions. This is a real nightmare. It is better for PL/1 and also for COBOL, I believe. So this could be part of your problem, if your C code uses decimals.

Small functions are no problem; if they are part of the same compile unit, the compiler can inline them, that is, copy the body of the function to the position of the call instead of doing a real call. This is done frequently and should help in reducing the overhead. But: it cannot be done for external functions. So little external functions which are called frequently can be a problem due to the call overhead. In this case it is probably better to use macros, if possible.

Simple ANSI library functions like memcpy, memset, and strcpy are often implemented by inline machine instructions, with no LE call required. So there should be no problem in this area, either.

Because I do a lot of dump analysis, I frequently look at the machine code generated by the various compilers. My impression is that there were indeed some performance problems in the last 5 years or so, but the newest versions (of C) do a very good job, especially when the new machine instructions for string handling and so on can be used (ARCH option). I believe that IBM development uses C too, so they will demand a good compiler in their own interest. At least, I hope so.

Kind regards
Bernd

On Wednesday, 7 September 2005 21:49, you wrote:
> Hi John,
>
> It would be helpful to know a _little_ about what this C/C++
> code does. For example, C I/O is byte-oriented, which is generally
> a poor idea on the mainframe.
> If your C/C++ programs are doing I/O,
> and if the code was ported unchanged from another system, then that
> could be your issue.
>
> If, on the other hand, the C/C++ program has to do "decimal"
> operations, COBOL might be a better choice (although IBM and
> Dignus C have extensions to generate packed-decimal instructions).
>
> If your C code has a lot of small functions, then the runtime
> overhead of a function call can be significant. We like to think
> our overhead is smaller than that of the LE runtime, so we could
> help there; but you might want to try the XPLINK linkage with the
> IBM compiler to see if that helps. The "lots of little functions"
> phenomenon is particularly common with C++.
>
> But - as far as computation goes, I can't imagine COBOL can
> add two register-sized integers any faster or slower than C can.
>
> - Dave Rivers -
>
> John Fly wrote:
> > I have been trying to find a reasonable answer to this behavior, but
> > as of yet I cannot find a definitive resource to tell me why.
> >
> > If anyone here has suggestions as to where to look, I would be
> > forever grateful for any assistance you could provide.
> >
> > I realize that this post is lacking true details; if there is any
> > piece of information needed, let me know and I will explain as best
> > I can.
> >
> > Thank you,
> > JF

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

