On 4/5/2011 7:08 AM, Angel Luis Domínguez wrote:
Thanks everybody for the clues.
Some answers:
1) Both the assembler and the COBOL routines are sub-programs. They were called directly from
a main program built in COBOL for testing. This main program measures
CPU before and after each call.
2) I have made a
That reminds me of an old story where someone tried to test
the performance of programming languages by coding the following
short program in PL/1 and ASSEMBLER (this was on the now historic
Telefunken TR 440 hardware):
SUM = 0;
DO I = 1 TO 10;
SUM = SUM + I;
END;
PUT SKIP LIST (SUM);
Bernd
Since we're telling stories, this reminds me of a time in the early 1970s
when IBM was pushing PL/I.
I was in a technical support centre and a salesman called me in order to
ask - presumably to pass on - a question. He said that it was known that PL/I
carried an overhead, but what was the
Well,
there is a real issue to discuss behind all those stories, IMHO. Due to
pipelining, the proper ordering of instructions in the instruction stream
becomes more important on modern hardware, and that is something that a human
coder cannot do as well as a good optimizing compiler can. So I
On Apr 4, 2011, at 11:04, Tony Harminc wrote:
On 4 April 2011 08:01, Joe Owens joe_ow...@standardlife.com wrote:
One question occurs - must I now use extended save areas, as I am doing
something to the top halves of the GPRs, or will the system take care of
that for me? (There are no amode
My apologies to the list if my point is redundant, as I have not read all the
previous entries in this thread. But perhaps the word "overhead" has more
than one meaning. What is the overhead to the organization of supporting an
Assembler language programmer, in salary and machine time, to code
On Tue, Apr 5, 2011 at 2:27 PM, Fred van der Windt
fred.van.der.wi...@mail.ing.nl wrote:
Which means it is unfortunate that Angel isn't able to post the code.
Something must be awfully wrong with the assembler code if it is twice as
slow as the Enterprise COBOL code...
Guess it is possible
Just a thought, impossible to prove or disprove without sight of the code.
If the Assembler references (and modifies) data from within 256 bytes of the
instruction on the latest IBM z-Gizmos, does it not take a cache line hit or
some such? And does that not cripple the instruction execution
I have no knowledge about this but as Angel wrote:
I have made a revision of assembler generated by cobol but there are a
lot of LE modules involved.
Could it be that these LE modules put a zIIP engine to work using SRB enclaves?
That CPU time is presumably not shown in the z/OS statistics?
On 4/5/2011 4:53 PM, Thomas Berg wrote:
I have no knowledge about this but as Angel wrote:
I have made a revision of assembler generated by cobol but there are a
lot of LE modules involved.
Could it be that these LE modules put a zIIP engine to work using SRB enclaves?
That CPU is presumably
The problem is not referencing data with the instruction cache line, it is
modifying data. A line (256 bytes) can be in both the I-cache and D-cache
as long as it is not modified.
There can be a savings in the total number of cache lines used if the code
area is aligned on a 256-byte boundary
I'm sure that an optimizing compiler can do an amazing job, but in this case
it was Enterprise COBOL vs hand-coded Assembler. I have a hard time believing
that COBOL is faster than assembler in any scenario. Enterprise COBOL is our
'main' programming language and I haven't seen it perform
On Tue, Apr 5, 2011 at 7:53 AM, Thomas Berg thomas.b...@swedbank.se wrote:
I have no knowledge about this but as Angel wrote:
I have made a revision of assembler generated by cobol but there are a
lot of LE modules involved.
Could it be that these LE modules put a zIIP engine at work using
Very good point; the function prologs and epilogs of LE-conforming
functions (or procedures) in PL/1 or COBOL programs just modify
next available byte pointers into the preallocated storage area for
all procedures and do no individual GETMAIN/FREEMAIN of their
save areas etc.
But anyway: if
I'm not really disagreeing with any of the comments on this
interesting thread. My $.02:
- The influence of good application architecture, design, and
implementation is nearly always more important than the choice of
language.
- Good developers usually pick the right tools for the job.
[Comments below]
-----Original Message-----
From: IBM Mainframe Assembler List [mailto:ASSEMBLER-
l...@listserv.uga.edu] On Behalf Of Kirk Wolf
Sent: Tuesday, April 05, 2011 4:17 PM
To: ASSEMBLER-LIST@LISTSERV.UGA.EDU
Subject: Re: CPU: ASSM vs ENTERPRISE COBOL
I'm not really disagreeing
On 5 April 2011 18:04, Bernd Oppolzer bernd.oppol...@t-online.de wrote:
Very good point; the function prologs and epilogs of LE conforming
functions (or procedures) in PL/1 or COBOL programs just modify
next available byte pointers into the preallocated storage area for
all procedures and do
On Tue, 5 Apr 2011 15:37:47 +0100, Mike Kerford-Byrnes m...@hill-leys.com
wrote:
Of late, I have been instinctively inserting a 256-byte filler between the
last instruction and the LTORG. In my test environment it makes not a jot
of difference, but on the big boxes... Well, it does no harm.
W. Kevin Kelley noted:
Yes, that is a good idea, although it would be nice if the Assembler
made it easier to put things on 256-byte boundaries.
Have you tried the SECTALGN(256) option, and the extended ORG
boundary-alignment operands? You can write
ORG target,boundary,offset
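For example (a sketch only; see the HLASM Language Reference for the exact operand rules), assembling with SECTALGN(256) and using the boundary operand of ORG:

```hlasm
* Assemble with the SECTALGN(256) option so the section itself
* can start on a 256-byte boundary.
MYPROG   CSECT
* (program code and literals here)
         LTORG ,
         ORG   *,256            round location counter up to 256 bytes
WORKAREA DS    XL256            modified data gets its own cache line
         END
```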