Re: [fpc-other] Stanford Pascal Compiler successfully ported to Windows, OS/2 and Linux

2016-12-28 Thread Florian Klaempfl
Am 23.12.2016 um 23:30 schrieb Bernd Oppolzer:
> 
> Hello FPC list,
> 
> I would like to inform you that I ported an improved version
> of the Stanford Pascal compiler (a descendant of the Wirth P4
> compiler) to Windows, OS/2 and Linux.
> 

Glad to hear that FPC is not the only oss/free pascal compiler being
developed :)

___
fpc-other maillist  -  fpc-other@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-other


Re: [fpc-other] Stanford Pascal Compiler successfully ported to Windows, OS/2 and Linux

2016-12-27 Thread Mark Morgan Lloyd

On 26/12/16 19:00, Bernd Oppolzer wrote:

A thought, without its being a direct comment on any part of what you've 
written. Programming texts from Wirth onwards tended to muddle pointers 
and dynamic memory allocation together, so in effect assumed that 
pointers would /only/ come out of the heap manager and hence could be 
checked for consistency in that context. Early Pascal compilers neither 
had an addressof() function nor expected that the underlying OS would 
return a pointer as a syscall result.


When teaching/supporting I've found it useful to make the opposite 
assumption, i.e. starting off with a hypothetical OS call that returned 
a pointer and showing students what they could do with it, and only 
later introducing trees etc. on the heap.


Apart from that, I've certainly seen mark/release being the dominant 
mechanism in 1980s compilers, and even Turbo Pascal 3.0 (circa 1986) 
supports it, with the explicit warning:


"NOTICE that Dispose and Mark/Release use entirely different approaches 
to heap management - and never shall the twain meet! Any one program 
must use either Dispose or Mark/Release to manage the heap. Mixing them 
will produce unpredictable results."


Turbo Pascal 5.0 (circa 1989, and not long preceding Delphi) appears to 
adopt a complex scheme to handle both approaches, and observes:


"Mark and Release cannot be used interchangeably with Dispose and 
FreeMem unless certain rules are observed [...] ."


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


Re: [fpc-other] Stanford Pascal Compiler successfully ported to Windows, OS/2 and Linux

2016-12-26 Thread Bernd Oppolzer


Am 26.12.2016 um 10:31 schrieb Alexander Stohr:



Am 2016-12-25 um 21:42 schrieb Bernd Oppolzer:

Thank you for your feedback.


Thank you for your kind answers.


You're welcome; I'm happy to meet someone who is interested in my work :-)



BTW, I had to remove some sort of self check from the
Stanford Pascal runtime; it checked (before) that pointers only pointed
into the valid range of heap addresses; because the "traditional" heap
consisted of one contiguous segment, this was very easy to implement. But I had to
remove this check, because pointers can now point to auto variables,
static variables, "new" alloc areas etc. as well.


It would still work for some applications, but they would be few.
The vast majority of real-world projects would probably break that assumption,
so if it were an option behind an enable switch, most people would disable it;
for that reason alone it is probably not worth keeping.


The Pascal documents say that they implemented it in a very basic manner,
just sufficient for the needs of the compiler, and that future implementors
are free to replace the storage management with more sophisticated solutions.
The heap elements allocated by the compiler are - for example - all freed
when a certain block is completely compiled; that is, the internal lists of
definitions are freed because they are no longer needed. I kept the
mark/release logic because the compiler still uses it. The new/mark/release
areas are completely different from the alloc/free areas.

The whole mark/release area is allocated at startup and cannot be enlarged
(4 MB minus stack at the moment; this can be configured). The alloc/free area
is allocated on an as-needed basis in 64 KB chunks, limited to ca. 8 to 10 MB
due to address range limitations; it could grow much larger if 31-bit
addressing were possible.

The reason I had to add this to the Pascal runtime was that the original
Stanford Pascal runtime only had the functions new, mark, and release.
Release frees all storage that has been requested since the last mark call -
but there is no individual "free" of areas. I wanted to port an application
to Stanford Pascal which required individual allocs and frees (like C
malloc/free), so I decided to add alloc/free and leave new/mark/release
untouched for the moment, because it is used in the compiler.


So it was created more like a stack allocator. The allocations were local
to the function or context they were made in, and at some waypoint
(exit/return) all of them were released. That is not a generic universal
heap design; in a generic heap, gaps from items that are allocated but no
longer used grow over time, and a persistent heap would need some garbage
collection (time loss!) every now and then. With the design above, however,
growth and shrinkage are determined by the code path.

Under some conditions the growth might vary considerably, but the shrinkage
is always very fixed, while being much rarer.
(I feel a little similarity to older stack-unwinding concepts in C
exception/resume features.)


I see you were wise to keep those items out of your new code for the
project, while keeping them untouched for the moment in the existing code
where they don't interfere.


Do you see a good chance to use some larger existing code bases and
test suites for verifying the compiler?
Do you have some heap-tracking functionality inside, so that e.g. "1234
heap bytes lost" is printed at exit?

Is there a debug option for tracking the maximum stack size?
Is there something for detecting out-of-bounds writes/accesses to
stack/heap objects?
(thinking of magic-word fences in between, and of heap sanity checking)


The first test for the compiler is always the compiler itself (first and
second pass); it should compile itself again and again and yield the same
results. Then I collected over time some 30 test cases which cover
different areas, especially the new features that I added. I am a big fan
of test-driven development, so I often added new statements and features
which first led to a compiler error, and then I implemented them until
they worked as they should. Now these test cases are kept for regression
testing.


For the LE heap management, which is a sort of addendum to the Pascal
runtime (the compiler doesn't need it):

there are functions that give statistics on heap usage at the end of the
process or at any point in time in between;

there are functions that check the heap for integrity (same checks as
suggested by the IBM paper - the LE heap management technology is a
product of IBM Watson Research Center, see the presentation link some
days ago).

I wrote a program (in ANSI C) to check for memory leaks, which works with
the "normal" LE heap management (as provided by IBM); you call this
program twice at different points in time, and it tells you which areas
have been allocated but not freed in the meantime; this was very helpful
for finding memory leaks for my customers (insurance 

Re: [fpc-other] Stanford Pascal Compiler successfully ported to Windows, OS/2 and Linux

2016-12-25 Thread Mark Morgan Lloyd

On 25/12/16 10:00, Alexander Stohr wrote:

Am 2016-12-24 um 15:20 schrieb Bernd Oppolzer:



The first answer to such operators often is: use the right container
for the value you need, and avoid all those operators.


Bear in mind that this is very old code that Bernd is maintaining, and 
long predates any concept of objects or of containers other than Pascal 
records.



Does the lower end of that module touch a system API level for that?
(e.g. the Linux ABI, the OS/2 system personality, the Win32 API - rather
than malloc()/free() as the C standard library provides it)


Again, bear in mind that one of his major targets is a late-70s vintage 
IBM CMS, which predates most APIs as we know them (in fact, long 
predates the terms API and ABI themselves :-)




Re: [fpc-other] Stanford Pascal Compiler successfully ported to Windows, OS/2 and Linux

2016-12-25 Thread Alexander Stohr



Am 2016-12-24 um 15:20 schrieb Bernd Oppolzer:

Am 24.12.2016 um 14:21 schrieb Mark Morgan Lloyd:

On 24/12/16 12:30, Bernd Oppolzer wrote:


Regarding ^:

"my" compiler supports different representations for the pointer symbol,
and for other critical symbols, too:

^  @  ->   for the pointer symbol (I use -> most of the time)
[   (.   (/   for arrays
{   (*  /*   for comments  ("comment" is supported, too, for historic
reasons)


/* was the form used in the first edition of Wirth's description of 
Pascal (might have been before Jensen was there to help out).


I converted all relevant sources (including the compiler itself) to ->,
so this would be ok for me.

This is indeed a very well known C style.

PTRADD, PTRDIFF and so on ...
These are not C style, but with a minor macro helper something like this
is quickly added to a typical C program.
The difference between two pointers is defined as a number of elements
of the pointed-to type.
The addition or sum of two pointers, however, lacks a straightforward
meaning for me.
Advanced coding-style recommendations (e.g. the MISRA standards) would
tell you not to use such pointer math, because such constructs have a
higher chance of misleading the programmer and are thus more likely to
lead to code with functional bugs.
The first answer to such operators often is: use the right container
for the value you need, and avoid all those operators.
And the second answer is: if you really have the true, rare need (e.g.
for systems programming, ring buffers, atomic operations, code size +
register savings + extreme efficiency), then keep this code and data
very isolated, very well documented, and finally deeply tested for all
operating conditions.
Maybe Pandora's box, but I need this to do some of the more
systems-related work. For example: I rewrote the LE storage management in
Pascal and made it available to the Stanford Pascal runtime (new
functions ALLOC and FREE); this works perfectly with Hercules and VM; it
still has to be tested on Windows, Linux and OS/2.
Does the lower end of that module touch a system API level for that?
(e.g. the Linux ABI, the OS/2 system personality, the Win32 API - rather
than malloc()/free() as the C standard library provides it)
I added /* to the list of allowable symbols for comments, because it's
what I knew from PL/1 and C. Comments have to be terminated by the same
symbol which started them, and they may be nested (as with FPC).
From a C-program portability view I would further vote for support of
the single-sided "//" comment operator.
On the other side, pascal-to-C converters (p2c), which go the other way
around, have existed for ages; they would have a small problem if such a
token were suddenly added. If the sources of the tool are available, then
someone in need of support could solve it easily.


just my 2 euro cents.

regards and good christmas times.
-Alex.


Re: [fpc-other] Stanford Pascal Compiler successfully ported to Windows, OS/2 and Linux

2016-12-24 Thread Bernd Oppolzer


Am 24.12.2016 um 12:50 schrieb Mark Morgan Lloyd:

On 24/12/16 11:30, Bernd Oppolzer wrote:


chars in the (character) P-Code file had to be converted to character
constants; all places where character A - for example - was represented
as numeric 193 (which is EBCDIC 'A') had to be found and corrected. Even
such places where the reference to 193 was not recognized at first
sight, that is: offsets in branch tables and bit strings representing
sets.


I think you've made creditable progress in a difficult area. What are 
you doing about PRED() and SUCC() as applied to CHARs?


Anybody with any sort of interest in mainframes is going to have to 
consider EBCDIC for quite some while, but unfortunately there are 
still people who insist that it's flawless. One of our wiki pages has 
somebody confirm that EBCDIC has ^, but he then goes on to admit that 
it's not in all codepages...



Thank you.

I think about "portability" in a certain way; to make it clear:

of course it is possible to write programs that are not portable
using my "new" compiler.

You mention SUCC and PRED with CHAR; that is a very good example.
These functions are implemented based on the underlying character set;
that means that SUCC('R') is not 'S' on EBCDIC, because there is a gap
between 'R' and 'S' in the EBCDIC code page (six other characters between
'R' and 'S' which are not alphabetic).

This is a portability problem which appears at the source code level (!)
and cannot be healed by the compiler. It is the same with the C language,
and the sensible programmer has to deal with this if he or she wants
his or her programs to be really portable.

My problems with the Stanford compiler were different. If the compiler
generates code which will not run on a platform using a different code
page - because, when implementing case statements, it generates branch
tables that imply a certain code page - this is a big problem and has to
be fixed. The compiler implementor has to find a representation (in the
P-Code, in this case) which will work on every platform, that is: one
which is independent of the code page and does not prevent the
optimizations done by the later stages of the compiler.

The same goes for the bit string representation of sets of char; in this
case, the construction of the bit string has to be deferred until the
code page can be determined (P-Code translation or interpretation time).
On Windows etc., the P-Code interpreter "translates" the P-Code to an
internal representation on startup, and that is when the "portable"
representation of set constants (of char) is translated to the bit
string representation. See my web site for details.

Regarding ^:

"my" compiler supports different representations for the pointer symbol, 
and for other

critical symbols, too:

^  @  ->   for the pointer symbol (I use -> most of the time)
[   (.   (/   for arrays
{   (*  /*   for comments  ("comment" is supported, too, for historic 
reasons)


no problem with EBCDIC. I do the editing on Windows most of the time and
move the files to Hercules using the socket reader.

BTW: you find the compiler sources and a history of the extensions that
I applied in the last months (or years) on the web site, too.

Kind regards

Bernd



Re: [fpc-other] Stanford Pascal Compiler successfully ported to Windows, OS/2 and Linux

2016-12-24 Thread Mark Morgan Lloyd

On 24/12/16 11:30, Bernd Oppolzer wrote:

Hello Mark,

on several occasions, I looked at what FPC does when I extended the
Stanford compiler, for example when I added support for direct writes
of scalars.

At one time, I recall, I decided explicitly to take another direction;
that was when I allowed shorter string constants to be assigned to
longer ones, for example:

var x: array [1 .. 200] of char;

x := 'some string';

IIRC, FPC fills with hex zeroes, but I prefer blanks - the blank
representation of the target system ... which is different on the target
systems; this should show to some of the readers here who are not
familiar with IBM mainframes some of the difficulties I had to get the
P-Code really portable ... all


I think the issue of padding partially-initialised data structures is 
something that merits wider discussion. Provided of course that we can 
avoid the sort of arcana that Paul/Kerravon is enmired in :-)



chars in
the (character) P-Code file had to be converted to character constants; all
places where character A - for example - was represented as numeric 193
(which is EBCDIC 'A') had to be found and corrected. Even such places where
the reference to 193 was not recognized at first sight, that is: offsets in
branch tables and bit strings representing sets.


I think you've made creditable progress in a difficult area. What are 
you doing about PRED() and SUCC() as applied to CHARs?


Anybody with any sort of interest in mainframes is going to have to 
consider EBCDIC for quite some while, but unfortunately there are still 
people who insist that it's flawless. One of our wiki pages has somebody 
confirm that EBCDIC has ^, but he then goes on to admit that it's not in 
all codepages...

