Sorry, I haven't had much time to answer these last few days :-(
I'll try in this post to give more details about how both Mono & .Net
compute stack depth at each instruction.
Mono only supports 100% valid IL code: it assumes that the stack
depth at any point can be computed in a single top-down pass. This
is not the case for .Net. Microsoft's implementation needs more than a
single pass to be able to compute the stack depth.
To clarify things, let me give you a simple example:
L0: ldc.i4.3
L1: br.s L4
L2: ldc.i4.5
L3: br.s L5
L4: br.s L2
L5: add
L6: stloc.0
(This code is not 100% valid IL: it would not run on Mono, but it
works fine on .Net.)
This is how Mono would try to compute the stack depth:
Before L0: 0
==> Before L1: 1
Before L1: 1
==> Before L4: 1
Before L2: unknown, so we assume it is 0
==> Before L3: 1
Before L3: 1
==> Before L5: 1
Before L4: 1
==> Before L2: 1 ==> ERROR, because we already assumed that Before
L2 is 0!
Now, this is how .Net would compute the stack depth:
Before L0: 0
==> Before L1: 1
Before L1: 1
==> Before L4: 1
Before L4: 1
==> Before L2: 1
Before L2: 1
==> Before L3: 2
Before L3: 2
==> Before L5: 2
Before L5: 2
==> Before L6: 1
Before L6: 1
==> After L6: 0
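Notice that this trace visits the instructions in control-flow order
(L0, L1, L4, L2, L3, L5, L6), not program order. A minimal sketch of
that idea, again in Python with invented names (real verifiers also
merge stack *types* at join points, which is omitted here), is a tiny
worklist that always enters an instruction with a known depth:

```python
# The example program from above: (net stack delta, branch target or None).
PROGRAM = [
    (+1, None),  # L0: ldc.i4.3
    (0, 4),      # L1: br.s L4
    (+1, None),  # L2: ldc.i4.5
    (0, 5),      # L3: br.s L5
    (0, 2),      # L4: br.s L2
    (-1, None),  # L5: add (pops 2, pushes 1)
    (-1, None),  # L6: stloc.0
]

def flow_depths(program):
    """Follow control flow from the entry point with a worklist, so
    every instruction is visited with a known incoming depth."""
    before = {}
    work = [(0, 0)]  # (instruction index, stack depth on entry)
    while work:
        i, depth = work.pop()
        if i in before:
            if before[i] != depth:
                raise ValueError(f"inconsistent depth at L{i}")
            continue  # already visited with the same depth
        before[i] = depth
        delta, target = program[i]
        after = depth + delta
        if target is not None:
            work.append((target, after))   # unconditional branch
        elif i + 1 < len(program):
            work.append((i + 1, after))    # fall through
    return before
```

On the example program this yields exactly the depths in the trace
above (1 before L2, 2 before L3 and L5, 1 before L6), with no
contradiction.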
I hope this can help you see how it is actually done.
I've tried to make my own corrections to provide a .Net-compatible
implementation. It seems to work well :)
I've included a patch file : CodeWriter.patch
The patch also includes some other optimizations that avoid the use of
hashtables and int casts: since instruction offsets are not computed
yet at this stage, I reuse them as follows:
for (int i = 0; i < instructionCount; i++)
instructions[i].Offset = i;
As you can see, now the Offset property contains the sequential index
of the instruction.
According to my benchmarks, this optimization halves the processing
time.
PS. Sorry for my potentially bad English
--
mono-cecil