On 8/28/07, Darx Kies <[EMAIL PROTECTED]> wrote:
>
> Scott Balmos wrote:
> > Chad Z. Hower aka Kudzu wrote:
> >
> >> I'd like to make a few proposals.
> >>
> >> 1) We establish a loose but workable team structure which includes
> conflict
> >> resolution.
> >>
> >>
> >>
> > Someone will probably say this already exists. I don't know. My major
> > recommendation right now is establishment of a build master, who owns
> > the trunk. All changes should be done in local repos. When something
> > should be promoted to the trunk, it is debated here, goes through a code
> > check, and the build master normalizes the code to a coding standard
> > before committing it to the trunk. Trying to get rid of William breaking
> > the kernel again (heh), or our infamous problems of things compiling
> > under Mono but not VStudio or vice versa.


Though I am not personally proficient with branching and then re-merging to
keep the trunk stable, I'm a big fan of moving in that direction.

I tried setting up a continuous integration system, CruiseControl.NET, for
the project (from what I understand it is the best of the open-source
options), but it was a pain in the arse to keep it configured properly,
especially with the build scripts changing all the time.

If we think we are worth it, we should check into getting a free license,
available to open source projects, of a commercial continuous integration
suite, such as Atlassian's Bamboo (the CI companion to Jira).

> > Between that, and API / architecture docs before you start coding a
> > major piece. Everyone else can tell you how much I've grumbled lately
> > about not being able to get my VM code going because the trunk repo got
> > reworked, the base kernel's in flux, and the AOT is still buggy. :)
> >
> And now guess why I was begging for test cases and test cases and I
> still do.


These are the things that we ideally need to test:
-1- test kernel API functionality
-2- test AOT output validity
-3- test AOT API functionality
(And ideally, the testing process needs to be fully automated, run and
reported after every change to the trunk.)

#1 can be tested, as we've discussed, by conditionally compiling test cases
into the kernel. We have not done any work in this area yet.

As far as #2, we've done this thus far by compiling a test kernel that runs
the set of test methods and prints a message when one fails.
This can be automated by writing to a serial port on the virtual machine,
which is routed to a named pipe or log file on the host machine and then
processed. The problem with what we have been doing is that some bugs only
arise when compiling multiple tests: AOTing test A makes test B either fail
to AOT, or AOT but fail at runtime. So we may need a more intelligent
testing process that isolates such failures by removing test cases until
the remainder compiles. *shrug*.
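That "more intelligent" process is basically test-case minimization. A
minimal sketch of the idea, assuming a hypothetical `aot_compiles(tests)`
callable that wraps a run of our AOT over a list of test methods and
reports whether it succeeded:

```python
def find_conflict(tests, aot_compiles):
    """Given a list of tests that fails to AOT together, greedily drop
    tests whose removal keeps the failure, shrinking toward the
    conflicting combination (a simplified delta-debugging pass)."""
    failing = list(tests)
    progress = True
    while progress and len(failing) > 1:
        progress = False
        for i in range(len(failing)):
            candidate = failing[:i] + failing[i + 1:]
            if not aot_compiles(candidate):
                # The failure survives without this test, so it is not
                # part of the conflict; keep the smaller set and retry.
                failing = candidate
                progress = True
                break
    return failing
```

The greedy pass isn't guaranteed minimal, but for the "test A breaks test
B" case it narrows a whole suite down to the conflicting pair.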
It has also been suggested that we need to "hand verify" our machine code
output, so perhaps we could compare the x86 output from our AOT against the
output from .NET or Mono.
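A low-tech way to start on that comparison: normalize the two disassembly
listings (strip addresses, collapse whitespace) and diff them. A rough
sketch; the address-stripping regex is an assumption about the
disassembler's output format:

```python
import difflib
import re


def normalize(disasm):
    """Strip leading addresses and collapse whitespace so two
    disassembly listings can be compared instruction by instruction."""
    lines = []
    for line in disasm.splitlines():
        # Drop a leading "0x...:" or "...:" address prefix.
        line = re.sub(r'^\s*(0x)?[0-9a-fA-F]+:\s*', '', line).strip()
        if line:
            lines.append(re.sub(r'\s+', ' ', line))
    return lines


def diff_output(ours, reference):
    """Unified diff of two normalized listings; an empty string means
    the instruction streams match."""
    return '\n'.join(difflib.unified_diff(
        normalize(ours), normalize(reference),
        fromfile='sharpos-aot', tofile='mono', lineterm=''))
```

It won't catch legitimately different-but-correct instruction selection, so
it flags candidates for hand verification rather than replacing it.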

As far as #3 (testing the AOT's API functionality) goes, this is more of a
sanity check, need not be done right off the bat, and admittedly a lot of
it would be redundant with testing the AOT's output.

One more point regarding all this testing: the amount of testing required
will compound with every architecture we eventually support. All the more
reason to establish as much automation as possible.
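To make the compounding concrete: every suite has to run on every target
architecture, so the job count is the product of the two. A trivial sketch
(the suite names are just placeholders):

```python
from itertools import product


def build_matrix(architectures, suites):
    """Every suite runs on every target architecture, so the job
    count is len(architectures) * len(suites)."""
    return list(product(architectures, suites))


# Today the matrix is x86-only; each new architecture multiplies it.
jobs = build_matrix(["x86"], ["kernel-api", "aot-output", "aot-api"])
```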

>
> >> 2) We establish a highly documented, well debated kernel design doc.
> Then
> >> proceed to make the kernel rock solid before putting too much effort
> into
> >> anything beyond the command line.
> >>


I firmly agree with this. The problem is that most of us aren't organized
enough or well-practiced enough to draw up diagrams of an entire kernel.
But I agree that it needs to be done, and it needs to be debated. Once
settled, though, it should not be changed for a long time. Projects that
keep re-hashing their core design before they get "off the ground" never
get anywhere. They fall apart. We will have to accept that not everyone
will be happy with the final design.

That's why forks were invented.

>>
> > I've been kicking for this for awhile. Currently it's more a code-first,
> > write-architecture-later. The largest part of the system, the compiler,
> > is AOT-only, and only one person knows its architecture. I've been very
> > vocal about that. Don't know if Chriss has gotten anywhere with
> > documenting the architecture.
> >
> Like I already said. The AOT itself is still subject to change, and
> right now I am rewriting parts of it.
> The code is partly documented and it even has some inline examples
> to make it more understandable,
> and anyone that wants to understand it better can just write me on IRC.
>
> > Personally, I would halt all current kernel-related development, go back
> > and tear back apart the compiler and get that completely rearchitected,
> > redocumented, and retested.
> That is what actually is happening right now, even though I had to
> suspend the work on it for a while.... again.


But in essence, I have to agree. No more kernel work, yet. The AOT is the
heart of the project. We have to keep it beating. And we have to get that
blood pressure up.

I think if we set up a comprehensive test automation process, we can polish
the AOT with ease, and then proceed with the kernel, and the monstrous task
of at least *trying* to get everyone to agree on its design.


> Start with reading in the source bytecode at
> > a method level from multiple sources, whether it's a byte stream, file,
> > whatever. This ensures an easy separation of code into an embeddable JIT
> > engine, with the AOT system simply being a file-feeder shell. Convert to
> > unoptimized IR code, then register transfer code. Spit out direct,
> > unoptimized machine code from there. Possibly hand-verify if necessary
> > the generated machine code. That way we know the basic compiler works
> > and implements all CIL opcodes. Optimizations such as SSA, dead code
> > elimination, loop unwinding, etc can all be implemented as transforms on
> > top of the base IR code stream. From the testing I've seen, I can't tell
> > whether bugs are in simple bytecode conversion or various IR
> > optimization algorithms. They all seem to be intertwined currently.
> >
> > But I'm liable to get my head chopped off by Chriss for the above. :)
> >
> I am not that cruel. ;)
>
> The idea was actually to get something working ASAP so that the ppl that
> wanted to work on the kernel
> can do something too. That is why there are lots of generated test cases
> for the x86 encoding but pretty few test
> cases for the rest of the AOT. I would just suggest to look in the
> source code for the TODO lines and there are
> lots of them. Most of it was just implemented when an exception was raised
> because some part was not implemented
> yet. And most of the time the trunk kernel was the main source of code
> to test and develop the AOT. Johann and
> William felt pitiful enough to write test cases.;) Thank you guys!
>
> What I am trying to say is that I am the only one that is working on it
> and any help is more than welcome.


I have to say, Chriss, you are very helpful. But the AOT is a beast. A
black-box beast. Try not to blame a few of us for being intimidated. ;-)
The function-pointer-stub fiasco has scared me quite thoroughly into a
position of lurking...

So our task list (regardless of whether or not we keep it on Trac) needs to
be fleshed out as thoroughly as possible. If someone decides to work on
something (preferably something that has been discussed) before a task has
been made for it, then they should, at the very least, make a task for it
first.

A lot of the craziness around here is simply because we don't keep things
organized. No one really knows what someone else is doing until the trunk
gets broken...
_______________________________________________
SharpOS-Developers mailing list
SharpOS-Developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/sharpos-developers
