In comp.soft-sys.ptolemy, "Taki" <[EMAIL PROTECTED]> wrote:

> I plan to integrate Ptolemy Classic or Ptolemy II in my research work.
> I'd like to ask you some questions about the *code generation* facility:
> 
> 1) Where can I get some document about this facility?

Ok, I'll take a shot at this and Professor Lee and others can
correct me as necessary.

Jeff Tsay developed a code generation system as part of his Master's
degree, see:

Jeff Tsay, "A Code Generation Framework for Ptolemy II," ERL Technical
Report UCB/ERL No. M00/25, Dept. EECS,
University of California, Berkeley, CA 94720, May 19, 2000. 
http://ptolemy.eecs.berkeley.edu/publications/papers/00/codegen/

There is a shorter summary of Jeff's work at:
Jeff Tsay, Christopher Hylands and Edward Lee, "A Code Generation
Framework for Java Component-Based Designs," CASES 00, November 17-19,
2000, San Jose, CA.
http://ptolemy.eecs.berkeley.edu/publications/papers/00/javacodegen

Jeff's work was a prototype of how we could do code generation in a
different manner, described in the above references and below.
Unfortunately, his prototype code was not easy to extend to match
changes in the Ptolemy type system.  More specifically, the type system
is Yuhong Xiong's area of research, and Jeff's code proved difficult to
extend to handle ArrayTokens.

Another little bit of info can be found in the MoBIES quarterly
report.

[Currently, many of the Ptolemy group members here at UC are working
under the Model-Based Integration of Embedded Software (MoBIES) project]

The second quarter MoBIES report, dated 6/01, at
http://ptolemy.eecs.berkeley.edu/projects/mobies/reports/01/2 says:
--start--
Code Generation
===============

We have gone through an extensive evaluation of our alternatives, and
have decided to rearchitect the code generation framework in Ptolemy II
using Soot, from McGill.  Part of the discussion follows:

From Prof. Shuvra S. Bhattacharyya, of the University of Maryland, now
under subcontract:

  "I have been reading up on some of the alternative intermediate
  formats, such as Soot and that provided by BCEL. These offer
  interesting possibilities for doing extensive, lower-level
  optimizations, but it is not clear to me how these are essential to
  our near-term concern of getting a robust, extensible code generation
  infrastructure in place.

  I think we might want to stick instead with the standard Java AST
  representation we already have, but with a better implementation. The
  current representation is suitable for all of the important
  transformations that we need at this point (type specialization,
  method devirtualization, certain forms of dead code elimination,
  etc.). It is also very intuitive to work with due to its close match
  to the Java language. This is an important consideration to
  facilitate development of different back-ends. If we decide later
  that we want a different representation to do more extensive analysis
  and optimization, it should be easy to plug it in by transforming
  from/to the existing representation.

  We still have to work with bytecode when working with referenced
  classes. But Christopher's ASTReflect already does a nice job of
  extracting the essential information without dealing explicitly with
  the bytecode representation. It should be straightforward to port
  ASTReflect to an alternative implementation of the existing AST
  representation. Porting the semantic analysis code will require more
  work.

  SableCC (http://www.sable.mcgill.ca/publications/#tools98) seems to
  offer the key features that we want in the front end, ... and that
  looks like a promising way to generate the AST implementation, along
  with the lexical analyzer, parser, and AST walkers. JavaCC is an
  alternative, but it suffers from some of the drawbacks of our
  existing implementation (e.g., numbered references to a node's
  children).
--end--

Currently, Steve Neuendorffer is developing a code generation system
using Soot.  Professor Bhattacharyya is working on C code generation 
using Soot.

We are working on several forms of code generation:

Shallow code generation - which means that we take a model and
generate a .java file for that model where the generated code
uses the Ptolemy kernel, actors and domains.  This can be thought of
as a MoML to Java converter.  Shallow code generation is very valuable
in helping us figure out issues around heterogeneity.
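
To give a rough idea of what the shallow generated code looks like,
here is a hypothetical sketch (the class and actor names are made up,
and the real generator's output differs in detail): the model becomes
an ordinary Java class that rebuilds the same director, actors and
connections through the regular Ptolemy II API and then runs itself
with a Manager.

import ptolemy.actor.Manager;
import ptolemy.actor.TypedCompositeActor;
import ptolemy.actor.lib.Ramp;
import ptolemy.actor.lib.Recorder;
import ptolemy.domains.sdf.kernel.SDFDirector;
import ptolemy.kernel.util.Workspace;

// Hypothetical sketch of shallow-generated code: the MoML model becomes
// a Java class that rebuilds itself using the regular Ptolemy II API.
public class MyShallowModel extends TypedCompositeActor {
    public MyShallowModel(Workspace workspace) throws Exception {
        super(workspace);
        setName("MyShallowModel");

        // The director and actors that the MoML file described.
        SDFDirector director = new SDFDirector(this, "director");
        director.iterations.setExpression("10");
        Ramp ramp = new Ramp(this, "ramp");
        Recorder recorder = new Recorder(this, "recorder");

        // The connections that the MoML file described.
        connect(ramp.output, recorder.input);
    }

    public static void main(String[] args) throws Exception {
        MyShallowModel model = new MyShallowModel(new Workspace());
        Manager manager = new Manager(model.workspace(), "manager");
        model.setManager(manager);
        manager.execute();
    }
}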

Currently, we have shallow code generation working on 150 of the 170
auto tests in our development tree.  The failing tests are mainly
problems with the Discrete Time domain and a few one-off issues.
I feel that shallow code generation is fairly robust at this point.

Deep code generation - which means that we take a model and
generate a set of Java files that use hardly any of the Ptolemy
system.  In Jeff Tsay's work, the deep code generation used some of
the ptolemy.math classes, but otherwise the generated code did not
depend on Ptolemy II classes at runtime.

The idea with deep code generation is that once we have stand-alone
code, we can use a native Java compiler like the GNU GCJ compiler, or
we can generate C directly.
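
For contrast, here is a hypothetical sketch of what deep code
generation might produce for a trivial Ramp-to-Recorder model (this is
not actual generator output): the static SDF schedule is unrolled, the
actor state becomes local variables, and no Ptolemy II classes are
needed at run time.

// Hypothetical sketch of deep-generated code for a trivial SDF model:
// the schedule is unrolled into a plain loop, the Ramp actor's state
// becomes a local variable, and the channel becomes a direct
// assignment.  No Ptolemy II classes are needed at run time.
public class MyDeepModel {
    public static void main(String[] args) {
        int rampState = 0;              // Ramp: init = 0, step = 1
        int[] recorded = new int[10];   // Recorder's history

        // Static SDF schedule, 10 iterations: fire ramp, then recorder.
        for (int i = 0; i < 10; i++) {
            int token = rampState;      // ramp.fire(): send current value
            rampState = rampState + 1;  // ramp.postfire(): update state
            recorded[i] = token;        // recorder.fire(): store the token
        }

        for (int i = 0; i < recorded.length; i++) {
            System.out.println(recorded[i]);
        }
    }
}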

Steve is working hard on deep code generation, but I don't believe we
have a complete working example yet.  What we do have is the graphical
interface to the code generation working, and we have a testing
infrastructure in place.  We would like to have deep code generation
in the next Ptolemy release, scheduled for sometime in the first
quarter of 2002, but we are not likely to ship code generation unless
it is useful.

> 2) What are the (current and future) differences between Ptolemy Classic
> and Ptolemy II about code generation?

In Ptolemy Classic code generation, each separate platform had a
separate domain.  IMHO, this was a slight misuse of the domain
concept, since almost all of the separate platforms really had SDF
semantics, and each separate platform was really a target for the SDF
domain.

In Ptolemy Classic, if you wanted to generate code for a new
processor, you had to create a new domain and then populate the domain
with new basic blocks containing codeblocks of code that were emitted
when the basic block was used.  This is a bit of a simplification, but
basically it meant that for each new processor, the author had to
write a new Ramp basic block, a new Add basic block, etc.  This was
very time consuming and error prone, since if a bug was fixed in one
Ramp actor, the bug also needed to be fixed in the Ramp actors of
every other domain.

The ACS domain was an effort to work around this issue, where the
interface to each actor was shared between multiple implementations.
This helped make it easier to switch between different target
implementations, since the ACS Ramp actor always had the same
interface, whereas if there were two Ramp actors in two separate
domains, then they might have different port names, which made
switching between the domains difficult.

Now, the Ptolemy Classic style of code generation is great if you
want to customize different actors to take advantage of different
features of a processor.  For example, the Motorola 56x FIR filter
actor could have different codeblocks that would be used depending
on how the FIR filter was configured.

However, the downside of this approach is that the inter-actor
communication tended to consume quite a bit of time, so even with
really great actor implementations, we were getting hit when data was
passed between these actors.  It seems to us that looking at the whole
model would yield further performance improvements.


The Ptolemy II code generation approach is to generate an Abstract
Syntax Tree (AST) for the entire model, and then apply standard and
custom compiler optimization techniques to generate faster code.

Currently, in Ptolemy II, it is not possible to generate custom code 
using a codeblock.

We are using Soot to read in the .class files from the Ptolemy II
kernel, domains and actors, generate ASTs for those classes, and then
process them to generate new, hopefully optimized, .class files.
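
Roughly, the Soot side of this looks like the following minimal
sketch, which only shows loading a Ptolemy II actor class into Soot;
the actual analyses and transformations we apply, and the step that
writes the new .class files back out, are elided.

import soot.Scene;
import soot.SootClass;
import soot.SootMethod;

import java.util.Iterator;

// Minimal sketch: load a Ptolemy II actor class into Soot.  The
// transformations and the class-file output step are elided.
public class LoadActorWithSoot {
    public static void main(String[] args) {
        // Tell Soot where to find the Ptolemy II classes and the JDK.
        Scene.v().setSootClassPath("ptolemy.jar:"
                + System.getProperty("java.home") + "/lib/rt.jar");

        // Load the actor class and everything it references.
        SootClass rampClass =
                Scene.v().loadClassAndSupport("ptolemy.actor.lib.Ramp");
        rampClass.setApplicationClass();

        // A real transformation would rewrite method bodies here, for
        // example specializing fire() for the resolved token types.
        for (Iterator methods = rampClass.getMethods().iterator();
                methods.hasNext();) {
            SootMethod method = (SootMethod) methods.next();
            System.out.println(method.getSignature());
        }
    }
}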


> 3) Can generated code be executed by multi processors and parallel DSP
> embedded systems?

The short answer is 'No, not really at this time.'

The longer answer is:

Partitioning a model between multiple processors is fairly tricky, and
was a large area of research in Ptolemy Classic.  Only models with
a high degree of parallelism are amenable to running on multiple
processors.  Simply assigning each actor to a processor is not likely
to help much, since the inter-actor communication will really bog
things down, especially on a high latency system like a switched
ethernet network.

Java can take advantage of multiple processors, so in theory, if we use 
the process domains in Ptolemy II without code generation on a
multiprocessor machine, we should see each Java thread run on a
separate processor.  However, we have not done much work in this
area.  I'd like to see someone take the PN domain, run it on multiple
processors, and see what sort of improvements can be made.
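
As a hypothetical starting point for such an experiment (the actor
names and the firingCountLimit parameter are just illustrative), the
following builds a tiny PN model in Java; under the PN director each
actor gets its own Java thread, so a JVM on a multiprocessor machine
is free to schedule the threads on different CPUs.

import ptolemy.actor.Manager;
import ptolemy.actor.TypedCompositeActor;
import ptolemy.actor.lib.Ramp;
import ptolemy.actor.lib.Recorder;
import ptolemy.domains.pn.kernel.PNDirector;
import ptolemy.kernel.util.Workspace;

// Hypothetical sketch: a tiny PN model.  The PN director gives each
// actor its own Java thread, so a multiprocessor JVM may run the
// actors on separate CPUs.
public class PnThreadExample {
    public static void main(String[] args) throws Exception {
        TypedCompositeActor top = new TypedCompositeActor(new Workspace());
        top.setName("PnThreadExample");

        new PNDirector(top, "director");
        Ramp ramp = new Ramp(top, "ramp");
        // Stop the source after 1000 firings so the model reaches
        // deadlock and terminates instead of running forever.
        ramp.firingCountLimit.setExpression("1000");
        Recorder recorder = new Recorder(top, "recorder");
        top.connect(ramp.output, recorder.input);

        Manager manager = new Manager(top.workspace(), "manager");
        top.setManager(manager);
        manager.execute();
    }
}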

In theory, once we have deep code generation, and we are creating an
AST and generating .class files, we can generate code for any
processor by either writing a new back end, or using a native Java
compiler like gcj.


> Thanks in advance,
> 
> E. Lawson.

Again, the above are basically my opinions, and I welcome corrections
and feedback.

-Christopher

Christopher Hylands    [EMAIL PROTECTED]  University of California
Ptolemy/Gigascale Silicon Research Center     US Mail: 558 Cory Hall #1770
ph: (510)643-9841 fax:(510)642-2739           Berkeley, CA 94720-1770
home: (510)526-4010                           (Office: 400A Cory)

----------------------------------------------------------------------------
Posted to the ptolemy-hackers mailing list.  Please send administrative
mail for this list to: [EMAIL PROTECTED]
