On Mon, Mar 2, 2009 at 11:42 PM, Luke Breuer <labre...@gmail.com> wrote:

> On Mon, Mar 2, 2009 at 10:15 AM, Nathan Cain <n...@inverse-engineering.com>
> wrote:
>
>> I am wondering what work is being done/planned for targeting FPGA
>> platforms.  My interests overlap with this field, and I would like to
>> contribute towards such an effort.  I have experience with quite a few "non
>> traditional" HDLs, and would like to help OMeta to grow into the first
>> "meta" design language.
>
>
> I would like to be kept in the loop.  I started looking into higher level
> HDLs and took some notes [1], but did not get very far.  I will be
> refreshing my VHDL knowledge within the next month for a project at school
> and would love to also discuss how to take things "higher level".  I wonder
> if IS (see STEPS [2]) could be made to target FPGAs -- in fact, this might
> almost be an ideal situation if I understand IS correctly.  (I am not sure
> how OMeta compares to IS.)
>

I notice that you link to Confluence and Atom.  Confluence is one of the
approaches I've tinkered with rather extensively (having written the JHDL
generator), but I've never gone beyond toy examples in Atom.  I've done very
little with VHDL, as I'm a Verilog designer by preference.  What little VHDL
experience I do have is mostly in integrating or gluing together others' IP.

The Progress Report mentions (pg. 29) a small CPU implemented in Verilog as
a potential target.  However, where I see real potential in OMeta/IS/idst as
a hardware platform is precisely in eliminating the traditional FSM CPU
model.  As I see it, the fact that the very core of the transformation
engine (sequential or parallel-choice pattern matching) maps directly onto
the nature of FPGA architecture (routing through sequentially clocked
registers, or through parallel combinatorial logic) hints at some very
strong potential performance gains from moving to a finer granularity.
Also, in my experience, it is precisely in this area that projects such as
Atom, MyHDL, and SystemC (shudder) fall very short, especially when it comes
to verification time.  Hardware may be "software crystallized early"... but
our control over, and confidence in, that crystallization process currently
feels rudimentary at best.
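To make the sequence/choice correspondence concrete, here is a minimal
sketch in Python (purely illustrative; the names `lit`, `seq`, and `alt`
are my own, not OMeta's actual API) of the two core combinator shapes: `seq`
chains matchers one after another, the way a pipeline of clocked register
stages chains, while `alt` tries alternatives side by side, the way parallel
combinatorial paths feed a priority mux.

```python
# PEG-style combinators illustrating the two structural shapes:
#   seq -> sequential composition (cf. a chain of clocked register stages)
#   alt -> prioritized choice (cf. parallel combinatorial paths + a mux)

def lit(c):
    """Match a single literal character."""
    def m(s, i):
        return (i + 1, c) if i < len(s) and s[i] == c else None
    return m

def seq(*matchers):
    """Run matchers one after another; fail if any stage fails."""
    def m(s, i):
        results = []
        for sub in matchers:
            r = sub(s, i)
            if r is None:
                return None
            i, v = r
            results.append(v)
        return (i, results)
    return m

def alt(*matchers):
    """Try alternatives in order; the first success wins."""
    def m(s, i):
        for sub in matchers:
            r = sub(s, i)
            if r is not None:
                return r
        return None
    return m

# grammar: ("a" "b") | "c"
rule = alt(seq(lit("a"), lit("b")), lit("c"))
print(rule("ab", 0))  # (2, ['a', 'b'])
print(rule("c", 0))   # (1, 'c')
print(rule("x", 0))   # None
```

The point of the analogy: `seq` has an inherently serial data dependency
(each stage needs the previous stage's position), while the branches of
`alt` are independent and could, in hardware, be evaluated simultaneously.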

The ideal would be, as Ian mentioned, to go directly to bitstream.
Unfortunately, with most netlist formats being proprietary and closed, we
reach the boundaries of uncharted territory very quickly.  Also,
self-reprogramming is still a ways out, and partially reprogrammable FPGAs
(which could be coupled in pairs to reprogram each other at run time, for
example) are only just beginning to see daylight, and the waters are murky
at best.  For the time being, work will most likely have to be done in an
awkward master/slave relationship with a host PC, using EDIF as a surrogate
bitstream format, together with vendor synthesis tools.  I've often wondered
whether the whole vendor problem could be side-stepped somehow, but have yet
to see an answer materialize.  A proper bootstrap will certainly be a grand
task, but not an insurmountable one.



>
> I think developing a higher level HDL is a fantastic idea and almost *
> required* to research what "mega-core" processors should look like. People
> who are trying to figure out parallel programming "languages" without
> understanding HDLs are, well, an enigma to me.  The "orthogonalization" that
> the VPRI folks are working on (see [2]) seems like it will also be crucial.
> I have not found anyone *else* trying to break programs down in a truly
> interesting way.


I couldn't agree more.  If a clean morphism can be found between the
OMeta/IS abstractions and physical gates, then in a way the "mega-core"
processors already exist, sitting in source control, today!  There is also
no reason these "processor specs" couldn't be "spun on the fly"... your
mega-core processor specification might just be some ECMAScript code, or a
CSS sheet.  It might be pulled from a URL or typed into a REPL.  Taking this
a step further, one could consider x86 machine code (or, more practically
now, JVM bytecode or the like) as a "frontend language" for this sort of
fine-grained, self-programming (and presumably self-optimizing, to a point)
hardware architecture.  Could existing, real-world, possibly even
binary-only applications be "virtualized" to run as a "sea of gates" sitting
"below the metal"?  Is this worth exploring?

--Nate


>
>
> Luke
>
>
> [1] http://luke.breuer.com/time/item/Lukes_FPGA_work/327.aspx
> [2] http://www.vpri.org/pdf/tr2007008_steps.pdf
>
> _______________________________________________
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
>