Perhaps I should have titled this, "Binary-interoperable data types
application virtual machine?"

I just signed up for this mailing list for a specific purpose; note that
I had some brief discussions on the HaXe mailing list in Q4 2008.

My motivation is summarized at the end of this post.

SPECIFIC INQUIRY: Could anyone here share their thoughts, and/or point me
to prior discussion or work, on application virtualization
(http://en.wikipedia.org/wiki/Comparison_of_application_virtual_machines)
with maximum binary data interoperability across languages?

After reviewing the design decisions for NekoVM:

http://nekovm.org/doc/misc/multilang
http://nekovm.org/lua

What would be the implications of a VM data type structure that is
binary-compatible with C?

Before I describe the technical details, my design motivation is based on
the observation that the fastest executable code is usually C (and perhaps
some hand-coded assembly), while dynamically typed languages are most
useful for programmer productivity, in cases where execution speed is not
the overriding priority.  If C code snippets are modularized (granular
snippets, with platform dependencies confined to standard libraries) to be
more composable/interoperable in the way I envision, then portability of C
is not a valid concern (i.e. "write once, execute everywhere" is really a
non-issue; just compile the performance-boosting C snippets for the
different processor targets).  Thus there is no strong logic in giving
priority to the speed of the dynamically typed language at the cost of
slowing down the "C FFI" (http://nekovm.org/doc/ffi) with binary data
conversions.  It is the "80/20" rule, or Pareto principle: focus design
decisions on the priorities that yield 80% of the performance for 20% of
the effort, and forsake the remaining 20% of performance to save 80% of
the effort.

Thus, why wouldn't we want a VM to represent all the dynamic data types
internally in a form that is binary-compatible with C, with the dynamic
metadata stored in a separate location?

For example, a dynamic variable could be a pointer to a struct that
contains both metadata (i.e. a type flag) and a pointer to the actual
memory location of the data.  Objects could then be stored like C structs,
with object members packed in struct order in memory.  Thus there would be
no need for data conversions or repacking of data in order to benefit from
maximum performance in the C code; e.g. an array of pixel data for an
image would be stored in its native binary format.
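
To make that concrete, here is a minimal sketch in C of what I have in
mind.  This is my own hypothetical layout (the names vm_value, vm_type,
etc. are mine, not NekoVM's actual value representation):

    #include <stdint.h>
    #include <stdlib.h>

    typedef enum { T_INT, T_FLOAT, T_STRUCT, T_ARRAY } vm_type;

    typedef struct {
        vm_type  type;   /* dynamic metadata, kept apart from the payload */
        uint32_t elems;  /* element count for arrays/structs              */
        void    *data;   /* payload laid out exactly as C expects it      */
    } vm_value;

    /* Object members packed in struct order, just as a C library sees them. */
    typedef struct { uint8_t r, g, b, a; } pixel;

    vm_value make_pixel_array(size_t n)
    {
        vm_value v;
        v.type  = T_ARRAY;
        v.elems = (uint32_t)n;
        v.data  = malloc(n * sizeof(pixel));  /* native layout, no boxing */
        return v;
    }

    /* A C image filter can take v.data directly: no conversion, no repacking. */
    void invert(pixel *px, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            px[i].r = (uint8_t)(255 - px[i].r);
            px[i].g = (uint8_t)(255 - px[i].g);
            px[i].b = (uint8_t)(255 - px[i].b);
        }
    }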

As far as I can imagine at the moment, this would combine the benefits of
a dynamically typed virtual machine with the performance of C binary data
formats.  The dynamic typing would not be broken, and although dynamic
dispatch would be slightly slower, the C code would be faster and data
storage would be more efficient, e.g. no copying of native image formats
into dynamic objects that are stored with interleaved metadata.

I have not thought through the aspect of closures and functions being
first-class data.  Nor have I completely thought through callbacks from C
into the dynamic VM, but it seems to me the data conversions could be more
efficient, because the metadata could be built orthogonally to the packed
values data (see the sketch after this paragraph).  Also, I am not
experienced in language design or in the design considerations for a VM
(i.e. register- versus stack-based, etc.), although I have used many
languages and platforms over the decades.
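
For illustration, under the same hypothetical vm_value layout as in the
sketch above, a callback from C could hand an existing buffer to the VM by
building only the metadata, sharing the payload in place:

    #include <stdint.h>

    typedef enum { T_INT, T_FLOAT, T_STRUCT, T_ARRAY } vm_type;
    typedef struct { vm_type type; uint32_t elems; void *data; } vm_value;

    /* Wrap an existing C buffer without copying it: only the metadata is
     * built; the payload continues to live where C put it. */
    vm_value wrap_existing(vm_type t, uint32_t elems, void *native_ptr)
    {
        vm_value v = { t, elems, native_ptr };
        return v;
    }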

Also, wouldn't reference counting combined with Bacon's cycle-collecting
garbage collection be superior, with the C code interfacing to both the
reference counts and the cycle collector:

http://en.wikipedia.org/w/index.php?title=Reference_counting&oldid=338521402#Dealing_with_reference_cycles
http://en.wikipedia.org/w/index.php?title=Reference_counting&oldid=338521402#PHP
(used in PHP 5.3)
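
As a rough sketch of what I mean (the names and layout are mine, loosely
following the trial-deletion scheme in Bacon & Rajan's paper, which is
what PHP 5.3 adopted), the per-object header and the decrement path might
look like:

    #include <stdint.h>
    #include <stdbool.h>

    enum { BLACK, GRAY, WHITE, PURPLE };  /* colors per Bacon & Rajan */

    typedef struct obj_header {
        uint32_t refcount;
        uint8_t  color;
        bool     buffered;  /* already queued as a possible cycle root? */
    } obj_header;

    void inc_ref(obj_header *o)
    {
        o->refcount++;
        o->color = BLACK;  /* an object just referenced is not garbage */
    }

    void dec_ref(obj_header *o)
    {
        if (--o->refcount == 0) {
            /* release(o): decrement children, then free o */
        } else if (!o->buffered) {
            o->color    = PURPLE;  /* may be an interior node of a cycle */
            o->buffered = true;
            /* append o to the root buffer; a later collect_cycles()
             * pass performs trial deletion over the buffered roots */
        }
    }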

From my research below, polymorphism via interfaces (i.e.
orthogonal+composable, with no run-time obfuscation of semantics via
virtual or prototype pointers) is superior to virtual or prototype
inheritance, and I am pondering whether cross-language interfaces could be
enforced at compile (or JIT/interpreter) time?
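
For example, in plain C an interface can be expressed as an explicit table
of function pointers bound at compile/link time, with no hidden vptr
embedded in the data (again a hypothetical sketch, not an existing API):

    #include <stddef.h>

    /* A hypothetical "Sequence" interface: just a table of functions. */
    typedef struct {
        size_t (*length)(const void *self);
        int    (*at)(const void *self, size_t i);
    } Sequence;

    /* An implementation for a plain C int array; the data stays pure C. */
    typedef struct { int *items; size_t n; } IntArray;

    static size_t intarray_length(const void *self)
    { return ((const IntArray *)self)->n; }

    static int intarray_at(const void *self, size_t i)
    { return ((const IntArray *)self)->items[i]; }

    static const Sequence INTARRAY_SEQ = { intarray_length, intarray_at };

    /* Generic code binds to the interface, never to the concrete layout. */
    int sum(const Sequence *seq, const void *self)
    {
        int total = 0;
        for (size_t i = 0; i < seq->length(self); i++)
            total += seq->at(self, i);
        return total;
    }

    /* usage: IntArray a = { buf, n };  total = sum(&INTARRAY_SEQ, &a); */

Because the table is an ordinary symbol, conformance is checked by the
compiler/linker rather than discovered at run time, which is the kind of
compile-time enforcement I am pondering for cross-language interfaces.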

As far as I can see, binary-independent data interchange paradigms such as
Thrift impinge upon the ease of use of dynamic languages (illuminating the
desirability of NekoVM's design decision on data-level interoperability;
I am suggesting improving that and adding interface interoperability):

http://en.wikipedia.org/wiki/Thrift_%28protocol%29

Has anyone written a comparison of NekoVM to Squirrel, which seems to have
a larger feature set in only about 12K lines of code?

http://en.wikipedia.org/w/index.php?title=Squirrel_%28programming_language%29&oldid=307642209#Language_features

Any other pointers to information or discussion along the lines of my
expressed conceptual interests?


========BACKGROUND MOTIVATION===========
My current broader efforts have from time to time revisited the concept of
enabling more granular code reuse (economic sharing) in order to
facilitate unsynchronized programming contribution in small increments,
versus the current aggregation model of synchronized group effort (e.g.
large open source monoliths, Google's large internal code base, etc.).
Tangentially, unsynchronized (i.e. more diverse, competitive) code
publishing could possibly enable an "aggregated economy-of-scale"[2]
micropayments economic model, possibly even P2P hosted, thus giving rise
to orders-of-magnitude increases in motivated/funded programming
contribution on the WWW.  My understanding of the mathematical structure
of nature is that top-down (centralized) and/or secret (aka the oxymorons
"secure" or "insured") structures are always decaying (under attack, e.g.
China's infiltration of Google's code base, viruses targeted at the most
widely distributed code bases, etc.) after some local orders of
exponential growth.  Security of data is increased when the code is
public, and when both code and storage of data are diversified in the wild
("given enough eyeballs, all bugs are shallow").  Diversity of sharing,
i.e. an increase in entropy, is always the long-term winning trend, which
often stagnates then proceeds with paradigm-shift avalanches of diversity,
openness, and sharing.  The time seems especially ripe now[1].  At the
following links, I summarize this math/logic, along with the implication
that interface granularity of polymorphism is superior to run-time virtual
inheritance (base classes and prototypes are harmful):

http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html
http://www.coolpage.com/commentary/economic/shelby/Mass-Entropy_Equivalence.html

More on the superiority of interface polymorphism:

http://www.haskell.org/~pairwise/intro/section2.html#part3
http://www.haskell.org/haskellwiki/?title=Why_Haskell_matters&oldid=24286#Haskell_vs_OOP


[1] The developed world depends on rule of law and insurance, an economic
model that appears to be peaking with the peaking of world debt at levels
never before seen in history.  The centralized institutions have gone
berserk, monetizing fraud to hold together an unsustainable statism (e.g.
500+% debt-to-GDP ratio in the USA, including unfunded government
liabilities).  The 1800s were 339 times more prosperous for savers:

http://www.marketoracle.co.uk/Article16212.html (3102 reads already)
http://www.gold-eagle.com/editorials_08/moore010410.html
http://financialsense.com/fsu/editorials/moore/2010/0105.html

Nature's laws always win over man's law, and man's inability to insure
against the randomness (increasing entropy) of nature has been repeated
throughout history.  In the boomers' lifetime, aggregate CPU power in the
world has increased a million-trillion times (teraflops on current gaming
CPUs, http://www.defmacro.org/ramblings/fp.html, see IBM Mark 1), which
will soon erupt out of the centralized suppression (patents, copyrights,
large synchronized funding models) into a technology epoch that coincides
with nature's law (evolution) trumping man's law (and the
self-destruction of the governments, as promised in Romans 13 and 1
Samuel 8, and fulfilled in 1 Samuel 15...btw evolution and the Bible are
not incongruent, i.e. time in the Bible is fractal, not linear):

http://www.caseyresearch.com/displayCwc.php?id=25  (see the Technology
chart halfway down page)
http://www.caseyresearch.com/displayCwc.php?id=31 (parallels to Rome)
http://www.caseyresearch.com/displayCwc.php?id=30 (dying nation state model)
http://www.silverbearcafe.com/private/01.10/thinklikeabanker.html
(resetting debt economic model)
http://www.silverbearcafe.com/private/01.10/moremoney.html (dying central
bank model)

Social networks generate 200% more revenue per user in China, because the
society (culture) needs virtual technology to sidestep the dying
nation-state cancer:

http://digital.venturebeat.com/2009/03/19/the-worlds-most-lucrative-social-network-chinas-tencent-beats-1-billion-revenue-mark/
http://cnreviews.com/business/research-insights/top-4-reasons-why-chinese-social-networking-different_20090810.html
http://www.techcrunch.com/2009/04/05/chinese-social-networks-virtually-out-earn-facebook-and-myspace-a-market-analysis/

Google's centralized code and database hosting model can't compete:

http://www.techcrunch.com/2010/01/12/google-china-attacks/

We probably won't be able to maximally utilize 1000 cores (7-11 years
away, given Moore's law) without new programming paradigms:

http://www.coolpage.com/commentary/economic/shelby/Functional_Programming_Essence.html#Advantages

One internet visitor per core does not solve the scaling problem, and
multi-threading doesn't scale either.  STM and other band-aids push the
aliasing error of the mismatch with non-parallel algorithm paradigms into
the destruction of composability/interoperability:

http://lambda-the-ultimate.org/node/3637#comment-51713


[2] Micropayments do not work when people have to make repetitive
micro-decisions on repeated micro-purchases.  Aggregation models such as
iTunes's 99-cent micropayments work, to some degree, because users focus
on their balance, not on deep reflection over each purchase.  The economic
model of the WWW has been loss leaders to gain market share, then
economy-of-scale monetization with advertising and upselling to a fraction
of the user base (i.e. socialism/theft/misallocation of capital, as the
minority is funding the majority).  Virtual currencies and debt are an
aggregation model proven in the wild by online gaming.


P.S. To put a face behind this post: 
http://www.coolpagehelp.com/developer.html
