Marko,

Redfish is very interested in this. I especially like this line of thinking:

From 4.9:

...Once the Fhat RVM has completed computing that particular RDF sub-network, it can halt, and another CPU can pick up the process on yet another area of the RDF network that needs computing by the RDF software. In this model of computing, data doesn't move to the process; the process moves to the data.
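The quoted model can be illustrated with a minimal sketch (plain Python, not the Fhat RVM itself): each "node" owns a partition of a toy graph, and the computation is a small bundle of serializable state that hops to whichever node holds the data it needs next, rather than pulling data across the network. All names here (`NODES`, `owner`, `migrate_and_count`) are hypothetical illustrations, not part of the paper's specification.

```python
# Toy graph partitioned across "machines": node name -> adjacency lists
# for the subjects that node owns.
NODES = {
    "A": {"a1": ["b1"]},           # node A owns subject a1
    "B": {"b1": ["c1", "c2"]},     # node B owns subject b1
    "C": {"c1": [], "c2": []},     # node C owns subjects c1 and c2
}

def owner(subject):
    """Locate the node that stores a given subject."""
    for name, partition in NODES.items():
        if subject in partition:
            return name
    raise KeyError(subject)

def migrate_and_count(start):
    """A 'process' that walks the graph, executing on whichever node
    holds the current subject. Its state (frontier, visited) is what
    moves between machines; the graph data stays put."""
    state = {"frontier": [start], "visited": set()}
    while state["frontier"]:
        subject = state["frontier"].pop()
        here = owner(subject)      # process halts here, resumes on `here`
        state["visited"].add(subject)
        for obj in NODES[here][subject]:
            if obj not in state["visited"]:
                state["frontier"].append(obj)
    return state["visited"]

print(sorted(migrate_and_count("a1")))  # ['a1', 'b1', 'c1', 'c2']
```

The point of the sketch is only that `state` is small and mobile while `NODES` is large and stationary, which is the inversion the paper describes.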


We are interested in models for distributed computing that can exist easily in very heterogeneous environments: high-performance computers, web service servers, and desktop PCs, down to phones and other specialized network devices with few resources but, interestingly, also the lowest latency with regard to the so-called 'user'.

Would this new language make managing data and processes on multiple computers easier to program in a more general sense? How do we build a network-based computer that frees us from worrying about where a particular data set is, or where a particular process is running? I know this work is focused on the Semantic Web, but can it help me manage my many overlapping data streams that I want available on any computer I come in contact with, such as model output or, more importantly, digital photos, mp3s, and videos?

I think a Wednesday tech talk would be very welcome.

--joshua

---
Joshua Thorp
Redfish Group
624 Agua Fria, Santa Fe, NM




On Apr 26, 2007, at 8:01 AM, Marko A. Rodriguez wrote:


LANL is currently building a compiler and virtual machine that is compliant with the specification in the paper. If Redfish is interested, perhaps in a month or two I could demo this computing paradigm at a Wednesday tech session.


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org