(Note: this portion of the reply was copied to the mailing list.) The current plan is to replace hyperdoc and graphics with HTML5.
The browser is now a "universal front end". It is possible to do 2D/3D graphics in a canvas, hyperlink directly into the source code, and execute interpreter commands with the results displayed inline in the page (this code exists).

For hyperdoc, there is an implementation started (Volume 11) which, in theory, can talk to the interpreter. This used to work, but something recently broke; when it worked, you could type Axiom input and see the results. You're welcome to look into it and ask questions. If you want to try the browser-based hyperdoc, look at Volume 11 and follow the instructions.

For graphics, we need to define the exact API of the graphics subsystem and then implement that API in the browser in Javascript. Drawing would occur on a canvas; a rough sketch of what that browser-side code might look like is appended at the end of this note. If you're interested, try to reverse engineer the API used by the graphics subsystem (see Volume 8). The file format and the call graph of the C code are documented; what remains is to discover and document the API details.

Websockets would let us open multiple streams to the interpreter, each one in a separate namespace in the interpreter. I tried to use websockets (hunchentoot-based) but I'm missing something about the protocol, so the connection fails. A sketch of the browser side of such a connection is also appended below.

I have an example I've been working on. Google SHRDLU, Terry Winograd's PhD thesis work. I have created 3D objects that I can move, and an input field that tries to connect back over websockets. The goal is to use Axiom to interact with the SHRDLU scene: computing minimum paths and centers of gravity (for balancing stacked blocks), and creating new 3D objects (like a parabolic bowl) that can interact with other objects (e.g. a ball in the bowl). A toy sketch of drawing one such block on a canvas is appended as well.
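Here is a rough sketch of the browser side of the graphics idea. Everything in it is a guess: the real entry points have to come out of reverse engineering the graphics subsystem (Volume 8), so the function name, the canvas id, and the point format below are only placeholders for whatever the documented API turns out to be.

  <canvas id="axiom-graph" width="400" height="300"></canvas>
  <script>
  // hypothetical draw routine: takes a list of [x,y] points in canvas
  // coordinates and draws them as a connected curve
  function drawCurve(points) {
    var canvas = document.getElementById("axiom-graph");
    var ctx = canvas.getContext("2d");
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    for (var i = 0; i < points.length; i++) {
      var px = points[i][0];
      var py = canvas.height - points[i][1];   // put the origin at the lower left
      if (i === 0) ctx.moveTo(px, py); else ctx.lineTo(px, py);
    }
    ctx.stroke();
  }

  // sample data: a parabola scaled to fit the canvas
  var pts = [];
  for (var x = 0; x <= 400; x += 10) {
    pts.push([x, 0.005 * (x - 200) * (x - 200)]);
  }
  drawCurve(pts);
  </script>

The real graphics subsystem will need more than this (viewports, colors, 3D), but the canvas 2D context is enough to show that the drawing end of the pipeline is straightforward once the API is known.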
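And here is a guess at the browser side of a websocket stream to the interpreter, for when the hunchentoot handshake problem is sorted out. The URL, the namespace query parameter, and the idea of one connection per namespace are assumptions on my part, not a defined protocol.

  // one websocket per interpreter namespace (assumed URL and parameter)
  var sock = new WebSocket("ws://localhost:8080/interp?namespace=shrdlu");

  sock.onopen = function () {
    // send a command, e.g. one typed into the input field
    sock.send("integrate(x^2, x)");
  };

  sock.onmessage = function (event) {
    // display the interpreter's reply inline in the page
    var out = document.createElement("pre");
    out.textContent = event.data;
    document.body.appendChild(out);
  };

  sock.onerror = function (err) {
    console.log("websocket error", err);
  };

Opening several of these, each with a different namespace value, is how I imagine the "multiple streams" idea would look from the browser side.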
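Finally, a toy version of one SHRDLU block, drawn on a 2D canvas with a simple isometric projection. The actual demo has movable objects and the input field wired to websockets; this only shows the projection and drawing step, and the canvas id ("shrdlu") and scaling are made up.

  // eight corners of a unit cube and the twelve edges connecting them
  var vertices = [
    [0,0,0],[1,0,0],[1,1,0],[0,1,0],
    [0,0,1],[1,0,1],[1,1,1],[0,1,1]
  ];
  var edges = [
    [0,1],[1,2],[2,3],[3,0],
    [4,5],[5,6],[6,7],[7,4],
    [0,4],[1,5],[2,6],[3,7]
  ];

  // simple isometric-style projection onto the 2D canvas
  function project(v) {
    var s = 60;
    var x = 150 + s * (v[0] - v[1]) * Math.cos(Math.PI / 6);
    var y = 200 - s * ((v[0] + v[1]) * Math.sin(Math.PI / 6) + v[2]);
    return [x, y];
  }

  // assumes <canvas id="shrdlu" width="300" height="300"> in the page
  var ctx = document.getElementById("shrdlu").getContext("2d");
  edges.forEach(function (e) {
    var a = project(vertices[e[0]]);
    var b = project(vertices[e[1]]);
    ctx.beginPath();
    ctx.moveTo(a[0], a[1]);
    ctx.lineTo(b[0], b[1]);
    ctx.stroke();
  });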
Tim Daly