As per a discussion that has been taking place offline,
and really needs to move to this group,
js has to immediately, within its innerHTML setter,
parse the new html text and add the new objects to the js tree,
while at the same time, or not long thereafter,
adding the tree of nodes to our tree for rendering.
Both processes now need tidy5, html-tidy.c,
and at least half of the logic in render.c.
With this new revelation,
how much easier would all this be if we hadn't separated edbrowse-js into 
another process!
As Fagin says in Oliver,
I think I'd better think it out again.
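
To make the problem concrete, here is a minimal sketch of what such a
setter entails. The tidy5 calls are real; everything else
(build_js_objects, graft_into_render_tree, the jsnode argument)
is a hypothetical stand-in for our actual routines,
just to show the shape of the work:

    #include <tidy.h>

    /* hypothetical stand-ins for the two halves of the work;
     * the real routines would walk the tidy tree node by node */
    static void build_js_objects(void *jsnode, TidyNode n) { /* ... */ }
    static void graft_into_render_tree(void *jsnode, TidyNode n) { /* ... */ }

    static void setter_innerHTML(void *jsnode, const char *html)
    {
        TidyDoc tdoc = tidyCreate();
        tidyParseString(tdoc, html);    /* tidy5 parses the new html text */
        TidyNode root = tidyGetRoot(tdoc);
        build_js_objects(jsnode, root);         /* new objects into the js tree */
        graft_into_render_tree(jsnode, root);   /* same nodes into our render tree */
        tidyRelease(tdoc);
    }

All of that has to happen synchronously inside the setter,
so the caller's next line of js sees the updated tree.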

Don't get me wrong - encapsulating js into a separate entity of some kind,
with its own source file, and the mozilla details hidden in that source file,
and a communication api to and from the js layer,
was absolutely the right thing to do. Absolutely!
Thank you Adam for directing us down this path.
But we did the same for tidy without making another process.
Now if they were once again the same process,
possibly different threads of that process,
we would gain several things:

1. One less hassle with the windows port: threads are standard
and portable, while spinning off a process connected by pipes is not.

2. js innerHTML and document.write can build js objects and add them
to our tree of nodes immediately, in the setter, as is supposed to happen,
all in one go, all at the same time (see the sketch after this list).

3. No need to pass the html, or the resulting subtree,
back through the pipes to edbrowse for incorporation.

4. Better performance (a minor consideration).

5. All of edbrowse is once again a c++ program (a minor nuisance).

6. A seg fault on the js side would once again bring down all of edbrowse.
This was one of our considerations,
but I would hope those seg faults are becoming infrequent, and I think they are.
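
For what it's worth, here is a rough sketch of the two thread shape,
assuming pthreads and a single mutex around the shared tree.
The names are made up, and real code would want finer grained locking:

    #include <pthread.h>

    struct htmlNode;                        /* stand-in for our tree of nodes */
    static struct htmlNode *tree_root;      /* the one tree, shared by both threads */
    static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

    /* the js engine runs in its own thread; a DOM mutation locks the
     * tree and edits it in place -- no html travels over a pipe */
    static void *js_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&tree_lock);
        /* ... build js objects, splice nodes into tree_root ... */
        pthread_mutex_unlock(&tree_lock);
        return NULL;
    }

    int start_js_thread(void)
    {
        pthread_t tid;
        return pthread_create(&tid, NULL, js_thread, NULL);
    }

The same shape maps onto native windows threads,
which is the point of item 1.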

If we really must keep them separate processes, could we use shared memory
so both can work on the one common tree of nodes?
Is shmget portable to windows?
Doesn't shmget require a block of memory of a fixed size?
That's how I remember it, and that's how the man page reads.
That wouldn't work well with our model;
I want to be able to dynamically grow the tree as big as the web page is,
without compile time constraints, or even a run time commitment to a size,
as we have to make for instance with mozilla's js pool.
I mean, we could set a pool size at run time
for the trees of html nodes managed by edbrowse;
it wouldn't be a show stopper, just not my first preference.
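
For reference, here is the shmget shape I'm describing,
with the size committed at creation time;
the 16 megabyte pool is just an arbitrary example:

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        size_t poolsize = 16 * 1024 * 1024;     /* fixed at creation */
        int shmid = shmget(IPC_PRIVATE, poolsize, IPC_CREAT | 0600);
        if (shmid < 0) {
            perror("shmget");
            return 1;
        }
        void *pool = shmat(shmid, NULL, 0);     /* each process attaches */
        if (pool == (void *)-1) {
            perror("shmat");
            return 1;
        }
        /* the tree of nodes would have to be carved out of this pool;
         * growing past poolsize means a new segment and reattaching */
        shmdt(pool);
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }

The tree would also have to be built with offsets rather than pointers,
since the segment can attach at a different address in each process.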

After the last flurry of work settles down and stabilizes
(and this has been all good stuff,
all moving us forward in the right direction),
we need to discuss and plan and design
before making the next big change.
We either need to move some html / render functionality into both processes,
with subtree data coming back through pipes,
or combine things back into one edbrowse process,
or find a shared memory solution.


Karl Dahlke