The last push doesn't change how edbrowse operates, or it's not supposed to,
but it does lay some groundwork for something I wanted to try,
something that mostly doesn't work yet, but can be fixed.

There's a global variable down_abg, for automatic background:
if true, edbrowse downloads the javascript files in parallel in the background,
then waits for each in turn to finish downloading before executing it.
This is what modern browsers do to save time.
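
In outline the idea looks like this; the helper names here are invented
for illustration, they are not the actual bg functions:

/* Sketch only.  These helpers are hypothetical, standing in for
 * edbrowse's real bg machinery; they are not actual functions. */
struct fetch *start_background_fetch(const char *url);
void wait_for_fetch(struct fetch *f);
const char *fetch_data(struct fetch *f);
void run_script(const char *js);

static void run_scripts_abg(const char *js_url[], int njs)
{
    struct fetch *f[njs];
    int i;

    /* kick off every script download at once */
    for (i = 0; i < njs; ++i)
        f[i] = start_background_fetch(js_url[i]);

    /* scripts must still execute in document order, so take each in
     * turn: wait for its download, run it, move on; the later
     * downloads keep streaming in while the earlier scripts run */
    for (i = 0; i < njs; ++i) {
        wait_for_fetch(f[i]);
        run_script(fetch_data(f[i]));
    }
}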
It builds entirely on the bg system for downloading large files in the
background, so it wasn't a huge amount of code, and in my early tests it works.
I ran some tests against my local web server and it all seems good,
pulling from the cache when it can, putting back into the cache when it can,
just as we do in the foreground, and so on.
But then I tried it on nasa.gov and it failed.
The reason is, it's a secure site.
The failure is connected with curl and ssl.
So I tried something simpler.
I found a secure file that triggers the download-in-background option, and it
also fails.

bg+
e https://script.crazyegg.com/pages/scripts/0070/1109.js
  download 1109.js:
hit return
  SSL connect error in libcurl: A PKCS #11 module returned CKR_DEVICE_ERROR, 
indicating that a problem has occurred with the token or slot.

See if this replicates for you.
Could also be that I use gnutls instead of openssl, or whatever the two rival
ssl systems are called.
That causes me some other issues as well.
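
For what it's worth, libcurl will tell you which TLS library it was built
against; here's a quick standalone check, all standard curl API, nothing
edbrowse-specific:

#include <stdio.h>
#include <curl/curl.h>

/* prints the TLS backend libcurl was built with,
 * e.g. "GnuTLS/3.6.13" or "OpenSSL/1.1.1" */
int main(void)
{
    curl_version_info_data *v = curl_version_info(CURLVERSION_NOW);
    printf("%s\n", v->ssl_version ? v->ssl_version : "built without ssl");
    return 0;
}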

I wrote bg like ten years ago; how did we never see this bug?
Because of the evolution of the internet.
Ecommerce sites were https, but normal information sites were http, or even ftp.
Now almost all of them are https.
You can't download a recipe for apple pie without https.
So bg likely never worked with https and we didn't know about it.
I can't download a personal file in the background, and edbrowse can't download
js or css files in the background, if the site is secure.
And why?
Almost certainly because I fork a process to do it.
That was the easy way at several levels, but not the best way, and not the way
curl is designed to be used.
Curl is written to run under threads, and under threads I'm pretty sure all of
this would work.
I'm not sure I have the energy to do it though.
If someone wanted to convert my background machinery from fork to threads, that 
would be great!
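
For anyone tempted, here is a rough sketch of the shape it might take,
one thread per download through curl's easy interface; the URL is the
crazyegg one from above, and none of this is actual edbrowse code:

#include <stdio.h>
#include <pthread.h>
#include <curl/curl.h>

/* One download per thread, via curl's easy interface.  libcurl is
 * documented thread safe, provided each easy handle is used by only
 * one thread, and curl_global_init runs before any threads start. */
static void *fetch_thread(void *arg)
{
    CURL *h = curl_easy_init();
    if (h) {
        curl_easy_setopt(h, CURLOPT_URL, (char *)arg);
        /* a real version would set CURLOPT_WRITEFUNCTION to
         * capture the data; the default writes it to stdout */
        curl_easy_perform(h);
        curl_easy_cleanup(h);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    curl_global_init(CURL_GLOBAL_ALL); /* once, before any threads */
    pthread_create(&t, NULL, fetch_thread,
        "https://script.crazyegg.com/pages/scripts/0070/1109.js");
    /* foreground work, rendering the page, could happen here */
    pthread_join(t, NULL);
    curl_global_cleanup();
    return 0;
}

The threads share the process's ssl state, instead of inheriting a forked
copy of it, which is presumably what the PKCS #11 module objects to.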
Then you could uncomment the bgjs commands in buffer.c,
and we could turn the feature on and off and see whether it actually saves
time. After all that work it might not, but it might.
And it has the follow-on advantage of postponing the async scripts until they 
load, whenever that is,
and letting the user look at the page in the meantime.
I've thought about how to do that, maybe with virtual timers, but one step at a 
time.

I could also load css in the background, but that is yet more complex code for
almost no benefit.
All the css has to load and be processed before we can run our first line of
javascript, so I would put the fetch in the background, then immediately stop
and wait for it to finish, and what's the point of that?

Karl Dahlke
