Hello Nick,
"BCS" <n...@anon.com> wrote in message
news:a6268ff14f568ccd8dfcc495...@news.digitalmars.com...
Hello Nick,
I would disagree that "JS == high-functionality and non-JS ==
low-functionality" is true in the general sense. I'm not saying that
the reverse is necessarily true, but I think that's a false
dichotomy.
I see three cases:
-JS and non-JS look and feel the same to the user (do non-JS)
-JS can be tacked onto the non-JS version, and people without scripts just
lose some non-critical functionality.
-Some functionality can be done via JS or non-JS, but the two work in
completely different ways. (Like voting: do you reload the page or
not?)
The first two cases are uninteresting and I was ignoring them. In
the third case, if you assume the devs aren't stupid, the JS version
has to have higher functionality than the non-JS one.
Hmm, I guess we see things differently here. To me, that voting
example seems a prime candidate for case #2.
For it to be case #2, the parts that work without JS would have to work the
same way both with and without JS.
Make it work with a
normal non-JS page request, and then toss in a JS override that,
instead of being a new page request, just sends a little message to
the server which, in turn, does exactly what it normally would, but
without rendering and sending a new page. I.e., just a minor variation
on what's normally done.
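In browser terms, that override might look something like the following sketch (the function and field names are made up for illustration; in a page, the injected "post" would wrap the standard fetch()):

```javascript
// Sketch of a JS override on a plain HTML form. Without scripts the
// form does a normal POST; with scripts, this handler cancels the
// page reload and sends the same data as a background message.
// "post" is injected so the handler can be exercised outside a
// browser; real code would also pull the vote out of the form's
// fields (e.g. via FormData) rather than a made-up "vote" property.
function makeVoteHandler(post) {
  return function handleSubmit(event) {
    event.preventDefault();                  // stop the full page reload
    const form = event.target;
    // Same URL and data the plain form would have sent:
    return post(form.action, { vote: form.vote });
  };
}

// Rough wiring in a real page:
//   document.querySelector('#vote-form').addEventListener('submit',
//     makeVoteHandler((url, data) =>
//       fetch(url, { method: 'POST', body: JSON.stringify(data) })));
```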
Let's use that as a test case:
Pure JS: the onClick action triggers an AJAX POST call that records the vote
and reports success or failure.
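As a sketch, the pure-JS side just has to build and fire one background request (the endpoint path here is hypothetical):

```javascript
// Sketch: build the background request the onClick would send in
// the pure-JS case. The "/vote/<id>" endpoint is a made-up name.
function voteRequest(itemId, vote) {
  return {
    url: '/vote/' + itemId,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ vote: vote }),
  };
}

// In a page, roughly:
//   const r = voteRequest(42, 'up');
//   fetch(r.url, r).then(res => reportResult(res.ok));
```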
Pure non-JS version: the vote button is handled by the same server-side code
as above, but now the HTML includes a form element, and extra information is
packaged along with the POST so the server can give an HTTP 303 back to the
original page (plus an anchor). Somehow (an extra query parameter tacked
on the end?) the system keeps track of whether it should report success or
failure. (All doable, but a little more complex.)
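The server side of that could be sketched like this (all names hypothetical; the point is that the business logic is shared and only the 303-plus-query-parameter dance is specific to the non-JS path):

```javascript
// Sketch of the non-JS server path. recordVote() stands in for the
// shared business logic, which neither knows nor cares that a web
// page is involved. The handler answers a plain form POST with a
// 303 back to the original page, tacking on a query parameter so
// the next render can report success or failure, plus an anchor to
// jump back to the item that was voted on.
function handleVotePost(req, recordVote) {
  const ok = recordVote(req.itemId, req.vote);   // shared logic
  return {
    status: 303,
    location: req.referer
      + '?voted=' + (ok ? 'ok' : 'failed')
      + '#item-' + req.itemId,
  };
}
```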
Either/or: the server side looks the same, but the HTML render does the
non-JS version and also sets the onClick. You also need to include some JS
code that disables the form's normal submit, to prevent the page from reloading.
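So the either/or render might look roughly like this (markup, class names, and endpoint all made up):

```javascript
// Sketch of the either/or render: the server emits the plain form,
// and a site-wide script can later find forms of this class and
// take over their submit when scripts are enabled.
function renderVoteForm(itemId) {
  return '<form class="vote" action="/vote/' + itemId + '" method="post">'
       + '<button name="vote" value="up">+1</button>'
       + '</form>';
}

// The site-wide script would then do roughly:
//   for each form.vote: addEventListener('submit', handler)
// where the handler cancels the reload and POSTs in the background.
```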
Option three has all the complexity of 1 and 2, with very little overlap
between them, plus some more on top. As for that overlap, I think all of it
should be either library code (the URL decoder) or code that isn't even aware
it's dealing with a web page (the business logic).
Am I making that more complex than it needs to be?
Plus, remember about ten or so years ago when there was a lot of
discussion in the web dev world about page-loading times, and the
general consensus was that anything that took longer than a few
seconds to load and render was bad from a usability standpoint?
Well, now it's a regular occurrence for these JS sites to take much
longer than that, and then still act sluggish once you're there,
and that's despite increases in computational power (even for
low-tech me), despite the fact that we've gone from mostly dial-up
to mostly broadband, despite the *claims* that it's faster
(it frequently isn't in actual practice), despite the fact that *we
knew better* 10+ years ago.
Up front times vs. background times <copy arguments from above :)>
Yes, but from a usability standpoint (especially for the average Joe),
that distinction doesn't make one bit of difference: It's still a
delay before being able to deal with the page.
Foreground stuff has to load NOW. Background stuff has to load before the
user gets around to using it. The second deadline can be orders of magnitude
later than the first. Everything I can see without clicking or scrolling
should be loaded in the foreground. A lot of the rest can (can!=should) be
done in the background.
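That scheduling rule could be sketched as a simple partition (the shape of the "resource" objects is made up): anything visible without clicking or scrolling goes in the foreground queue, and the rest is eligible (eligible != required) for deferral.

```javascript
// Sketch: split a page's resources into up-front loads and loads
// that can be deferred until after first render. "aboveTheFold" is
// a made-up flag meaning "visible without clicking or scrolling".
function splitLoads(resources) {
  const foreground = [];
  const background = [];
  for (const r of resources) {
    (r.aboveTheFold ? foreground : background).push(r);
  }
  return { foreground, background };
}
```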
--
... <IXOYE><