OK, we can reduce the size significantly if we use a function to build up the array of function names, like: [ ... ]
addWithPrefix("cpdf_set_", new Array("action_url",
    "viewer_preferences", "word_spacing"));
This might not be the correct JS syntax, but you get the idea. We only use the cpdf_set_ prefix once instead of repeating it (all these function names start with that prefix).
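The idea above could be sketched roughly as follows. This is just an illustration, not the actual quickref code: the global `functionNames` array and the exact calling convention are assumptions.

```javascript
// Hypothetical global list that the quickref page searches through.
var functionNames = [];

// Append each suffix to the list with the shared prefix prepended,
// so the prefix appears only once in the source file.
function addWithPrefix(prefix, suffixes) {
    for (var i = 0; i < suffixes.length; i++) {
        functionNames.push(prefix + suffixes[i]);
    }
}

addWithPrefix("cpdf_set_", ["action_url", "viewer_preferences", "word_spacing"]);
// functionNames now holds "cpdf_set_action_url",
// "cpdf_set_viewer_preferences" and "cpdf_set_word_spacing".
```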
We need to automate this prefix optimization if we want to generate small files, so some PHP code can decide which functions to add with this prefix trick. This does not mean that PHP will generate the JS on the fly: it can be generated on the rsync server as a static file, which is updated whenever the function list changes.
Would someone be motivated to look into this prefix optimization (or any other size-reduction ideas, for that matter)? I believe we can significantly reduce the size of the JS this way.
I have created some sort of compression. It assigns a number to each word, which says how many of its first letters are shared with the previous word. It's only one hexadecimal character, so it is in the range 0-15. I don't know the official name, but I saw it in ispell dictionary files. I call it 'prefix compression', though that name already stands for another type of compression.
An example to make this clear:
_ abs acos acosh add addaction addcolor addcslashes addentry addfill addshape addslashes addstring aggregate aggregate_methods aggregate_methods_by_list aggregate_methods_by_regexp aggregate_properties aggregate_properties_by_list aggregate_properties_by_regexp
becomes
0_ 0abs 1cos 4h 1dd 3action 3color 4slashes 3entry 3fill 3shape 4lashes 4tring 1ggregate 9_methods Fds_by_list Fds_by_regexp Aproperties Frties_by_list Frties_by_regexp
If someone knows a script that does this kind of compression, or would like to write it, you're welcome.
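A sketch of such a compressor, following the scheme described above (one uppercase hex digit for the shared-prefix length, capped at 15, followed by the remainder of the word); the function name is made up:

```javascript
// Encode a list of words with the 'prefix compression' described
// above: each word becomes a hex digit (0-F) giving the number of
// leading characters it shares with the previous word, capped at 15,
// followed by the rest of the word.
function compressList(words) {
    var out = [];
    var prev = "";
    for (var i = 0; i < words.length; i++) {
        var word = words[i];
        var common = 0;
        // Count shared leading characters, at most 15 (one hex digit).
        while (common < 15 && common < word.length && common < prev.length &&
               word.charAt(common) === prev.charAt(common)) {
            common++;
        }
        out.push(common.toString(16).toUpperCase() + word.substring(common));
        prev = word;
    }
    return out.join(" ");
}
```

For example, `compressList(["_", "abs", "acos", "acosh"])` should give `"0_ 0abs 1cos 4h"`, matching the hand-compressed example above.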
You can find an example with the function names from '_' to 'cyrus_unbind' (with manual compression!) at
[ http://lumumba.luc.ac.be/cheezy/misc/php/prefix_compression.html ]
This gives a compression from 3588 bytes to 2261 bytes (that would reduce the full lists from 42kB to 25-30kB, so the complete file would be about 45kB, or 40kB if we remove all whitespace). The 'uncompression code' is really short, and it doesn't make a noticeable speed difference.
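The matching decoder is indeed short; a sketch (again, the function name is hypothetical, not taken from the actual page):

```javascript
// Rebuild the word list: for each token, reuse the first N characters
// of the previous word, where N is the token's leading hex digit, and
// append the rest of the token.
function decompressList(compressed) {
    var tokens = compressed.split(" ");
    var words = [];
    var prev = "";
    for (var i = 0; i < tokens.length; i++) {
        var common = parseInt(tokens[i].charAt(0), 16);
        var word = prev.substring(0, common) + tokens[i].substring(1);
        words.push(word);
        prev = word;
    }
    return words;
}
```

So `decompressList("0_ 0abs 1cos 4h")` should give back `["_", "abs", "acos", "acosh"]`.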
Last version for reference (still with lots of bugs). Maybe I can do the searching on the compressed text; this might give a speedup.
[ http://lumumba.luc.ac.be/cheezy/misc/php/quickref_bs.html ]
Goba
Jan Fabry
