I'm not sure cajoling would be precluded by external <script src>
includes. In any case, once Caja's ready, gadget features will have to be
carefully protected - or the features themselves might be cajoled, which,
if done sensibly, would let us specify symbols for Caja to safely export,
allowing binding in the gadget space itself. Either way, Caja doesn't seem
mature enough for us to define this binding strategy yet, so punting is a
reasonable option, as you say :)

Client-side perceived latency, your second listed risk, is indeed likely
the bigger deal. I'd wager that several sites would be willing to take the
server-cost hit in exchange for better client-side performance. To what
extent do you think you could make your approach configurable?

Lastly, why does this approach need one request per feature? It would be
more efficient to bundle all features together in a single JS request, as
the Java JS servlet supports, e.g.
<script src="...feature1-version1:feature2-version2:feature3-version3.js"/>
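For illustration, here's a minimal sketch of how such a combined URL could
be split into feature/version pairs server-side - hypothetical names
throughout, and the real servlet's parsing may well differ:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch (not actual Shindig code): split a bundled request
// path like "feature1-version1:feature2-version2:feature3-version3.js"
// into an ordered feature -> version map, one way a JS handler could
// interpret a combined request.
public class FeatureBundle {
    public static Map<String, String> parse(String path) {
        // Strip a trailing ".js" suffix if present.
        if (path.endsWith(".js")) {
            path = path.substring(0, path.length() - 3);
        }
        Map<String, String> features = new LinkedHashMap<>();
        for (String part : path.split(":")) {
            // The version is whatever follows the last '-'; a feature
            // without a version gets a null entry.
            int dash = part.lastIndexOf('-');
            if (dash > 0) {
                features.put(part.substring(0, dash), part.substring(dash + 1));
            } else {
                features.put(part, null);
            }
        }
        return features;
    }
}
```

The handler would then look each feature up, pull in its dependencies, and
concatenate the cached javascript for the whole bundle into one response.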

Granted, the efficiency of this approach depends on the features requested
by all the gadgets on a given page, to facilitate maximum script-sharing;
those optimizations are the sort of thing we could add in the future atop
the rendering and RPC calls.
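One way such script-sharing could work - purely an illustrative sketch
with hypothetical names, not anything Shindig does today - is to compute
the set of features common to every gadget on the page and serve those in
one shared, browser-cached bundle:

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch (hypothetical names): given the feature sets
// requested by each gadget on a page, compute the features every gadget
// needs, so a single shared <script src> bundle can serve all of them.
public class ScriptSharing {
    public static Set<String> sharedBundle(List<Set<String>> gadgetFeatures) {
        if (gadgetFeatures.isEmpty()) {
            return Collections.emptySet();
        }
        // Start from the first gadget's features, then intersect with
        // each remaining gadget's set.
        Set<String> shared = new TreeSet<>(gadgetFeatures.get(0));
        for (Set<String> features : gadgetFeatures) {
            shared.retainAll(features); // keep only features common to all
        }
        return shared;
    }
}
```

Features outside the shared set would still go in per-gadget requests;
whether a plain intersection or some frequency-based grouping wins is
exactly the kind of tuning that can wait.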

--John

On Fri, May 9, 2008 at 1:31 PM, Chris Chabot <[EMAIL PROTECTED]> wrote:

> Hey guys, I could do with some advice.
>
> == the problem ==
>
> In the Java version, the features are all parsed and their javascript
> content is loaded into memory. This works on the Java side, and gives us
> the opportunity to cajole the entire content in one fell swoop, so that
> works great.
>
> Now on the PHP side things are a bit different, since PHP works in a
> process-per-request situation, so parsing the entire features structure
> on each request is a non-starter - it would make any semi-decent
> performance impossible to achieve. Instead I parse the features once and
> cache the entire resulting structure; that's about twice as fast as
> processing the features structure on each request, so it's survivable.
>
> Survivable, but far from optimal: it's a lot of information to read from
> cache, and a lot of memory consumed (since every process has its own
> instance, it adds up), so it puts a good bit of pressure on the server's
> IO.
>
> To add some measurability to this: on a quad-core 3GHz workstation this
> gets me about 420 pages per second with ApacheBench.
>
> == the solution ? ==
>
> The main problem is the overhead of loading all the features' javascript
> on each request, which consumes tons of memory and takes loads of IO, so
> the obvious solution is to not do this anymore :)
>
> So what would work is to make all the javascript external (<script
> src="...">), generate script tags for each feature (and its
> dependencies), and modify the javascript handler (/gadgets/js) to output
> only the javascript for the requested feature.
>
> There are a few possible downsides I can identify though:
>
> More requests, one per feature; however, with an expiration date in the
> far, far future and a cache-busting version param, this should be
> negligible. Besides, the amount of bandwidth used would go down
> tremendously (a few small kb for a gadget instead of 180kb or so of
> inline javascript), so the combination of browser-side caching and the
> savings on bandwidth and transfer time should actually have a positive
> effect, right?
>
> The second risk is that it could add some perceived latency, since
> gadgets.config.init() and the onLoad handlers can't be called until the
> document has completed loading, which includes fetching the javascript
> files and whatever external resources the gadget includes.. this is
> probably the biggest problem with this solution.
>
> And finally, it would probably make cajoling impossible... but that
> doesn't concern me much right now, since we don't have a mechanism for
> PHP Shindig to do that anyhow :)
>
> With that 'small' modification, the pages/second shoots up to 630 - a
> very significant increase - and that's with just a few hacks, not a
> proper implementation of this option.
>
> So the performance gains seem significant enough to consider this, both
> in server load, pages/second, and bandwidth saved.. however, as
> mentioned, there are a few risks involved too.
>
> What do you all reckon is the right solution here? I'd appreciate your
> opinions, since I'm a bit torn between the two options.
>
>        -- Chris
>
>
