I have done some simple profiling -- see the second-to-last section of
chapter 3 of my thesis:
http://people.csail.mit.edu/dfhuynh/research/thesis/thesis.html
In general, DOM generation is costly, so avoid facets with many
low-count values and views that display all items at once. We could try to
optimize the DOM generation code, but I'm worried that would make it less
maintainable.
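To get a concrete feel for why, here is a tiny stand-alone snippet (just
an illustration, not Exhibit code) that times raw DOM construction in the
browser; the cost scales with the number of elements created, which is
exactly what a facet with thousands of values, or a view rendering every
item, forces the browser to do:

  // Illustration only, not Exhibit code: time how long the browser takes
  // to build and attach n <div> elements.
  function timeDomBuild(n) {
    var container = document.createElement("div");
    var start = new Date().getTime();
    for (var i = 0; i < n; i++) {
      var div = document.createElement("div");
      div.appendChild(document.createTextNode("value " + i));
      container.appendChild(div);
    }
    document.body.appendChild(container);
    var elapsed = new Date().getTime() - start;
    document.body.removeChild(container);
    return elapsed; // milliseconds
  }

  // Compare, e.g., timeDomBuild(100) against timeDomBuild(5000).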
Perhaps the easiest first step toward improving perceived performance
would be to support pagination (next page/previous page buttons) in the
Tile view. That is not trivial, though, when the view uses grouping,
because a group can span pages.
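Purely as a sketch (none of these names are from Exhibit's code), the page
slicing itself could look like the function below: paginate the already
sorted, already grouped item list, and mark any page whose first item
belongs to the same group as the last item of the previous page, so that
its group header can be repeated with a "continued" note:

  // Sketch only: items are assumed sorted so that members of a group are
  // adjacent; getGroup(item) returns the item's group key.
  function paginate(items, getGroup, pageSize) {
    var pages = [];
    for (var i = 0; i < items.length; i += pageSize) {
      pages.push({
        items: items.slice(i, i + pageSize),
        // true when this page starts in the middle of a group, i.e. the
        // group header should be rendered again as "..., continued"
        continuesGroup: i > 0 && getGroup(items[i - 1]) === getGroup(items[i])
      });
    }
    return pages;
  }

The rendering code would then also have to keep nested group headers in
sync across pages, which is where the real complexity lies.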
As for the database, loading data from Google Spreadsheets incurs more
cost than loading JSON files.
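If load time matters, the simplest workaround is to export the spreadsheet
once and serve it as a static JSON file next to the page. Roughly (from
memory -- please check the published tutorial for the exact attributes),
the page links to the data like this:

  <link rel="exhibit/data" type="application/json" href="my-data.json" />

and my-data.json holds the items in Exhibit's JSON format, e.g.:

  {
    "items": [
      { "label": "Item 1", "type": "Publication", "year": "2006" },
      { "label": "Item 2", "type": "Publication", "year": "2007" }
    ]
  }

Serving that file gzip-compressed, as Axel suggests below, should help
further.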
I have already started work on making Exhibit scale better by
offloading most of the computation to a server-side component, but it
will be months before I have anything usable. In the meantime, if
someone feels compelled to do more profiling, please do! :-) We really,
really need all the help we can get!
David
Axel Hecht wrote:
> Sounds like this would call for some real analysis.
>
> Stuff I'd consider doing:
>
> Create a matrix with, say, three different test cases, going from dead
> simple graphs to somewhat involved graphs, run them with varying
> amounts of data (use some script to generate those), and time it.
>
> I'd compare regular JSON against gzip transfer encoding, too; that
> usually bangs the hell out of load time.
>
> The really interesting part is to actually determine how Exhibit scales,
> and whether that scaling is inherent or whether there's a bug at some
> point. I suspect that David did some Firebug profiling already, but
> with scalable tests, that might be more helpful.
>
> Axel, so not volunteering ;-)
_______________________________________________
General mailing list
[email protected]
http://simile.mit.edu/mailman/listinfo/general