As one who started out just using Timeline and then moved to the Exhibit application, I can say that I was a little disappointed that Exhibit didn't read XML. That was more because I had to change the application that generated my XML document over to output JSON than anything else, but I do still think it would be nice to have a choice of input formats.
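A conversion like the one Scott describes might look roughly like this (a minimal sketch; the record fields here are made up, and Exhibit's exact schema may differ, though its data files are generally a single object carrying an "items" array of labeled objects):

```javascript
// Hypothetical records that used to be serialized as XML.
const records = [
  { label: "First item", type: "Example" },
  { label: "Second item", type: "Example" }
];

// Wrap the records the way an Exhibit JSON data file expects:
// a top-level object with an "items" array.
const exhibitData = { items: records };

// Serialize for writing out as the .json file Exhibit will load.
const json = JSON.stringify(exhibitData, null, 2);
```

The point is that the change is mostly in the serialization step, not in the application's data model, which is why the switch from XML output was annoying rather than hard.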
Scott

On Jan 25, 2007, at 9:15 AM, derek | idea company wrote:

> Hi David and all,
> What if, instead of using JSON files, Exhibit used standard XML/RSS
> feeds? Google reads those when searching for them, and you can also
> have your exhibit plugged by the ever popular FeedBurner and other
> fancy RSS readers. There might also be easier ways to integrate with
> other software, or to upgrade to a database if the dataset becomes
> larger than a flat file can handle.
> Cheers,
> Derek
>
> Derek Kinsman
> The Idea Company
> New Media Designer
> http://www.ideacompany.ca/
> http://boring.ambitiouslemon.com/
> 1.416.371.5652
>
> David Huynh wrote:
>> Hi all,
>>
>> Exhibit suffers from the same Achilles heel as other Ajax applications:
>> the dynamic content that gets inserted on the fly is totally invisible
>> to Google. My whole web site is now invisible to Google :-) Perhaps
>> this is the biggest impediment to adoption.
>>
>> Johan has added some code that allows Exhibit to load data from HTML
>> tables. This lets your data be shown even if Javascript is disabled,
>> and lets your data be visible to Google. However, HTML tables are
>> clunky for storing data.
>>
>> There is another alternative: inserting your data encoded as JSON
>> between <pre>...</pre> tags and then getting Exhibit to grab that text
>> out and eval(...) it. If Javascript is disabled, the data is displayed
>> as raw JSON--not so pretty.
>>
>> However, if the data is fed from another source, such as Google
>> Spreadsheets, then neither of these approaches can be used.
>>
>> We've also entertained the idea of using the browser's Save Page As...
>> feature to snapshot a rendered exhibit and then using that as the
>> public page. Exhibit would still get loaded into that page, but it
>> would initially not change the DOM until some user action required it
>> to. However, the browser's Save Page As... feature doesn't do a very
>> good job of saving the generated DOM.
>>
>> So, I think anything we do would look pretty much like a hack and
>> work for only some cases. We also risk getting blacklisted by
>> Google's crawler. So, what do we do? Is it possible to ask Google to
>> scrape those exhibit-data links in the heads of the pages? And how do
>> we do that?
>>
>> David

_______________________________________________
General mailing list
[email protected]
http://simile.mit.edu/mailman/listinfo/general
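For reference, the <pre>-embedded JSON approach David describes can be sketched roughly like this (a minimal sketch; the element id and data are illustrative, not Exhibit's actual code, and JSON.parse is used here as the safer stand-in for the eval(...) step he mentions):

```javascript
// In a real page the data would sit in the markup, e.g.
//   <pre id="exhibit-data" style="display: none">{ "items": [...] }</pre>
// and be read back with something like:
//   var preText = document.getElementById("exhibit-data").textContent;
// Here we fake that step with a string so the sketch is self-contained.
const preText = '{ "items": [ { "label": "First item" } ] }';

// eval(preText) would work too, but it executes arbitrary script;
// JSON.parse only accepts data. (It wasn't in every 2007 browser,
// which is presumably why eval was the suggestion at the time.)
const data = JSON.parse(preText);
```

With Javascript disabled, the visitor (and Google's crawler) simply sees the raw JSON text inside the <pre>, which is the "not so pretty" fallback the thread refers to.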
