Hi All,
I've been looking into SEO for Silverlight a bit and found that it's not as easy as it is made out to be. The issue is that in most apps the content (which is what you want indexed) is not part of the XAML. Static content inside the XAML would be indexable, but that is rarely the case with Silverlight. Crawlers/spiders discover content on dynamic sites (e.g. content from a DB) by following links, and although you can have hyperlinks in SL, that's not really the natural way to expose content, since it's easier to download content via the network API than to load a whole new dynamically generated XAML page from the server.

So the question arises: how does Flash handle SEO? Apparently there is some support, for example: http://www.google.com/search?hl=en&q=filetype%3Aswf

Unless these sites are dynamically generating (or, more likely, just have a custom handler for) SWF files, this means SWFs are "parsable" and "indexable" - though when it comes to exposing the state of the SWF with deep links, I don't think that works out of the box.

So the question is how to implement SEO in general, and deep links in particular, for SL. My current theory is that you could pass the URL/query string to SL by using the InitParams:

function createSilverlight() {
    Silverlight.createObject(
        "plugin.xaml",      // Source property value.
        parentElement,      // DOM reference to hosting DIV tag.
        "myPlugin",         // Unique plug-in ID value.
        {                   // Plug-in properties.
            width: '600',   // Width of rectangular region of plug-in in pixels.
            height: '200',  // Height of rectangular region of plug-in in pixels.
            version: '1.0'  // Plug-in version to use.
        },
        { },                // No events defined -- use empty list.
        "param1, param2");  // InitParams property value.
}

Then the SL app would use these to set the state ("navigate" to specific content or set the selected index in a list of content).
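As a rough sketch of that theory (the function names here are hypothetical, and treating InitParams as a comma-separated list of key=value pairs is just one convention you could pick):

```javascript
// Hypothetical helpers for passing page state into a Silverlight plug-in.
// Assumed convention: InitParams carries comma-separated key=value pairs,
// e.g. "section=news,id=42".

// Host page side: turn the page's query string into an InitParams string.
function queryToInitParams(search) {
    // 'search' is e.g. "?section=news&id=42" (window.location.search).
    if (!search || search.charAt(0) !== '?') return "";
    return search.substring(1).split('&').join(',');
}

// Plug-in side: parse the InitParams string back into a state object.
function parseInitParams(initParams) {
    var state = {};
    if (!initParams) return state;
    var pairs = initParams.split(',');
    for (var i = 0; i < pairs.length; i++) {
        var kv = pairs[i].split('=');
        if (kv.length === 2) state[kv[0]] = kv[1];
    }
    return state;
}
```

The host page would pass queryToInitParams(window.location.search) as the InitParams argument to Silverlight.createObject, and the app's load handler would call parseInitParams on what it receives, then use e.g. state.section / state.id to decide which content to show - so the same URL a spider indexed restores the same app state.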
This would handle links from a search engine, but you would still have to generate the URLs and content for the spider to crawl. I think this is done at the moment using the "JavaScript test": spiders don't execute JS, so you can take advantage of that by serving XML/HTML with the content and links when JS is "disabled" (e.g. inside a noscript block).

The first issue doesn't seem so difficult, but the second one I'm finding hard to get my head around. Any thoughts?

Jonathan Parker (MCTS - Web Applications)
Email: [EMAIL PROTECTED]
Blog: www.jonathanparker.com.au

-------------------------------------------------------------------
OzSilverlight.com - to unsubscribe from this list, send a message back to the list with 'unsubscribe' as the subject.
Powered by mailenable.com - List managed by www.readify.net
