Hi,

What you highlight is definitely on Microsoft's radar, I believe. They had
a great session at Mix08 titled "Advanced SEO for web developers"
(http://sessions.visitmix.com/?selectedSearch=BT03). I also watched another
video where the Silverlight team talked about the SEO work they were doing,
in particular for 2.0, because the distribution method is no longer XAML but
XAPs, which, as we all know, are zip files that can contain hundreds of files,
including XAML and resource files (ripe for spidering).

The biggest point I took from the videos was that ultimately it would be up
to the search engine owners (Google, Live Search, Yahoo Search) to work out
how to get to the data and, more importantly, what constituted important
data. I believe the Live Search team are working hard to ensure our RIAs are
given every advantage in making their way up the search page ranks. It's now
up to the other engines to acknowledge initiatives in this direction.
As for what we designers/developers can do: honestly, I'll keep filling in
the tags and layer names as I always have. As for URL rewriting for internal
navigation within Silverlight (where an external, spider-friendly unique URL
gets you to the correct Silverlight instance), I believe there are things out
there that can do this easily; I've seen one solution, BUT I believe it's too
early to commit to anything when the search engines themselves haven't
acknowledged they are spidering them.

It's going to be an interesting couple of years if it plays out in favour
of RIAs over traditional sites!
 Jose Fajardo
 

----- Original Message -----
From: Jonathan Parker [mailto:[EMAIL PROTECTED]]
Sent: 3/20/2008 12:07:15 AM
To: [email protected]
Subject: [OzSilverlight] Silverlight SEO

        Hi All, 

        

        I've been looking into SEO for Silverlight a bit and found that it's
not all as easy as it is made out to be.

        The issue is that in most apps the content (which is what you want
indexed) is not part of the XAML.

        This would be the case if you just had static content inside the
XAML, but that is rarely the case with Silverlight.

        

        The way that crawlers/spiders crawl content on dynamic sites (e.g.
content from a DB) is by following links.

        

        So although you can have hyperlinks in SL, it's not really the
natural way to expose content, as it's easier to download content via the
network API than to load a whole new dynamically generated XAML page from
the server.

        

        So the question arises: how does Flash handle SEO?

        

        Well, apparently there is some support. For example:
http://www.google.com/search?hl=en&q=filetype%3Aswf


        

        Now, unless these sites are dynamically generating (or, more likely,
just have a custom handler for) swf files, this means swfs are "parsable"
and "indexable", though when it comes to exposing the state of the swf with
deep links, I don't think that works out of the box.

        

        So the question is how to implement SEO in general, and deep links
in particular, for SL.

        

        My current theory is that you could pass the URL/query string to SL
by using the initParams:

        function createSilverlight()
        {
            Silverlight.createObject(
                "plugin.xaml",      // Source property value.
                parentElement,      // DOM reference to hosting DIV tag.
                "myPlugin",         // Unique plug-in ID value.
                {                   // Plug-in properties.
                    width: '600',   // Width of plug-in region, in pixels.
                    height: '200',  // Height of plug-in region, in pixels.
                    version: '1.0'  // Plug-in version to use.
                },
                {},                 // No events defined -- use empty list.
                "param1, param2");  // InitParams property value.
        }
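
        As a rough sketch of my own (not from the Silverlight docs), the
hard-coded "param1, param2" could instead be built from the page's actual
query string, so a deep link like ?page=products&id=42 reaches the plug-in.
The function name and the example parameters (page, id) are hypothetical:

```javascript
// Hypothetical helper: turn the page's query string (e.g. "?page=products&id=42")
// into the comma-separated "key=value" list that InitParams expects.
function queryToInitParams(search) {
    // Strip the leading "?" and split the remainder into key=value pairs.
    var pairs = search.replace(/^\?/, "").split("&");
    var params = [];
    for (var i = 0; i < pairs.length; i++) {
        if (pairs[i].length > 0) {
            // Decode each pair; the result joins as "key1=value1,key2=value2".
            params.push(decodeURIComponent(pairs[i]));
        }
    }
    return params.join(",");
}

// At creation time you would then pass it through, e.g.:
// Silverlight.createObject("plugin.xaml", parentElement, "myPlugin",
//     { width: '600', height: '200', version: '1.0' },
//     {}, queryToInitParams(window.location.search));
```

        That keeps the crawlable URL and the plug-in state in sync without
any server-side work at the point of creation.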

        

        Then the SL app would use these to set the state ("navigate" to
specific content or set the selected index in a list of content).

        This would handle links from a search engine, but you would still
have to generate the URLs and content for the spider to crawl.

        This, I think, is done at the moment using the "JavaScript" test:
i.e. spiders don't execute JS, so you can take advantage of that by
displaying XML/HTML with content and links when JS is "disabled".
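
        As one hedged illustration of that trick (my own sketch, with a
hypothetical function name and made-up page list), you could generate the
plain, crawlable markup that sits inside the hosting DIV on the served page;
spiders that don't run JS index these ordinary links, while browsers running
createSilverlight() would replace the markup with the plug-in:

```javascript
// Hypothetical sketch: build the static fallback markup for the hosting DIV.
// Non-JS spiders see and follow these plain links; JS-enabled browsers never
// do, because the plug-in replaces this content on load.
function buildFallbackLinks(pages) {
    var html = "<ul>";
    for (var i = 0; i < pages.length; i++) {
        // Each entry becomes a normal, spider-followable hyperlink.
        html += '<li><a href="' + pages[i].url + '">' + pages[i].title + '</a></li>';
    }
    return html + "</ul>";
}

// Example (URLs are illustrative only):
// buildFallbackLinks([{ url: "/products?id=42", title: "Products" }]);
```

        Each of those URLs would then need to map back to plug-in state, for
instance via the initParams idea described earlier.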

        

        The first issue doesn't seem so difficult, but the second one I'm
finding hard to get my head around.

        

        Any thoughts? 

        

        Jonathan Parker (MCTS - Web Applications) 

        Email: [EMAIL PROTECTED] 

        Blog: www.jonathanparker.com.au 

        
------------------------------------------------------------------- 
OzSilverlight.com
- to unsubscribe from this list, send a message back to the list with 
'unsubscribe'
as the subject.

Powered by mailenable.com - List managed by www.readify.net



