Maybe in 2.0 what we do is check for the bot and then switch from server-side to client-side state. But even then the problem is that the URLs are not stable; they are still session-relative.
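For what it's worth, the bot check itself is the easy part. A rough sketch against the plain servlet API (the user-agent tokens are only examples, not an exhaustive list):

    import javax.servlet.http.HttpServletRequest;

    // Rough sketch: decide whether the current request comes from a crawler,
    // so the application could in theory switch to client-side state for it.
    public final class CrawlerDetector
    {
        private static final String[] BOT_TOKENS = { "Googlebot", "Slurp", "msnbot" };

        private CrawlerDetector()
        {
        }

        public static boolean isCrawler(HttpServletRequest request)
        {
            String userAgent = request.getHeader("User-Agent");
            if (userAgent == null)
            {
                return false;
            }
            for (int i = 0; i < BOT_TOKENS.length; i++)
            {
                if (userAgent.indexOf(BOT_TOKENS[i]) != -1)
                {
                    return true;
                }
            }
            return false;
        }
    }

The unstable, session-relative URLs are the real problem, not the detection.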
The workaround is simple but a pain to implement. Basically you have to use only bookmarkable pages and bookmarkable links for the pages you want indexed. This means you are not using Wicket's session handling and are instead encoding state into the URL yourself, just like you would with WebWork (see the sketch below). In 2.0 we have stateless forms, so you can also perform POSTs. However, even if you do this, Wicket will still create a session upon the first request. This may or may not be resolved in the 2.0 timeframe. jsessionid is a well-known variable and I'm sure Googlebot is smart enough to know what it is; if not, well, then you cannot use Wicket if you want your site to be crawled by Google.
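To make "bookmarkable pages and bookmarkable links" a bit more concrete, a minimal sketch using 1.x-era class names; the page, component ids and parameter name are made up:

    import wicket.PageParameters;
    import wicket.markup.html.WebPage;
    import wicket.markup.html.basic.Label;

    // Everything this page needs comes from the URL, nothing from the session,
    // so the URL a crawler sees stays stable.
    public class ProductPage extends WebPage
    {
        // The PageParameters constructor is what makes the page bookmarkable:
        // Wicket can recreate it from the URL alone.
        public ProductPage(PageParameters parameters)
        {
            String productId = parameters.getString("productId");
            add(new Label("name", "Product " + productId));
        }
    }

and you point to it with a BookmarkablePageLink instead of a plain Link:

    PageParameters params = new PageParameters();
    params.put("productId", "42");
    add(new BookmarkablePageLink("product", ProductPage.class, params));

Note the parameter is deliberately not called "id", given the GoogleGuy quote below about '&id='.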
Now, isn't there a file that you can serve to the Google bot to tell it how to crawl your site? Which URLs to hit, etc.? If that is true, then you can create a bookmarkable "gateway" page to the rest of your application (a mounting sketch follows below).
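If you go that route, it also helps to mount the gateway page at a fixed path so the URL you hand the bot is a plain, stable one. Roughly like this; the mounting method name is from the 1.x API and may differ between versions, and HomePage is just a placeholder:

    import wicket.protocol.http.WebApplication;

    public class MyApplication extends WebApplication
    {
        protected void init()
        {
            // Serve the bookmarkable gateway page at a fixed, crawler-friendly
            // path instead of the default interface-style URL.
            mountBookmarkablePage("/products", ProductPage.class);
        }

        public Class getHomePage()
        {
            return HomePage.class;
        }
    }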
-Igor
On 6/7/06, John Patterson <[EMAIL PROTECTED]> wrote:
On 7 Jun 2006, at 17:32, Eelco Hillenius wrote:

> Well, people can still argue whether it is in Google's way or not. See

I have read quotes on Matt Cutts' blog that session ids should be avoided. Also quotes from GoogleGuy such as these:

"Google can do some smart stuff looking for duplicates, and sometimes inferring about the url parameters, but in general it's best to play it safe and avoid session-ids whenever you can."

"I've been aching for a long time to mention somewhere official that sites shouldn't use '&id=' as a parameter if they want maximal Googlebot crawlage, for example. So many sites use '&id=' with session IDs that Googlebot usually avoids urls with that parameter."

> However, if you plan to code your whole application like that, you should consider whether Wicket is the right framework for you, as having stateful components is the big idea of the framework.

Are you suggesting that if you want Google to crawl your site you should use another framework? I was hoping for a suggested workaround! Currently I do use WebWork for the pages in my site that need to be well indexed by search engines, but I would love to use Wicket for the whole thing.

Would it not be possible to have some kind of method like Session.shouldPersist() that defaults to true but could be overridden by the developer?
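Something along these lines, purely hypothetical since no such hook exists in Wicket today (the WebSession constructor signature here is guessed for the 1.x API):

    import wicket.protocol.http.WebApplication;
    import wicket.protocol.http.WebSession;

    // Hypothetical API only -- shouldPersist() is not a real Wicket method.
    // This is just the shape of the suggestion: a hook the framework would
    // consult before storing the session, defaulting to true.
    public class MySession extends WebSession
    {
        public MySession(WebApplication application)
        {
            super(application);
        }

        // Overridden by the application; the framework default would be true.
        public boolean shouldPersist()
        {
            return false; // e.g. false for pages that need to be crawled
        }
    }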
_______________________________________________
Wicket-user mailing list
Wicket-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/wicket-user