Did you implement the CrawlFilter for your GWTP app?
Do you have these lines in your Guice module?
bindConstant().annotatedWith(ServiceKey.class).to("123456");
bindConstant().annotatedWith(ServiceUrl.class).to("http://crawlservice.appspot.com/");
filter("/*").through(CrawlFilter.class);
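For context, here is a minimal sketch of the servlet module those three lines would live in. This assumes GWTP's crawler-support classes (`CrawlFilter`, `@ServiceKey`, `@ServiceUrl`); the exact package names may differ between GWTP versions, and the key/URL values are just the placeholders from the snippet above:

```java
// Hypothetical sketch: a Guice ServletModule wiring up GWTP's CrawlFilter.
// Package names for the GWTP crawler classes are assumptions and may
// vary by GWTP version -- check the version you depend on.
import com.google.inject.servlet.ServletModule;
import com.gwtplatform.crawler.server.CrawlFilter;
import com.gwtplatform.crawler.server.ServiceKey;
import com.gwtplatform.crawler.server.ServiceUrl;

public class CrawlFilterModule extends ServletModule {
  @Override
  protected void configureServlets() {
    // Shared key expected by the external crawl service (placeholder value).
    bindConstant().annotatedWith(ServiceKey.class).to("123456");
    // Endpoint of the crawl service that renders the JS page for bots.
    bindConstant().annotatedWith(ServiceUrl.class).to("http://crawlservice.appspot.com/");
    // Send every request through the filter; it should only intercept
    // requests carrying the _escaped_fragment_ query parameter.
    filter("/*").through(CrawlFilter.class);
  }
}
```

You would then install this module alongside your other modules in the `GuiceServletContextListener` that bootstraps your injector.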
On Monday, May 19, 2014 10:14:21 PM UTC+10, Joseph Lust wrote:
>
> Tom,
>
> To assuage your healthy skepticism, get a Google Webmaster Tools account,
> add your site, and then use the *Crawl > Fetch as Google* option. I am
> able to feed it my GWT home page and sub pages (based on history tokens in
> the URL) and the returned page is the page as expected, filled with the
> expected content. Check it out for yourself.
>
> As to cloaking, I'm not sure what Google does. The obvious step would be
> to intermittently visit the page with a user agent / IP block that was not
> identifiable as GoogleBot. Then correlate those results with the results
> received when you really were the declared GoogleBot, and if there was a
> notable discrepancy, flag and blacklist the site. However, this would mean
> agent misrepresentation and potentially going against the robots.txt, which
> would potentially be *evil* and not something Google would do openly.
> Probably it would make more sense for higher traffic sites that are flagged
> for a higher risk of spam/malware by other heuristics.
>
> Sincerely,
> Joseph
>
--
You received this message because you are subscribed to the Google Groups
"Google Web Toolkit" group.