It looks like your routes.py file is mixing the pattern-based and 
parameter-based routing systems, which isn't allowed -- you have to pick 
one or the other.
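For reference, here is a minimal sketch of the two systems side by side (the app name 'myAppName' is taken from your message; the rest is illustrative, not your exact file). Pattern-based rewriting uses routes_in/routes_out tuples, while parameter-based routing uses the routers dict -- a single routes.py should use one or the other, not both:

```python
# routes.py sketch -- a web2py routes file should use ONE of these, not both.

# Option A: pattern-based rewriting (routes_in / routes_out tuples).
routes_in = (
    # Map the site-root /robots.txt to the app's static file.
    ('/robots.txt', '/myAppName/static/robots.txt'),
)
routes_out = (
    # Reverse mapping for URLs generated by the app.
    ('/myAppName/static/robots.txt', '/robots.txt'),
)

# Option B: parameter-based routing (routers dict).
routers = dict(
    BASE=dict(
        default_application='myAppName',  # drops the app name from URLs
    ),
)
```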

Anyway, are you saying that http://www.yourdomain.com/robots.txt does show 
the robots.txt file in the browser? If so, then I believe it should work. 
However, I'm not sure Google re-checks it on every single crawl -- you may 
have to wait a bit for the change to take effect.
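One way to sanity-check the directives themselves (independent of web2py and of whatever Google has cached) is Python's standard urllib.robotparser. This is just a local check that the rules, as written, really do block everything; the domain below is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# The directives from the thread, as a standard robots.txt states them.
rules = """User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# With "Disallow: /" under "User-agent: *", every path is blocked
# for every crawler, including Googlebot.
print(parser.can_fetch('Googlebot', 'http://www.yourdomain.com/'))          # False
print(parser.can_fetch('Googlebot', 'http://www.yourdomain.com/any/page'))  # False
```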

Anthony

On Monday, May 6, 2013 1:41:19 PM UTC-4, Andriy wrote:
>
> Maybe I did it wrong because I now still see sessions created from 
> 66.249.73... which is Google-bot.
>
> I only changed url in the line:
> ('/robots.txt', '/examples/static/robots.txt'),
> to:
> ('/robots.txt', '/myAppName/static/robots.txt'),
>
> In robots.txt I exclude all bots from all links:
> User-agent: *
> Disallow: /
>
> I checked, and my_Site_Name/robots.txt opens in the browser successfully, 
> so I do not know why it does not work.
>
> I added these lines to routes.py to remove the app name from URLs (can this 
> be the reason?):
> routers = dict(
> BASE = dict(
>   default_application = 'myAppName'
> )
> )
>

-- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
