On Apr 27, 6:07 pm, howesc <[email protected]> wrote:
> another trick that web2py makes real easy, is make sure that each page
> has a unique URL by using request.args.  your url might look like:
>
> http://www.foo.com/default/index/43576/Image-title-here/another-thing
>
> where 43576 in the above URL is the ID (like in massimo's example),
> but the other parts are never used by the app,

.... see http://web2py.com/book/default/section/4/2?search=URL

request.env.http_host == 'www.foo.com'   # host only, no scheme
request.controller == 'default'          # i.e. default.py
request.function == 'index'              # i.e. index()
request.args == ['43576', 'Image-title-here', 'another-thing']
request.vars == {}
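(Not part of the original thread: a minimal plain-Python sketch of how a slug like 'Image-title-here' might be generated. web2py ships an IS_SLUG validator for this purpose; the helper below is only an illustration.)

```python
import re

def slugify(text):
    """Collapse runs of non-alphanumeric characters into single
    hyphens, trim stray hyphens, and lowercase the result."""
    return re.sub(r'[^A-Za-z0-9]+', '-', text).strip('-').lower()

# An SEO-friendly arg list for URL(...) could then be built as, e.g.:
#   URL(r=request, f='index', args=[paper.id, slugify(paper.title)])
print(slugify('Image Title Here'))  # -> image-title-here
```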

It is up to your application, specifically to the controller being
called, to process request.args and request.vars as appropriate.
It is certainly not accurate to say "the other parts are never used by
the app". They may not be used by YOUR app, but that is your app's
specific way of handling them (i.e. ignoring them).
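To illustrate that point with a sketch (plain Python, not web2py-specific; the function name is made up for illustration): a controller typically pulls the record id out of the first arg and simply ignores the slug parts, but that is an application choice, not a property of the framework.

```python
def parse_record_id(args):
    """Return the integer record id from the first URL arg,
    or None if it is missing or not numeric. The remaining
    args (e.g. a title slug) are deliberately ignored here;
    another app could just as well validate or use them."""
    if not args:
        return None
    try:
        return int(args[0])
    except ValueError:
        return None

print(parse_record_id(['43576', 'Image-title-here', 'another-thing']))  # -> 43576
```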

Just want to be sure others are not confused / misled.

Regards,
- Yarko

>  but to google they look
> like part of the URL that it indexes, and it will request each page
> separately, thereby getting the unique page keywords.
>
> i'm no expert, so this might be a bad idea, but what about a hidden
> div on the page with comment content?  i don't know if the search
> engine parses the css to know that the content is not visible to the
> user.
>
> good luck,
>
> cfh
>
> On Apr 26, 6:33 pm, mdipierro <[email protected]> wrote:
>
> > say you have:
>
> > db.define_table('paper',Field('image','upload'))
> > db.define_table('tag',Field('paper',db.paper),Field('keyword'))
>
> > then you will have an action like:
>
> > def index():
> >     paper = db.paper[request.args(0)]
> >     response.meta.keywords = ','.join(
> >         tag.keyword for tag in db(db.tag.paper == paper.id).select())
> >     return dict(img=IMG(_src=URL(r=request, f='download', args=paper.image)))
>
> > On Apr 26, 7:23 pm, Al <[email protected]> wrote:
>
> > > Thank you for all the comments...
> > > The web site is just a few hundred SCANNED images of very old
> > > medical papers which can be searched by two database fields - Title
> > > and Keywords, so essentially it is just one web page with not much to
> > > be indexed on. There is also 'comments' people can add to each
> > > article, but these comments are also stored in the DB. So I must find
> > > a way to persist the data in these 3 searchable fields so that they
> > > can be crawled by the search engine, I am not sure if
> > > "response.meta.keyword=...." can do such job. The keyword field will
> > > be continuously updated - not static - so I cannot put all the
> > > keywords into the meta descriptions beforehand.
>
> > > Al
>
