When Ryan mentioned "derived queries" he meant using new queries that
pick up where the last query left off (also called paging, as you
noted). Joe Gregorio wrote an article on this which might also help:

http://code.google.com/appengine/articles/paging.html

This article describes a few different approaches you can use, one of
which is using __key__ comparisons.
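In case it helps, here is a minimal sketch of the key-paging idea. The names (fetch_page, iterate_all) and the in-memory list standing in for the datastore are made up for illustration; on App Engine the per-page query would look something like Record.all().filter('__key__ >', last_key).order('__key__').fetch(page_size).

```python
# Sketch of "__key__ paging": instead of fetching everything in one
# query (and hitting the 1000-result limit or a timeout), each page's
# query starts strictly after the last key the previous page returned.
# A sorted in-memory list of (key, value) pairs stands in for the
# datastore here; keys are assumed to sort in ascending order.

PAGE_SIZE = 3

# Fake "datastore": entities keyed by a monotonically increasing id.
ENTITIES = [(i, 'record-%d' % i) for i in range(1, 11)]

def fetch_page(last_key, page_size):
    """Return up to page_size entities with key > last_key, in key order.

    This plays the role of
    Record.all().filter('__key__ >', last_key).order('__key__').fetch(page_size)
    """
    if last_key is None:
        remaining = ENTITIES
    else:
        remaining = [e for e in ENTITIES if e[0] > last_key]
    return remaining[:page_size]

def iterate_all(page_size=PAGE_SIZE):
    """Walk the whole dataset one small query at a time."""
    last_key = None
    results = []
    while True:
        page = fetch_page(last_key, page_size)
        if not page:
            break
        results.extend(page)
        last_key = page[-1][0]  # resume after the last key we saw
    return results
```

Because each request only touches one small page, no single query runs long enough to time out; you can also carry last_key between requests (e.g. as a query parameter) instead of looping in one handler.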

Happy coding,

Jeff

On Apr 2, 9:09 am, 秦锋 <[email protected]> wrote:
> Hi all:
> I tried to read all my records and collect their tags, but I have
> more than 1000 of them (>4000 actually) and I know fetch() has a
> 1000-result limit, so I tried iterating the query instead, but that
> failed too.
> Following is my code:
>
> class UpdateTag(webapp.RequestHandler):
>   def get(self):
>     if 0 == len(self.request.get('begin')):
>       begin = datetime.datetime.strptime("197001010000", "%Y%m%d%H%M%S")
>     else:
>       begin = datetime.datetime.strptime(self.request.get('begin'), "%Y%m%d%H%M%S")
>     if 0 == len(self.request.get('end')):
>       end = datetime.datetime.now()
>     else:
>       end = datetime.datetime.strptime(self.request.get('end'), "%Y%m%d%H%M%S")
>     query = statsdb.Record.all().filter("inputtime >=", begin).filter("inputtime <=", end)
>     self.response.headers['Content-Type'] = 'text/plain'
>
>     tagsName = []
>     i = 0
>     for record in query:
>       tagsName = tagsName + list(set(record.tags)-set(tagsName))
>       i+=1
>     if 0 == len(tagsName):
>       self.response.out.write("No records counted.\n")
>     else:
>       self.response.out.write(str(i) + " records counted.\n")
>
>     tags = {}
>     for tagName in tagsName:
>       tag = statsdb.Tags(key_name = tagName, name = tagName)
>       bookmark = None
>       pagesize = 100
>       while True:
>         records, bookmark = statsdb.GetRecords([tagName], bookmark, pagesize)
>         if bookmark is None:
>           tag.refCount += len(records)
>           break
>         else:
>           tag.refCount += pagesize
>       tags[tagName] = tag
>     batch = 100
>     if len(self.request.get('batch')) > 0:
>       batch = int(self.request.get('batch'))
>     statsdb.DBPut(tags.values(), batch)
>     self.response.out.write(str(len(tags)) + " tags counted.\n")
>
> And this is error:
> Traceback (most recent call last):
>   File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__
>     handler.get(*groups)
>   File "/base/data/home/apps/cndata4u/1.332506907084827982/dbmaint.py", line 26, in get
>     for record in query:
>   File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1468, in next
>     return self.__model_class.from_entity(self.__iterator.next())
>   File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1549, in next
>     self.__buffer = self._Next(self._BUFFER_SIZE)
>   File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1538, in _Next
>     raise _ToDatastoreError(err)
>   File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1965, in _ToDatastoreError
>     raise errors[err.application_error](err.error_detail)
> Timeout: datastore timeout: operation took too long.
>
> I know there are ways to page through records, but they only apply
> when the query has no other inequality filter, since the __key__
> comparison itself counts as one. In this case I already have an
> inequality on inputtime, so how do I proceed?
> I also don't quite understand the method described here:
> http://groups.google.com/group/google-appengine/browse_thread/thread/...
> and what is meant by "derived queries"?
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
