I just tried a revised version of the cache consumer method as follows:

    def Record_entries_count(self):
        # import pdb; pdb.set_trace()
        db = cherrypy.request.db_session
        query_subset = db.query(MyClass).merge_result(self.search_result_cache)
        result = query_subset.count()
        return result

However, this gives me the following error:

AttributeError: 'list_iterator' object has no attribute 'count'


Why does `merge_result` return a list_iterator instead of the Query object 
itself? How can I call `.count()` in this case? Thanks.
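
For reference, the only workaround I can think of is to exhaust the iterator 
and count the items myself, roughly like this (just a sketch; I am not sure 
this is the intended use of `merge_result`):

    def Record_entries_count(self):
        db = cherrypy.request.db_session
        # merge_result() hands back an iterator of merged instances rather
        # than a Query, so count by consuming the iterator
        merged = db.query(MyClass).merge_result(self.search_result_cache)
        return sum(1 for _ in merged)

But that pulls every cached row into the new session just to get a number, 
which does not feel right.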



On Thursday, July 13, 2017 at 4:38:19 PM UTC-7, Jinghui Niu wrote:
>
> I have a web application served by cherrypy, which is multi-threaded. 
>
> I'm trying to cache a set of rows queried from the database using the 
> `self.search_result_cache` variable on the GUI_Server object. On my 
> front-end, the first request calls `list_entries`, which prepares the rows 
> and stores them on `self.search_result_cache`. After that, on the user's 
> mouse click the front-end initiates another request calling 
> `Record_entries_count`, which is expected to revive the Query from 
> `self.search_result_cache` and continue with some data refinement, e.g. 
> summing up the count in this case.
>
> class GUI_Server:
>
>
>     def __init__(self):
>         self.search_result_cache = None
>
>
>     @cherrypy.expose
>     def list_entries(self, **criteriaDICT):
>         # always store the result to self cache
>         
>         ...
>
>
>         db = cherrypy.request.db_session
>
>
>         filter_func = getattr(self, 'filterCriteria_' + classmodel_obj.__name__)
>         queryOBJ = filter_func(criteriaDICT, queryOBJ)
>         self.search_result_cache = queryOBJ
>         db.expunge_all()
>
>     ....
>
>     def Record_entries_count(self):
>         db = cherrypy.request.db_session
>         query_subset = self.search_result_cache
>         result = query_subset.count()
>         return result
>
>
> But this doesn't work. It always gives me an error:
>
> sqlite3.ProgrammingError: SQLite objects created in a thread can only be used 
> in that same thread.The object was created in thread id 139937752020736 and 
> this is thread id 139938238535424 
>
> I am already using `scoped_session` for each request session. I don't 
> understand why I got this error.
>
>
> What is the best practice to cache queried results across different request 
> sessions like this? Thanks a lot.
>
>
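
To make the original question more concrete: the fallback I am considering, 
if caching the Query object itself across requests is not workable, is to 
cache only the primary keys and re-query them in whatever session the current 
request owns. A rough sketch (the `cached_pks` attribute and the `id` 
primary-key column are just placeholders for illustration):

    @cherrypy.expose
    def list_entries(self, **criteriaDICT):
        db = cherrypy.request.db_session
        queryOBJ = db.query(MyClass)  # plus the filterCriteria_* filtering, as before
        # cache plain primary-key values, which are safe to share across
        # threads, instead of the Query/instances themselves
        self.cached_pks = [row.id for row in queryOBJ]

    def Record_entries_count(self):
        db = cherrypy.request.db_session
        # re-query in the current request's own session using the cached keys
        return db.query(MyClass).filter(MyClass.id.in_(self.cached_pks)).count()

Is that the recommended pattern, or is there a better way?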
