I have a couple of patches to 1.25 that work around this problem.

The real problem is that the current implementation writes the entire
datastore to disk after every put, delete, or transaction.  One
tradeoff you can make is not to guarantee transactions by persisting
to disk on every operation: do the disk load once, at startup, and the
disk save on exit (^c trap).  This works surprisingly well for
development, and the datastore can be as big as your RAM allows ;-)

Here are the patches:

*** ../google_appengine_1.25/google/appengine/api/datastore_file_stub.py        2009-09-03 12:17:03.000000000 -0600
--- lib/internal/google_appengine/google/appengine/api/datastore_file_stub.py   2009-10-02 15:12:37.000000000 -0600
***************
*** 537,543 ****
        self.__entities_lock.release()

      if not put_request.has_transaction():
!       self.__WriteDatastore()

      put_response.key_list().extend([c.key() for c in clones])

--- 537,544 ----
        self.__entities_lock.release()

      if not put_request.has_transaction():
!         #self.__WriteDatastore()
!         pass

      put_response.key_list().extend([c.key() for c in clones])

***************
*** 578,584 ****
            pass

          if not delete_request.has_transaction():
!           self.__WriteDatastore()
      finally:
        self.__entities_lock.release()

--- 579,586 ----
            pass

          if not delete_request.has_transaction():
!             #self.__WriteDatastore()
!             pass
      finally:
        self.__entities_lock.release()

***************
*** 814,820 ****
        self.__query_history[clone] += 1
      else:
        self.__query_history[clone] = 1
!     self.__WriteHistory()

      cursor = _Cursor(results, query.keys_only())
      self.__queries[cursor.cursor] = cursor
--- 816,822 ----
        self.__query_history[clone] += 1
      else:
        self.__query_history[clone] = 1
!     #self.__WriteHistory()

      cursor = _Cursor(results, query.keys_only())
      self.__queries[cursor.cursor] = cursor
***************
*** 872,878 ****

      self.__tx_snapshot = {}
      try:
!       self.__WriteDatastore()
      finally:
        self.__tx_lock.release()

--- 874,881 ----

      self.__tx_snapshot = {}
      try:
!         #self.__WriteDatastore()
!         pass
      finally:
        self.__tx_lock.release()


*** ../google_appengine_1.25/google/appengine/tools/dev_appserver_main.py   2009-09-03 12:17:08.000000000 -0600
--- lib/internal/google_appengine/google/appengine/tools/dev_appserver_main.py   2009-10-05 11:22:29.000000000 -0600
***************
*** 487,493 ****
--- 487,497 ----
      logging.error('Error encountered:\n%s\nNow terminating.', info_string)
        return 1
    finally:
+     datastore = dev_appserver.apiproxy_stub_map.apiproxy.GetStub('datastore_v3')
+     logging.info('Saving datastore')
+     datastore.Write()
      http_server.server_close()
+

    return 0
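For reference, the shutdown hook in the second patch can be pictured on its own. The apiproxy and stub classes below are minimal stand-ins (so the sketch runs without the SDK); the finally: block mirrors the patched dev_appserver_main.py.

```python
import logging

class FakeDatastoreStub:
    """Stand-in for the datastore_v3 stub; Write() is the one-shot disk save."""
    def __init__(self):
        self.saved = False
    def Write(self):
        self.saved = True

class FakeAPIProxy:
    """Stand-in for apiproxy_stub_map.apiproxy with a GetStub() lookup."""
    def __init__(self):
        self._stubs = {'datastore_v3': FakeDatastoreStub()}
    def GetStub(self, service):
        return self._stubs[service]

apiproxy = FakeAPIProxy()

def run_server():
    try:
        pass  # the dev appserver's serve_forever() loop would run here
    finally:
        # Mirrors the patched finally: block -- save the datastore once,
        # no matter how the server loop exits.
        datastore = apiproxy.GetStub('datastore_v3')
        logging.info('Saving datastore')
        datastore.Write()

run_server()
```

Putting the save in finally: means it also runs when the server loop raises, not just on a clean shutdown.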

On Oct 12, 11:40 am, Rodrigo Moraes <[email protected]> wrote:
> On Sun, Oct 11, 2009 at 2:06 PM, (jpgerek) wrote:
> > Does anyone have a solution or workaround apart from working with a
> > reduced data set ?
>
> There's nothing to do with the current implementation, which is
> indeed very simple and can't handle large data sets (everything is
> loaded into memory and there's no indexing at all, afaik). The
> alternative
> would be to set an environment with a real database. I'm afraid
> however that the current options are at a very early stage of
> development. One recently announced was TyphoonAE, which uses mongoDB
> for the database backend:
>
> http://code.google.com/p/typhoonae/
>
> I never tested it myself, though.
>
> -- rodrigo
You received this message because you are subscribed to the Google Groups "Google App Engine" group.