I fixed the problem by upgrading to SA 0.5.2 (minor tweaks from 0.4.4
-- about 10 min total to get the app upgraded).
Then I used lasizoillo's solution:
class Foo(object):
    @classmethod
    def getById(cls, id):
        return meta.Session.query(cls).get(id)

    @classmethod
    def findAll(cls):
        return meta.Session.query(cls).all()

    def save(self, auto_commit=False):
        meta.Session.add(self)
        if auto_commit:
            meta.Session.commit()
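A toy illustration of the @classmethod pattern above: the first argument receives the class itself (conventionally named cls), so the same helpers work for subclasses automatically. The in-memory session below is a stand-in for meta.Session, purely for illustration:

```python
# Minimal stand-in for meta.Session: stores objects keyed by (class, id).
class FakeSession:
    def __init__(self):
        self.store = {}

    def add(self, obj):
        self.store[(type(obj), obj.id)] = obj

    def get(self, cls, id):
        return self.store.get((cls, id))

session = FakeSession()

class Base:
    def __init__(self, id):
        self.id = id

    @classmethod
    def getById(cls, id):
        # cls is whichever class the call was made on (Base or a subclass)
        return session.get(cls, id)

    def save(self):
        session.add(self)

class Book(Base):
    pass

b = Book(1)
b.save()
assert Book.getById(1) is b  # the subclass is queried, not Base
```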
Thanks all!
On Jan 28, 3:40 pm, Mike Orr <[email protected]> wrote:
> On Wed, Jan 28, 2009 at 2:04 PM, John Brennan <[email protected]> wrote:
>
> > Unfortunately, due to the nature of the project I cannot simply
> > upgrade SA and Pylons (as much as I'd like to).
>
> > I see the reason for having some context outside the model for
> > transactional based approaches (like a banking system), but even then
> > I would move stuff out of a controller and into a compact library or
> > higher class model. This is one aspect that other frameworks get
> > right imho.
>
> > For example, 8 months after the "Book" model was originally written
> > I had to modify it to allow soft purging (that is, when a user
> > deletes a book, it just gets marked instead of deleted --
> > transparent to the user, though). This required a significant amount
> > of time, because I had to add ...filter_by(purge_fl==True)... to
> > every location where I'm doing a meta.Session.query(Book).filter...
> > Let's just say it was pretty painful!! That was over a year ago now,
> > and the Pylons doc has improved a lot since (it was nearly
> > non-existent).
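The pain described above is exactly what a single query choke point avoids: if every caller goes through one classmethod, adding a purge filter later touches one place instead of every query site. A minimal sketch, with a plain list standing in for the Session and all names hypothetical:

```python
# Sketch of centralizing the soft-purge filter. The list-backed "table"
# is a stand-in for meta.Session.query(Book); names are illustrative.
class Book:
    _table = []  # stand-in for the books table

    def __init__(self, title, purged=False):
        self.title = title
        self.purged = purged

    @classmethod
    def active(cls):
        """Single choke point: every caller goes through here, so the
        purge filter lives in exactly one place."""
        return [b for b in cls._table if not b.purged]

Book._table = [Book("A"), Book("B", purged=True), Book("C")]
titles = [b.title for b in Book.active()]
# purged book "B" is filtered out without any caller knowing about purge_fl
```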
>
> I can understand your desire to make the controllers
> database-independent. The easiest answer is high-level access
> methods, which you can implement as class methods in the ORM objects,
> model functions, or model classes distinct from the ORM classes.
> However, at some level you have to decide which parts of the database
> code to expose. For instance, if you hide the Session but expose the
> ORM objects, you'll have to decree an interface whereby the controller
> accesses only the object's attributes, on the assumption that a
> different object class might be used later. Or if you're using SQL-level
> results, do you expose the row objects or copy them into native Python
> dicts? Row and Result objects behave almost, but not quite,
> like ordinary Python data structures. For instance, you can't add or
> modify attributes in result rows, though in an ordinary Python dict
> you could put temporary data (calculated values) there for the View to
> use.
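A small stdlib-only sketch of the row-versus-dict point above (the namedtuple stands in for an immutable result row; the field names are illustrative):

```python
from collections import namedtuple

# An immutable result-row stand-in: attributes cannot be added or changed.
Row = namedtuple("Row", ["title", "price"])
row = Row("SQLAlchemy Guide", 40)

# Copying into a plain dict gives the View a mutable record that can
# carry extra computed values the row itself could never hold.
record = row._asdict()
record["price_with_tax"] = round(record["price"] * 1.08, 2)
```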
>
> I did make one application that needed an abstract model because we
> weren't sure whether Durus or SQLite or MySQL would be the most
> effective for deployment, and didn't want to box ourselves into a
> database that may turn out to be unusable.
>
> (The application has to run standalone on workstations as well as a
> central web site. This gets into what works on small-memory PCs, is
> cross-platform, easy for users to install, has no incompatible
> licensing restrictions, has sufficient speed with our dataset, and
> doesn't crash. Those were questions we couldn't definitively answer
> during the initial design stage. Previous testing had shown SQLite to
> be unusable because it hung on queries, so we used Durus in the legacy
> app. This year's testing showed SQLite to be the most robust and have
> the best Windows support, while MySQL was unsuitable because of its
> restrictions on blob sizes unless you use an esoteric column type
> which requires customization in SQLAlchemy.)
>
> So I made a generic backend base class with high-level methods for all
> the data-access operations, and two subclasses for Durus and
> SQLAlchemy. The db is read-only in production so I didn't have to
> deal with data-modification code or advanced Session use. The
> existing db was Durus so I already had the data classes. Because the
> data contains lots of columns and nested structures, in SQLAlchemy I
> just pickled the entire object into a BLOB, and added a few redundant
> SQL columns for the most common search fields.
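A hedged sketch of that pickle-into-BLOB layout using only the stdlib sqlite3 and pickle modules (the schema and the Material class are illustrative, not the original app's):

```python
import pickle
import sqlite3

class Material:
    def __init__(self, key, name, data):
        self.key, self.name, self.data = key, name, data

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE materials (key TEXT PRIMARY KEY, name TEXT, blob BLOB)"
)

m = Material("m1", "acetone", {"nested": ["structure"]})
# The redundant `name` column supports efficient SQL searches; the full
# object, nested structures and all, round-trips through the BLOB.
conn.execute(
    "INSERT INTO materials VALUES (?, ?, ?)",
    (m.key, m.name, pickle.dumps(m)),
)

row = conn.execute(
    "SELECT blob FROM materials WHERE name = ?", ("acetone",)
).fetchone()
restored = pickle.loads(row[0])
```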
>
> My config file has the following variables:
>
> # Location of XXX database.
> #backend = durus
> backend = sqlalchemy
> db_dir = /home/mso/XXX/trunk/production
> durus_db = %(db_dir)s/XXX.durus
> sqlalchemy_url = sqlite:///%(db_dir)s/XXX.sqlite
>
> Environment.py contains the following:
>
> # Initialize the database (``g.backend`` variable).
> try:
>     g.backend = backends.get_backend_from_config(config, False)
> except errors.UnknownBackendError:
>     lint.error("backend", "unknown backend implementation")
> if not config['standalone']:  # don't do this in the standalone app
>     log.debug("precaching search data...")
>     tc = timecounter.TimeCounter(log.debug)
>     g.backend.precache()
>     tc.checkpoint("finished precaching search data")
>
> I ended up using a separate variable to choose Durus or SQLAlchemy.
> The other approach would be to define a URL scheme "durus:", which the
> factory function would intercept and instantiate the Durus backend
> rather than the SQLAlchemy backend.
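A sketch of that alternative: a factory that intercepts a "durus:" scheme before falling through to SQLAlchemy-style URLs. The backend classes and the set of recognized schemes are assumptions for illustration:

```python
class UnknownBackendError(Exception):
    pass

# Placeholder backends; the real ones would implement the Backend interface.
class DurusBackend:
    def __init__(self, path):
        self.path = path

class SQLAlchemyBackend:
    def __init__(self, url):
        self.url = url

def get_backend(url):
    """Dispatch on the URL scheme instead of a separate config variable."""
    scheme = url.split(":", 1)[0]
    if scheme == "durus":
        # Intercept our custom scheme; the rest of the URL is a file path.
        return DurusBackend(url.split(":", 1)[1])
    if scheme in ("sqlite", "mysql", "postgresql"):
        # Anything SQLAlchemy understands is passed through untouched.
        return SQLAlchemyBackend(url)
    raise UnknownBackendError(scheme)

backend = get_backend("durus:/home/mso/XXX.durus")
```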
>
> My base class is structured like this:
>
> class Backend(object):
>     # No __init__ because it's backend-specific, but the first arg
>     # should be the database URL or path.
>
>     def get_material(self, key):
>         """Return one of the object types by primary key, or None if
>         not found."""
>
>     def iter_materials(self):
>         """Yield every record of a certain type. This is a generic
>         routine for miscellaneous usage."""
>
>     def search_name(self, name):
>         """Do an efficient name search and yield matching objects.
>         Note how high-level this is, to give the backend the maximum
>         freedom to implement this in its own way.
>         """
>
>     def precache(self):
>         """Cache data in memory to speed up searches. Again this gives
>         the backend complete freedom to do whatever it wishes, or even
>         nothing if no caching is needed.
>         """
>
>     def cleanup_request(self):
>         """Called by the base controller at the end of every request.
>         In SQLAlchemy it calls meta.Session.remove(). In Durus it does
>         nothing.
>         """
>
> I also needed a separate routine to create the database from various
> CSV files. This is called from "paster setup-app" or a utility
> program. The base class looks like this:
>
> class ImportBackend(object):
>     # No __init__ because it's backend-specific, but the first arg is
>     # the database URL or path.
>
>     def begin(self):
>         """Open the database and do whatever initialization is
>         necessary."""
>
>     def set_chemicals(self, chemicals):
>         """Put the first table in the database. ``chemicals`` is an
>         iterable of Chemical objects. It's up to the backend whether
>         to actually write these to the database now, or to save them
>         in memory until .finish() is called. The latter may be
>         necessary if these objects interact with other objects which
>         haven't been set yet.
>         """
>
>     def set_react_groups(self, react_groups):
>         """Put the second table into the database."""
>
>     def finish(self):
>         """Finish the import and close the database."""
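To show how such a protocol is driven, here is a trivial in-memory implementation (entirely hypothetical) together with the begin / set_* / finish call sequence the importer would follow:

```python
class DictImportBackend:
    """Toy ImportBackend that buffers everything until finish()."""

    def __init__(self, url):
        self.url = url
        self.tables = {}
        self.opened = False

    def begin(self):
        # Real backends would open the database file/connection here.
        self.opened = True

    def set_chemicals(self, chemicals):
        # Materialize the iterable; real backends might write rows now
        # or defer everything to finish().
        self.tables["chemicals"] = list(chemicals)

    def set_react_groups(self, react_groups):
        self.tables["react_groups"] = list(react_groups)

    def finish(self):
        # Real backends would flush buffered data and close the database.
        self.opened = False

# The driver (e.g. "paster setup-app") calls the methods in this order:
backend = DictImportBackend("memory:")
backend.begin()
backend.set_chemicals(iter(["acetone", "benzene"]))
backend.set_react_groups(iter(["acids"]))
backend.finish()
```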
>
> --
> Mike Orr <[email protected]>
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"pylons-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/pylons-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---