We want to support as many databases as we can. Right now we have the BLOB data
type defined in our database schemas; both sqlite3 and mysql work fine with the
Alembic migration tool on that data type, but it fails on PostgreSQL. How
should we handle that if we really want to use the BLOB data type?
It appears that, at least currently, Alembic only directly manages
tables (although I guess one could include SQL code in the
upgrade/downgrade functions to add/delete/change SPs, user-defined
types, functions, etc.).
Am I right in this? If so, do you have any plans or thoughts about
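The "raw SQL in upgrade()/downgrade()" idea above can be sketched as follows. This sketch uses the standard-library sqlite3 module so it runs standalone; the `users` table and `active_users` view are made up for illustration. In an actual Alembic migration you would issue the same statements with `op.execute()` inside `upgrade()`/`downgrade()` rather than calling a connection directly.

```python
# Sketch: a migration pair that manages a schema object (here a view)
# Alembic's table operations don't cover, by embedding raw SQL.
import sqlite3

CREATE_VIEW = "CREATE VIEW active_users AS SELECT id FROM users WHERE active = 1"
DROP_VIEW = "DROP VIEW active_users"

def upgrade(conn):
    # In a real Alembic migration: op.execute(CREATE_VIEW)
    conn.execute(CREATE_VIEW)

def downgrade(conn):
    # In a real Alembic migration: op.execute(DROP_VIEW)
    conn.execute(DROP_VIEW)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)")
conn.execute("INSERT INTO users (id, active) VALUES (1, 1), (2, 0)")
upgrade(conn)
rows = conn.execute("SELECT id FROM active_users").fetchall()  # [(1,)]
downgrade(conn)
```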
Take a look at LargeBinary:
http://docs.sqlalchemy.org/en/rel_0_7/core/types.html#sqlalchemy.types.LargeBinary
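A minimal sketch of why the generic LargeBinary type helps here (table and column names are made up; it is run against in-memory SQLite): SQLAlchemy compiles LargeBinary to whatever the backend's binary type is, BLOB on SQLite and MySQL but BYTEA on PostgreSQL, so the same schema definition works on all three where a hard-coded BLOB does not.

```python
from sqlalchemy import Column, Integer, LargeBinary, MetaData, Table, create_engine
from sqlalchemy.dialects import mysql, postgresql

metadata = MetaData()
files = Table(
    "files", metadata,
    Column("id", Integer, primary_key=True),
    Column("payload", LargeBinary),  # generic binary type, rendered per backend
)

# What LargeBinary compiles to on each dialect:
pg_type = LargeBinary().compile(dialect=postgresql.dialect())  # "BYTEA"
mysql_type = LargeBinary().compile(dialect=mysql.dialect())    # "BLOB"

# Round-trip some bytes through an in-memory SQLite database.
engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(files.insert().values(id=1, payload=b"\x00binary\xff"))
    stored = conn.execute(files.select()).fetchone().payload
```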
On Oct 30, 2012, at 2:27 PM, junepeach wrote:
On Oct 30, 2012, at 2:39 PM, Don Dwiggins wrote:
The first beta release of the SQLAlchemy 0.8 series, 0.8.0b1, is released for
developer evaluation.
0.8 represents the latest series of refinements to the SQLAlchemy Core and ORM
libraries and features over 100 individual changes, consisting of major new
features, bug fixes, and other
Hi All.
I have a select query that uses subqueryload and looks like this:
completed_imports = self.ra_import_file.visible() \
.filter(ImportFile.lock_date == None) \
.filter(ImportFile.process_date != None) \
.order_by(ImportFile.process_date.desc())
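For context, here is a runnable sketch of that query shape, showing where subqueryload() would attach. The ImportRow child table and the `rows` relationship are made up for illustration, and `self.ra_import_file.visible()` is replaced by a plain `session.query(ImportFile)`.

```python
from datetime import datetime
from sqlalchemy import Column, DateTime, ForeignKey, Integer, create_engine
from sqlalchemy.orm import (declarative_base, relationship, sessionmaker,
                            subqueryload)

Base = declarative_base()

class ImportFile(Base):
    __tablename__ = "import_file"
    id = Column(Integer, primary_key=True)
    lock_date = Column(DateTime)
    process_date = Column(DateTime)
    rows = relationship("ImportRow", back_populates="file")  # hypothetical child

class ImportRow(Base):
    __tablename__ = "import_row"
    id = Column(Integer, primary_key=True)
    file_id = Column(Integer, ForeignKey("import_file.id"))
    file = relationship("ImportFile", back_populates="rows")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# One completed import (process_date set, not locked) with one child row.
f = ImportFile(process_date=datetime(2012, 10, 30))
f.rows.append(ImportRow())
session.add(f)
session.commit()

completed_imports = (
    session.query(ImportFile)
    .options(subqueryload(ImportFile.rows))  # children fetched in one extra query
    .filter(ImportFile.lock_date == None)
    .filter(ImportFile.process_date != None)
    .order_by(ImportFile.process_date.desc())
    .all()
)
```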