In the past we've made TypeDecorators that can dynamically resolve  
themselves to different dialect-specific types - the Interval type is  
an example of this.

So you'd want to implement load_dialect_impl() to do this.  Also, you  
want to make "impl = TypeEngine" - since you're doing the pickling  
already, it's going to want "impl" to be a non-TypeDecorator class  
(PickleType is a TypeDecorator itself).  You can subclass PickleType  
directly to get at its copy_value()/compare_values()/is_mutable()  
methods.

Take a look at Interval.load_dialect_impl() to see what it does - it  
basically just looks at the dialect, and if it's a mysql.MySQLDialect,  
returns MSMediumBlob, otherwise returns Binary.
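Putting those pieces together, here's a rough sketch of the dispatch. Note it's written against a modern SQLAlchemy, where Binary is spelled LargeBinary and MSMediumBlob lives at sqlalchemy.dialects.mysql.MEDIUMBLOB; under the 0.5-era API described above you'd subclass PickleType directly instead, as this is only an illustration of the pattern:

```python
import pickle
import zlib

import sqlalchemy as sa
from sqlalchemy.dialects import mysql


class CompressedPickle(sa.types.TypeDecorator):
    """Pickle + zlib-compress values, storing them in a dialect-appropriate
    blob type: MEDIUMBLOB on MySQL, plain LargeBinary elsewhere."""

    # a plain non-TypeDecorator impl, since we do the pickling ourselves
    impl = sa.types.LargeBinary
    cache_ok = True  # needed on SQLAlchemy 1.4+; harmlessly ignored before

    def load_dialect_impl(self, dialect):
        # pick the underlying blob type based on the dialect in use
        if dialect.name == "mysql":
            return dialect.type_descriptor(mysql.MEDIUMBLOB())
        return dialect.type_descriptor(sa.types.LargeBinary())

    def process_bind_param(self, value, dialect):
        if value is None:
            return None
        return zlib.compress(pickle.dumps(value, -1), 9)

    def process_result_value(self, value, dialect):
        if value is None:
            return None
        return pickle.loads(zlib.decompress(value))
```

The same Table definition then works against both engines; only the type emitted in DDL differs per dialect.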



On Nov 13, 2008, at 2:44 PM, Mike Orr wrote:

>
> I have a 113656-byte pickle I'm trying to put into a blob column in a
> way that will work for both SQLite and MySQL.  SQLite has no problem
> with it, but in MySQL I have to use the MSMediumBlob type because it
> exceeds 65536 bytes.  But I'd like the same table to work with both
> engines.  Is this possible?
>
> I'm using a CompressedPickle class that looks like this:
>
> class CompressedPickle(sa.types.TypeDecorator):
>    impl = sa.types.PickleType
>
>    def process_bind_param(self, value, dialect):
>        value = pickle.dumps(value, -1)
>        value = zlib.compress(value, 9)
>        return value
>
>    def process_result_value(self, value, dialect):
>        value = zlib.decompress(value)
>        value = pickle.loads(value)
>        return value
>
>    def copy(self):
>        return CompressedPickle(self.impl.length)
>
>
>
>
> -- 
> Mike Orr <[EMAIL PROTECTED]>
>


You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
