Re: [sqlalchemy] contains_eager, outerjoin, subquery, alias, and loading nested relationships of the same type

2018-08-08 Thread Nick Repole
Hm, I'll spend some time playing around with enable_relationship_loading 
and see if I can produce what I'm looking for. Thanks again for the help!

On Tuesday, August 7, 2018 at 12:47:52 PM UTC-5, Mike Bayer wrote:
>
> On Mon, Aug 6, 2018 at 8:01 PM, Nick Repole wrote: 
> > Appreciate the response, that all makes sense to me. In hindsight, I'm 
> > wondering how I ever expected there to be two Album objects with the 
> > same identifier and different relationship collections... 
> > 
> > On a conceptual level, I wonder how feasible it'd be to have objects 
> > expunged immediately as they're loaded. That way the parent Album with 
> > album_id==1 could load, have its tracks load, then when the child album 
> > is loaded it wouldn't be the same Album object despite being album_id==1. 
> > 
> > My use case is purely read-only, so I'm not particularly concerned with 
> > any changes being tracked/persisted, so such a scenario would accomplish 
> > what I'd need. But I realize my use case is pretty niche. 
>
> well, it would mean the objects aren't put into the session at all 
> as they are loaded, because you literally want the eager loading 
> within the same load to create a new Album object. That wouldn't 
> work, because the loading mechanism continuously sees the same 
> primary key at the top for that Album and pulls it from the identity 
> map per row in order to implement the eager loading in the first 
> place. 
>
> if you want to rely upon lazy loading, that is possible. The 
> session supports lazy loading on non-session-bound objects: 
>
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session.enable_relationship_loading
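Sticking with that suggestion, here is a hedged, runnable sketch of the detached-lazy-loading idea. The mapping is an assumption distilled from this thread's Album/Track schema, not the poster's actual models:

```python
# Hedged sketch (schema assumed, not from the original post): a transient
# Album that was never loaded by the session can still lazy-load its
# "tracks" collection once it is associated with the session via
# Session.enable_relationship_loading().
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Album(Base):
    __tablename__ = "Album"
    album_id = Column("AlbumId", Integer, primary_key=True)
    tracks = relationship("Track")

class Track(Base):
    __tablename__ = "Track"
    track_id = Column("TrackId", Integer, primary_key=True)
    album_id = Column("AlbumId", ForeignKey("Album.AlbumId"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Album(album_id=1))
session.add_all([Track(track_id=1, album_id=1), Track(track_id=2, album_id=1)])
session.commit()

# A second, transient Album with the same primary key -- it is NOT the
# identity-mapped Album#1, but it can still lazy-load relationships.
shadow = Album(album_id=1)
session.enable_relationship_loading(shadow)
print(len(shadow.tracks))
```

Because `shadow` is never added to the session, it never enters the identity map, so it stays a distinct object from the "real" Album#1 while still being able to fire lazy loads.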
> > 
> > On Monday, August 6, 2018 at 9:58:33 AM UTC-5, Mike Bayer wrote: 
> >> 
> >> On Sat, Aug 4, 2018 at 11:27 PM, Nick Repole wrote: 
> >> > Hi, 
> >> > 
> >> > I'm attempting to load some filtered relationships, and am running 
> >> > into issues when trying to independently load the same relationship 
> >> > multiple times. 
> >> > 
> >> > As an example, I'd like to load Album.tracks and 
> >> > Album.tracks.album.tracks in different ways. In the first tracks 
> >> > relationship, I'm trying to include all tracks with track_id >= 4, 
> >> > and in the second relationship every track_id < 4. 
> >> > 
> >> > Is this possible? And if so, what load options or technique should I 
> >> > be using? The query generated below contains all the information I'm 
> >> > trying to get; it's a matter of loading that information into my 
> >> > models that's tripping me up. 
> >> > 
> >> > Complete/runnable example can be found attached or below: 
> >> 
> >> great test case.  What you're doing is not possible because you are 
> >> looking at the "album.tracks" collection in two different contexts, 
> >> and expecting them to have different contents, but this is the same 
> >> Album object and the same "tracks" collection: 
> >> 
> >> (Pdb) results[0] 
> >> <__main__.Album object at 0x7f2e1eea3898> 
> >> (Pdb) results[0].tracks[0].album 
> >> <__main__.Album object at 0x7f2e1eea3898> 
> >> (Pdb) results[0].tracks is results[0].tracks[5].album.tracks 
> >> True 
> >> 
> >> this is how the ORM identity map works, for any class / primary key, 
> >> there's only one of them. 
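That identity-map guarantee can be sketched in a few lines of plain Python (illustrative only; this is not SQLAlchemy's actual implementation, and the class names are invented):

```python
# A pure-Python sketch of "one object per class / primary key": the map
# returns the already-created instance for a key instead of making a new one.
class IdentityMap:
    def __init__(self):
        self._instances = {}

    def get_or_create(self, cls, pk, factory):
        # Return the mapped instance for (cls, pk) if present;
        # otherwise create it once and remember it.
        key = (cls, pk)
        if key not in self._instances:
            self._instances[key] = factory()
        return self._instances[key]

class Album:
    def __init__(self, album_id):
        self.album_id = album_id
        self.tracks = []

imap = IdentityMap()
parent = imap.get_or_create(Album, 1, lambda: Album(1))
child = imap.get_or_create(Album, 1, lambda: Album(1))
assert parent is child                 # same primary key -> same object
assert parent.tracks is child.tracks   # hence the same "tracks" collection
```

This is exactly why the parent Album and the nested album in the eager load cannot hold different `tracks` contents: they are one object.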
> >> 
> >> now, since this is SQLAlchemy, there are ways to do this, but it means 
> >> you'd have two different Album objects with the same primary key in 
> >> play. If you were to change both of them in a session and try to 
> >> flush, you will get conflicts and/or errors. 
> >> 
> >> Here's the quickest way; note there's now a parent_album backref on 
> >> Track for the real relationship: 
> >> 
> >> class Album(Base): 
> >>     __tablename__ = 'Album' 
> >> 
> >>     album_id = Column("AlbumId", Integer, primary_key=True) 
> >>     title = Column("Title", Unicode(160), nullable=False) 
> >>     tracks = relationship("Track", backref="parent_album") 
> >> 
> >> 
> >> class SubAlbum(Album): 
> >>     pass 
> >> 
> >> 
> >> class Track(Base): 
> >>     __tablename__ = 'Track' 
> >> 
> >>     track_id = Column("TrackId", Integer, primary_key=True) 
> >>     name = Column("Name", Unicode(200), nullable=False) 
> >>     album_id = Column("AlbumId", ForeignKey('Album.AlbumId'), index=True) 
> >>     album = relationship('SubAlbum') 
> >> 
> >> 
> >> query = session.query(Album).filter(Album.album_id == 1) 
> >> track_1_alias = aliased(Track, name="Track1") 
> >> album_2_alias = aliased(SubAlbum, name="Album2") 
> >> track_2_alias = aliased(Track, name="Track2") 
> >> 
> >> then your assertion passes. 
> >> 
> >> 
> >> as you can see it is not possible for album<->track to be a 
> >> bidirectional relationship and refer to two different Album objects at 
> >> the same time.  you have to choose one or the other. 
Re: [sqlalchemy] BinaryExpression type inference for external dialects

2018-08-08 Thread Wis Kojohnjaratkul
Thanks for your clarifying response. I wasn't quite sure before whether I 
was using the API correctly or not. And yes, your assumption is correct: we 
do indeed intend to instruct users to use only the datatypes imported from 
our dialect.

On Wednesday, August 8, 2018 at 2:15:43 PM UTC-7, Mike Bayer wrote:
>
> On Wed, Aug 8, 2018 at 4:31 PM, Wis Kojohnjaratkul wrote: 
> > Hi, 
> > 
> > I'm working on the Teradata external dialect and I want to be able to 
> > infer, at the SQLAlchemy level, the expression type for binary 
> > operations (specifically arithmetic) that involve types supported by 
> > the dialect. The goal is to get them to match the types of binary 
> > expressions in Teradata. 
> > 
> > For example, if we had the following Table where SMALLINT and INTEGER 
> > are in sqlalchemy_teradata: 
> > 
> > table = Table('table_test', metadata, 
> >               Column('c1', SMALLINT), 
> >               Column('c2', INTEGER)) 
> > 
> > If we get the type from a BinaryExpression: 
> > 
> > print(type((table.c.c1 + table.c.c2).type)) 
> > 
> > We get: 
> > 
> >  
> > 
> > But if we were to reflect a table with the same operator and operands: 
> > 
> > engine.execute( 
> >     'CREATE Table t1 as (' 
> >     'SELECT c1 + c2 as sum_c1_c2 from table_test)') 
> > 
> > t = Table('t1', metadata, autoload=True) 
> > 
> > print(type(t.columns['sum_c1_c2'].type)) 
> > 
> > We would instead get: 
> > 
> >  
> > 
> > Which is what we expect from Teradata. So we would like: 
> > 
> > print(type((table.c.c1 + table.c.c2).type)) 
> > 
> > to output: 
> > 
> >  
> > 
> > It looks like providing your own Comparator (comparator_factory) class 
> > is the way to designate the resulting type of a BinaryExpression. I am 
> > able to override this class in the relevant data types and then define 
> > my own specific type inference behavior in the Comparator's 
> > _adapt_expression() method. I'm playing around with something basic 
> > like the following for a mixin for all the types implemented by the 
> > dialect (inspired by the Comparator in _LookupExpressionAdapter): 
> > 
> > class _TDComparable: 
> > 
> >     @property 
> >     def _expression_adaptations(self): 
> >         raise NotImplementedError() 
> > 
> >     class Comparator(TypeEngine.Comparator): 
> >         _blank_dict = util.immutabledict() 
> > 
> >         def _adapt_expression(self, op, other_comparator): 
> >             othertype = other_comparator.type.__class__ 
> >             lookup = self.type._expression_adaptations.get( 
> >                 op, self._blank_dict).get( 
> >                 othertype, self.type) 
> >             return (op, lookup) 
> > 
> >     comparator_factory = Comparator 
> > 
> > and then in a derived type like VARCHAR I can do something like: 
> > 
> > class VARCHAR(_TDComparable, sqltypes.VARCHAR): 
> > 
> >     def __init__(self, length=None, charset=None, **kwargs): 
> >         super(VARCHAR, self).__init__(length=length, **kwargs) 
> >         self.charset = charset 
> > 
> >     @property 
> >     def _expression_adaptations(self): 
> >         return { 
> >             operators.concat_op: { 
> >                 CLOB: CLOB 
> >             } 
> >         } 
> > 
> > As far as I can tell, this is one of the more straightforward ways to 
> > accomplish what I want. 
> > 
> > I've noticed though that most of the keys and values in 
> > _expression_adaptations for the types defined in sqltypes seem to be 
> > generic types (probably because most of the underlying logic for type 
> > inference revolves around the more general _type_affinity?). My 
> > approach here is slightly different, and more specific. So maybe I 
> > should define my own private method/property for each of my 
> > dialect-implemented types instead of using _expression_adaptations? Is 
> > what I'm doing within the full support of SQLAlchemy? Is there a 
> > better way to accomplish what I want? 
> > 
> > Either way, any insight into this and/or suggestions of what I could 
> > do would be greatly appreciated. 
>
> It seems like you're using the API correctly. The 
> _expression_adaptations name is specific to a mixin called 
> _LookupExpressionAdapter, which it seems like you've re-implemented 
> here, so that is fine. These are the hooks that specifically 
> control the output type of a binary expression. I would assume your 
> users will be instructed to make explicit use of these datatypes. 
>
>
>
> > 
> > Thanks, 
> > Wis 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> > Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> > an email 

Re: [sqlalchemy] BinaryExpression type inference for external dialects

2018-08-08 Thread Mike Bayer
On Wed, Aug 8, 2018 at 4:31 PM, Wis Kojohnjaratkul  wrote:
> Hi,
>
> I'm working on the Teradata external dialect and I want to be able to infer,
> at the SQLAlchemy level, the expression type for binary operations
> (specifically arithmetic) that involve types supported by the dialect. The
> goal is to get them to match the types of binary expressions in Teradata.
>
> For example, if we had the following Table where SMALLINT and INTEGER are in
> sqlalchemy_teradata:
>
> table = Table('table_test', metadata,
>               Column('c1', SMALLINT),
>               Column('c2', INTEGER))
>
> If we get the type from a BinaryExpression:
>
> print(type((table.c.c1 + table.c.c2).type))
>
> We get:
>
> 
>
> But if we were to reflect a table with the same operator and operands:
>
> engine.execute(
>     'CREATE Table t1 as ('
>     'SELECT c1 + c2 as sum_c1_c2 from table_test)')
>
> t = Table('t1', metadata, autoload=True)
>
> print(type(t.columns['sum_c1_c2'].type))
>
> We would instead get:
>
> 
>
> Which is what we expect from Teradata. So we would like:
>
> print(type((table.c.c1 + table.c.c2).type))
>
> to output:
>
> 
>
> It looks like providing your own Comparator (comparator_factory) class is
> the way to designate the resulting type of a BinaryExpression. I am able to
> override this class in the relevant data types and then define my own
> specific type inference behavior in the Comparator's _adapt_expression()
> method. I'm playing around with something basic like the following for a
> mixin for all the types implemented by the dialect (inspired by the
> Comparator in _LookupExpressionAdapter):
>
> class _TDComparable:
>
>     @property
>     def _expression_adaptations(self):
>         raise NotImplementedError()
>
>     class Comparator(TypeEngine.Comparator):
>         _blank_dict = util.immutabledict()
>
>         def _adapt_expression(self, op, other_comparator):
>             othertype = other_comparator.type.__class__
>             lookup = self.type._expression_adaptations.get(
>                 op, self._blank_dict).get(
>                 othertype, self.type)
>             return (op, lookup)
>
>     comparator_factory = Comparator
>
> and then in a derived type like VARCHAR I can do something like:
>
> class VARCHAR(_TDComparable, sqltypes.VARCHAR):
>
>     def __init__(self, length=None, charset=None, **kwargs):
>         super(VARCHAR, self).__init__(length=length, **kwargs)
>         self.charset = charset
>
>     @property
>     def _expression_adaptations(self):
>         return {
>             operators.concat_op: {
>                 CLOB: CLOB
>             }
>         }
>
> As far as I can tell, this is one of the more straightforward ways to
> accomplish what I want.
>
> I've noticed though that most of the keys and values in
> _expression_adaptations for the types defined in sqltypes seem to be generic
> types (probably because most of the underlying logic for type inference
> revolves around the more general _type_affinity?). My approach here is
> slightly different, and more specific. So maybe I should define my own
> private method/property for each of my dialect-implemented types instead
> of using _expression_adaptations? Is what I'm doing within the full
> support of SQLAlchemy? Is there a better way to accomplish what I want?
>
> Either way, any insight into this and/or suggestions of what I could do
> would be greatly appreciated.

It seems like you're using the API correctly. The
_expression_adaptations name is specific to a mixin called
_LookupExpressionAdapter, which it seems like you've re-implemented
here, so that is fine. These are the hooks that specifically
control the output type of a binary expression. I would assume your
users will be instructed to make explicit use of these datatypes.
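The hook Mike confirms here can be exercised end-to-end with a self-contained toy. All type names below are invented for illustration (this is not the Teradata dialect's code); the lookup table maps `+` between the two custom types to a chosen result type, the same way the dialect's `_expression_adaptations` tables would map SMALLINT + INTEGER to INTEGER:

```python
# Toy demonstration of a lookup-driven _adapt_expression hook deciding
# the result type of a binary expression. MyInteger/MyBigInteger are
# hypothetical stand-ins for dialect types.
from sqlalchemy import Column, MetaData, Table, util
from sqlalchemy import types as sqltypes
from sqlalchemy.sql import operators
from sqlalchemy.types import TypeEngine

class MyInteger(sqltypes.Integer):
    class Comparator(TypeEngine.Comparator):
        _blank_dict = util.immutabledict()

        def _adapt_expression(self, op, other_comparator):
            # Look up (operator, other operand type) -> result type,
            # falling back to this operand's own type.
            othertype = other_comparator.type.__class__
            lookup = self.type._expression_adaptations.get(
                op, self._blank_dict).get(othertype, self.type)
            return op, lookup

    comparator_factory = Comparator

    @property
    def _expression_adaptations(self):
        return {operators.add: {MyBigInteger: MyBigInteger}}

class MyBigInteger(sqltypes.BigInteger):
    pass

t = Table("t", MetaData(),
          Column("a", MyInteger()),
          Column("b", MyBigInteger()))

expr = t.c.a + t.c.b
print(type(expr.type))
```

Without the custom comparator, the expression would simply inherit the left operand's type; with it, the lookup table controls the result, which is the behavior the dialect wants to mirror.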



>
> Thanks,
> Wis
>


[sqlalchemy] BinaryExpression type inference for external dialects

2018-08-08 Thread Wis Kojohnjaratkul
Hi,

I'm working on the Teradata external dialect and I want to be able to 
infer, at the SQLAlchemy level, the expression type for binary operations 
(specifically arithmetic) that involve types supported by the dialect. The 
goal is to get them to match the types of binary expressions in Teradata.

For example, if we had the following Table where SMALLINT and INTEGER are 
in sqlalchemy_teradata:

table = Table('table_test', metadata,
              Column('c1', SMALLINT),
              Column('c2', INTEGER))

If we get the type from a BinaryExpression:

print(type((table.c.c1 + table.c.c2).type))

We get:



But if we were to reflect a table with the same operator and operands:

engine.execute(
    'CREATE Table t1 as ('
    'SELECT c1 + c2 as sum_c1_c2 from table_test)')

t = Table('t1', metadata, autoload=True)

print(type(t.columns['sum_c1_c2'].type))

We would instead get:



Which is what we expect from Teradata. So we would like:

print(type((table.c.c1 + table.c.c2).type))

to output:



It looks like providing your own Comparator (comparator_factory) class is 
the way to designate the resulting type of a BinaryExpression. I am able to 
override this class in the relevant data types and then define my own 
specific type inference behavior in the Comparator's _adapt_expression() 
method. I'm playing around with something basic like the following for a 
mixin for all the types implemented by the dialect (inspired by the 
Comparator in _LookupExpressionAdapter):

class _TDComparable:

    @property
    def _expression_adaptations(self):
        raise NotImplementedError()

    class Comparator(TypeEngine.Comparator):
        _blank_dict = util.immutabledict()

        def _adapt_expression(self, op, other_comparator):
            othertype = other_comparator.type.__class__
            lookup = self.type._expression_adaptations.get(
                op, self._blank_dict).get(
                othertype, self.type)
            return (op, lookup)

    comparator_factory = Comparator

and then in a derived type like VARCHAR I can do something like:

class VARCHAR(_TDComparable, sqltypes.VARCHAR):

    def __init__(self, length=None, charset=None, **kwargs):
        super(VARCHAR, self).__init__(length=length, **kwargs)
        self.charset = charset

    @property
    def _expression_adaptations(self):
        return {
            operators.concat_op: {
                CLOB: CLOB
            }
        }

As far as I can tell, this is one of the more straightforward ways to 
accomplish what I want.

I've noticed though that most of the keys and values in 
_expression_adaptations for the types defined in sqltypes seem to be 
generic types (probably because most of the underlying logic for type 
inference revolves around the more general _type_affinity?). My approach 
here is slightly different, and more specific. So maybe I should define my 
own private method/property for each of my dialect-implemented types 
instead of using _expression_adaptations? Is what I'm doing within the full 
support of SQLAlchemy? Is there a better way to accomplish what I want?

Either way, any insight into this and/or suggestions of what I could do 
would be greatly appreciated.

Thanks,
Wis
