Re: [sqlalchemy] Abstract Table Concrete Inheritance. Could not determine join condition between parent/child tables

2017-05-15 Thread Никита Крокош
So, what should I do if I need many encumbrances for every cadastral object 
(flat.encumbrances, building.encumbrances, etc.)?

If I understand correctly, I need something like:

class iCadastralObject(Base):
    __abstract__ = True

    def __init__(self, cadastral_region, cadastral_district,
                 cadastral_block, cadastral_object):
        self.cadastral_region = cadastral_region
        self.cadastral_district = cadastral_district
        self.cadastral_block = cadastral_block
        self.cadastral_object = cadastral_object

    # this is a combined PK
    cadastral_region = Column(Integer, primary_key=True, index=True)
    cadastral_district = Column(Integer, primary_key=True, index=True)
    cadastral_block = Column(Integer, primary_key=True, index=True)
    cadastral_object = Column(Integer, primary_key=True, index=True)

    @declared_attr
    def encumbrances(self):
        return relationship("iEncumbrance")

class Building(iCadastralObject):
    __tablename__ = 'building'

class Flat(iCadastralObject):
    __tablename__ = 'flat'

class Construction(iCadastralObject):
    __tablename__ = 'construction'

class iEncumbrance(Base):
    __tablename__ = 'iencumbrance'

    id = Column(Integer, primary_key=True, index=True)

    def __init__(self, cadastral_object):
        self.parent_cadastral_region = cadastral_object.cadastral_region
        self.parent_cadastral_district = cadastral_object.cadastral_district
        self.parent_cadastral_block = cadastral_object.cadastral_block
        self.parent_cadastral_object = cadastral_object.cadastral_object

    # FK fields
    parent_cadastral_region = Column(Integer, nullable=False)
    parent_cadastral_district = Column(Integer, nullable=False)
    parent_cadastral_block = Column(Integer, nullable=False)
    parent_cadastral_object = Column(Integer, nullable=False)


What code do I need to make Flat.encumbrances, etc. work? Should I write a 
custom query in the iCadastralObject class, something like encumbrances = 
session.query(iEncumbrance).filter(...)?
I think the example is pretty straightforward; could you supply some code 
I should use?

And if I have, for example, 10 child classes, in that case I will have a lot 
of boilerplate code, right?

On Tuesday, May 16, 2017 at 9:41:02 AM UTC+10, Mike Bayer wrote:
>
> when you use concrete inheritance, you now have three tables: building, 
> flat, construction.   If you'd like these to each have a relationship to 
> iencumbrance, that's three separate foreign key constraints.  Given the 
> four-column primary key, you'd need to have twelve columns total on 
> iencumbrance and three constraints.   You cannot have a relationship 
> from the "abstract" class alone, this class does not actually correspond 
> to any table. 
>
> Overall, if you do not need to emit a query of this form: 
>
> objects = session.query(iCadastralObject).filter(...).all() 
>
> then you should not use concrete inheritance.  You should map building / 
> flat / construction alone.  If you'd like them all to have common 
> features, use iCadastralObject as you are and add the __abstract__ flag; 
> don't actually use AbstractConcreteBase.   Concrete inheritance is 
> almost never used. 
>
>
>
> On 05/15/2017 07:03 PM, Никита Крокош wrote: 
> > This is a duplicate from: 
> > 
> http://stackoverflow.com/questions/43972912/abstract-table-concrete-inheritance-could-not-determine-join-condition-between

Re: [sqlalchemy] Abstract Table Concrete Inheritance. Could not determine join condition between parent/child tables

2017-05-15 Thread mike bayer
when you use concrete inheritance, you now have three tables: building, 
flat, construction.   If you'd like these to each have a relationship to 
iencumbrance, that's three separate foreign key constraints.  Given the 
four-column primary key, you'd need to have twelve columns total on 
iencumbrance and three constraints.   You cannot have a relationship 
from the "abstract" class alone, this class does not actually correspond 
to any table.


Overall, if you do not need to emit a query of this form:

objects = session.query(iCadastralObject).filter(...).all()

then you should not use concrete inheritance.  You should map building / 
flat / construction alone.  If you'd like them all to have common 
features, use iCadastralObject as you are and add the __abstract__ flag; 
don't actually use AbstractConcreteBase.   Concrete inheritance is 
almost never used.
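[Editor's sketch of the pattern described above, with a plain __abstract__ base and one concrete four-column foreign key per child table. The factory name encumbrance_for, the derived table names, and the use of the default keyword constructor instead of the positional __init__ are illustration choices, not code from the thread. It also addresses the boilerplate concern: ten child classes need ten one-line calls.]

```python
from sqlalchemy import Column, ForeignKeyConstraint, Integer
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

PK_COLS = ("cadastral_region", "cadastral_district",
           "cadastral_block", "cadastral_object")


class CadastralBase(Base):
    # plain __abstract__ mixin -- no AbstractConcreteBase, no polymorphic union
    __abstract__ = True

    cadastral_region = Column(Integer, primary_key=True)
    cadastral_district = Column(Integer, primary_key=True)
    cadastral_block = Column(Integer, primary_key=True)
    cadastral_object = Column(Integer, primary_key=True)


def encumbrance_for(owner_cls):
    """Generate a per-owner encumbrance class plus its four-column FK."""
    tname = owner_cls.__tablename__
    enc_cls = type(
        owner_cls.__name__ + "Encumbrance",
        (Base,),
        {
            "__tablename__": tname + "_encumbrance",
            "id": Column(Integer, primary_key=True),
            # one FK column per primary-key column of the owner table
            **{"parent_" + c: Column(Integer, nullable=False) for c in PK_COLS},
            "__table_args__": (
                ForeignKeyConstraint(
                    ["parent_" + c for c in PK_COLS],
                    [tname + "." + c for c in PK_COLS],
                ),
            ),
        },
    )
    # the composite FK lets relationship() infer the join on its own
    owner_cls.encumbrances = relationship(enc_cls, backref="parent_object")
    return enc_cls


class Flat(CadastralBase):
    __tablename__ = "flat"


class Building(CadastralBase):
    __tablename__ = "building"


FlatEncumbrance = encumbrance_for(Flat)
BuildingEncumbrance = encumbrance_for(Building)
```

With this, flat.encumbrances and enc.parent_object both work per class, each backed by its own table and constraint.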




On 05/15/2017 07:03 PM, Никита Крокош wrote:
This is a duplicate from: 
http://stackoverflow.com/questions/43972912/abstract-table-concrete-inheritance-could-not-determine-join-condition-between


[sqlalchemy] Abstract Table Concrete Inheritance. Could not determine join condition between parent/child tables

2017-05-15 Thread Никита Крокош
This is a duplicate from: 
http://stackoverflow.com/questions/43972912/abstract-table-concrete-inheritance-could-not-determine-join-condition-between
I've got the following example code:

models.py
 
class CadastralObject(Base):
    __tablename__ = 'cadastral_object'

    def __init__(self, cadastral_region, cadastral_district,
                 cadastral_block, cadastral_object):
        self.cadastral_region = cadastral_region
        self.cadastral_district = cadastral_district
        self.cadastral_block = cadastral_block
        self.cadastral_object = cadastral_object

    # this is a combined PK
    cadastral_region = Column(Integer, primary_key=True, index=True)
    cadastral_district = Column(Integer, primary_key=True, index=True)
    cadastral_block = Column(Integer, primary_key=True, index=True)
    cadastral_object = Column(Integer, primary_key=True, index=True)

    encumbrances = relationship("Encumbrance")

class Encumbrance(Base):
    __tablename__ = 'encumbrance'

    id = Column(Integer, primary_key=True, index=True)

    def __init__(self, cadastral_object):
        self.parent_cadastral_region = cadastral_object.cadastral_region
        self.parent_cadastral_district = cadastral_object.cadastral_district
        self.parent_cadastral_block = cadastral_object.cadastral_block
        self.parent_cadastral_object = cadastral_object.cadastral_object

    # FK fields
    parent_cadastral_region = Column(Integer, nullable=False)
    parent_cadastral_district = Column(Integer, nullable=False)
    parent_cadastral_block = Column(Integer, nullable=False)
    parent_cadastral_object = Column(Integer, nullable=False)

    parent_object = relationship(CadastralObject)

    __table_args__ = (
        ForeignKeyConstraint(
            [parent_cadastral_region,
             parent_cadastral_district,
             parent_cadastral_block,
             parent_cadastral_object],
            [CadastralObject.cadastral_region,
             CadastralObject.cadastral_district,
             CadastralObject.cadastral_block,
             CadastralObject.cadastral_object]),
        {}
    )

this code works as intended:

main.py

c = CadastralObject(1, 2, 3, 4)
session.add(c)
e = Encumbrance(c)
session.add(e)
session.commit()
print(c.encumbrances)
print(e.parent_object)


results:

[]



however, when I try to convert my code to concrete inheritance:

imodels.py

class iCadastralObject(AbstractConcreteBase, Base):

    def __init__(self, cadastral_region, cadastral_district,
                 cadastral_block, cadastral_object):
        self.cadastral_region = cadastral_region
        self.cadastral_district = cadastral_district
        self.cadastral_block = cadastral_block
        self.cadastral_object = cadastral_object

    # this is a combined PK
    cadastral_region = Column(Integer, primary_key=True, index=True)
    cadastral_district = Column(Integer, primary_key=True, index=True)
    cadastral_block = Column(Integer, primary_key=True, index=True)
    cadastral_object = Column(Integer, primary_key=True, index=True)

    @declared_attr
    def encumbrances(self):
        return relationship("iEncumbrance")


class Building(iCadastralObject):
    __tablename__ = 'building'

    __mapper_args__ = {
        'polymorphic_identity': 'building',
        'concrete': True
    }

    @declared_attr
    def encumbrances(self):
        return relationship("iEncumbrance")


class Flat(iCadastralObject):
    __tablename__ = 'flat'

    __mapper_args__ = {
        'polymorphic_identity': 'flat',
        'concrete': True
    }

    @declared_attr
    def encumbrances(self):
        return relationship("iEncumbrance")


class Construction(iCadastralObject):
    __tablename__ = 'construction'

    __mapper_args__ = {
        'polymorphic_identity': 'construction',
        'concrete': True
    }


class iEncumbrance(Base):
    __tablename__ = 'iencumbrance'

    id = Column(Integer, primary_key=True, index=True)

    def __init__(self, cadastral_object):
        self.parent_cadastral_region = cadastral_object.cadastral_region
        self.parent_cadastral_district = cadastral_object.cadastral_district
        self.parent_cadastral_block = cadastral_object.cadastral_block
        self.parent_cadastral_object = cadastral_object.cadastral_object

    # FK fields
    parent_cadastral_region = Column(Integer, nullable=False)
    parent_cadastral_district = Column(Integer, nullable=False)
    parent_cadastral_block = Column(Integer, nullable=False)
    parent_cadastral_object = Column(Integer, nullable=False)

Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread mike bayer



On 05/15/2017 11:56 AM, Zsolt Ero wrote:

I might not be understanding something, but for me there are two
different concepts here:

map_obj = dbsession.query(Map).get(id_)

is an object in memory, loaded with a long SELECT statement, allowing
us to get and set different attributes and the session / transaction
manager commits the auto-detected changes.

Whereas with

dbsession.query(Map).filter(Map.id == id_).update(
    {"screenshots": func.jsonb_set(Map.screenshots, '{size}',
                                   '"filename.jpg"')},
    synchronize_session='fetch')

there is no object in memory, what we are writing here is just a nicer
syntax for a one line SQL UPDATE query. Even the triggered SELECT
statement is just querying for a single .id, which we have anyway.


the purpose of synchronize_session is only if you happened to run *both* 
Python statements, so that you have map_obj present as a local variable, 
and wish to expire the now stale value of map_obj.screenshots, so that 
when you next access it, a SELECT is emitted to get the most recent 
value, e.g. the one that's the result of your UPDATE statement.


the "fetch" strategy is actually wasteful here because it runs the 
query() as a SELECT in order to locate the primary keys of the objects 
that might be locally present, but your query is simple enough that this 
is already apparent.  the "fetch" strategy currently doesn't even bother 
to get the new value right now and just expires, I forgot about this. 
this is why you see just the one wasteful SELECT statement.   "fetch" 
probably should be improved to actually fetch and directly update the 
values for the instances it locates, not sure why it wasn't done that 
way to start.


the "evaluate" strategy does everything in Python, but also won't work 
because it currently expects that the values which were set are also 
evaluatable in Python, also should be improved to at least do a simple 
"expire" for attributes that can't be evaluated in Python.


in this case the only strategy left is synchronize_session=False. 
However, if you have map_obj in memory, you'd need to run refresh() on it 
to get the new JSON value if you care about it.
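[Editor's sketch of the bulk-update form under discussion, against a hypothetical Map model; the model and the set_screenshot helper are illustration only. With no loaded objects there is nothing to synchronize, so synchronize_session=False skips the extra SELECT that 'fetch' would issue.]

```python
from sqlalchemy import Column, Integer, func
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Map(Base):
    __tablename__ = "maps"
    id = Column(Integer, primary_key=True)
    screenshots = Column(JSONB)


def set_screenshot(session, map_id, size, filename):
    """Single server-side UPDATE; returns the matched row count (0 or 1)."""
    return session.query(Map).filter(Map.id == map_id).update(
        {"screenshots": func.jsonb_set(
            Map.screenshots,
            "{%s}" % size,           # jsonb_set path, e.g. '{small}'
            '"%s"' % filename)},     # the new value must be a JSON literal
        synchronize_session=False,   # nothing loaded, nothing to expire
    )
```

If a stale map_obj is already held, session.refresh(map_obj, ['screenshots']) afterwards re-reads just that column.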









Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread Zsolt Ero
I might not be understanding something, but for me there are two
different concepts here:

map_obj = dbsession.query(Map).get(id_)

is an object in memory, loaded with a long SELECT statement, allowing
us to get and set different attributes and the session / transaction
manager commits the auto-detected changes.

Whereas with

dbsession.query(Map).filter(Map.id == id_).update(
    {"screenshots": func.jsonb_set(Map.screenshots, '{size}',
                                   '"filename.jpg"')},
    synchronize_session='fetch')

there is no object in memory, what we are writing here is just a nicer
syntax for a one line SQL UPDATE query. Even the triggered SELECT
statement is just querying for a single .id, which we have anyway.

Zsolt




Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread mike bayer



On 05/15/2017 10:54 AM, Zsolt Ero wrote:

Thanks, it is all clear now. Just out of interest, what is the point
of synchronize_session='fetch'?


that will do a SELECT and get the new value back and update your ORM 
object in memory.  Set synchronize_session=False if you don't care.





For me all it does is a simple SELECT maps.id AS maps_id FROM maps
WHERE maps.id = %(id_1)s

All I get as a return value is 0: not successful (probably id didn't
exist), while 1: successful. It is the same behaviour both with
'fetch' and False.

Zsolt

On 15 May 2017 at 16:33, mike bayer  wrote:



On 05/15/2017 10:31 AM, Zsolt Ero wrote:


I'm trying to run your example, but it doesn't work:

from sqlalchemy import func

m = request.dbsession.query(models.Map).get(3)
m.screenshots = func.jsonb_set(m.screenshots, '{key}', '"value"')
request.dbsession.flush()

It ends up in a (psycopg2.ProgrammingError) can't adapt type 'dict'.



jsonb_set(models.Map.screenshots, ...)

because this works against the column, not the value







Also, from the generated SQL it seems to me that it's also doing the
full JSONB update from client side, not just inserting a key into the
database server side.

UPDATE maps SET screenshots=jsonb_set(%(jsonb_set_1)s,
%(jsonb_set_2)s, %(jsonb_set_3)s) WHERE maps.id = %(maps_id)s
{'maps_id': 3, 'jsonb_set_3': '"value"', 'jsonb_set_2': '{key}',
'jsonb_set_1': {u'small': u'2ad139ee69cdcd9e.jpg', u'full':
u'68b3f51491ff1501.jpg'}}

On 15 May 2017 at 16:18, Zsolt Ero  wrote:


Thanks for the answer. My use case is the following:

I have an object (map_obj), which has screenshots in two sizes. I'm
using JSONB columns to store the screenshot filenames.

Now, the two screenshot sizes are generated in parallel. The code is
like the following:

map_obj = query(...by id...)
filename = generate_screenshot(size)  # long-running screenshot generation

try:
    dbsession.refresh(map_obj, ['screenshots'])
    map_obj.screenshots = dict(map_obj.screenshots, **{size: filename})
except Exception as e:
    logger.warning(...)

It worked well for 99.9% of the cases. The problem is that in the rare
case when both screenshots got rendered within a few milliseconds, one
of the screenshots got lost.

The simple solution was to add lockmode='update' to the refresh, so
this way the refreshes are blocking until the other finishes the
update.

But since this means locking a full row, I was thinking a simple JSONB
insertion would probably be better, since I can avoid locking the row.

Zsolt
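[Editor's note: stripped of the database, the rare failure described above is a textbook lost update -- each worker merges into a snapshot read before the other's write. Both fixes mentioned in the thread (SELECT ... FOR UPDATE via lockmode='update', or an atomic server-side jsonb_set) work by serializing the write. A plain-Python reproduction:]

```python
# the JSONB value as stored; both workers load it before either writes
stored = {}

snapshot_a = dict(stored)   # worker A: rendered the "small" size
snapshot_b = dict(stored)   # worker B: rendered the "full" size

# worker A read-modify-writes the whole dict back
stored = dict(snapshot_a, small="2ad139ee69cdcd9e.jpg")

# worker B does the same against its stale snapshot, clobbering A's write
stored = dict(snapshot_b, full="68b3f51491ff1501.jpg")

assert "small" not in stored   # A's screenshot is lost
```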




On 15 May 2017 at 15:58, mike bayer  wrote:




On 05/15/2017 09:32 AM, Zsolt Ero wrote:



In PostgreSQL 9.5+ it is finally possible to modify a single key inside
a
JSONB column. Usage is something like this:

update maps set screenshots=jsonb_set(screenshots, '{key}', '"value"')
where id = 10688

Is it possible to write this query using the ORM somehow? If not,
please
take it as a feature request.





You can use that function directly:

my_object = session.query(Maps).get(5)

my_object.screenshots = func.jsonb_set(my_object.screenshots, '{key}',
'"value"')

session.flush()


as far as "transparent" ORM use of that, like this:

my_object.screenshots[key] = "value"

right now that is a mutation of the value, and assuming you were using
MutableDict to detect this as an ORM change event, the ORM considers
"screenshots" to be a single value that would be the target of an
UPDATE,
meaning the whole JSON dictionary is passed into the UPDATE. There is no
infrastructure for the ORM to automatically turn certain column updates
into
finely-detailed SQL function calls.   I can imagine that there might be
some
event-based way to make this happen transparently within the flush,
however,
but I'd need to spend some time poking around to work out how that might
work.


I'm not familiar with what the advantage to jsonb_set() would be and I
can
only guess it's some kind of performance advantage.   I'd be curious to
see
under what scenarios being able to set one element of the JSON vs.
UPDATEing
the whole thing is a performance advantage significant compared to the
usual
overhead of the ORM flush process; that is, Postgresql is really fast,
and
for this optimization to be significant, you probably need to be calling
the
Core function directly anyway rather than going through the whole ORM
flush
process.   But this is all based on my assumption as to what your goal
of
using this function is.
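[Editor's sketch of the instance-level pattern above, with a hypothetical Map model. The fix noted earlier in the thread applies: pass the mapped column Map.screenshots, not the instance's dict, or psycopg2 will try to adapt the dict. The assignment stores a SQL expression on the attribute, which is rendered inside the UPDATE at flush time.]

```python
from sqlalchemy import Column, Integer, func
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Map(Base):
    __tablename__ = "maps"
    id = Column(Integer, primary_key=True)
    screenshots = Column(JSONB)


m = Map(id=3, screenshots={"small": "old.jpg"})
# column, not value: the whole jsonb_set() call executes server-side
m.screenshots = func.jsonb_set(Map.screenshots, "{small}", '"new.jpg"')
# session.add(m); session.flush()  # emits UPDATE maps SET screenshots=jsonb_set(...)
```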








Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread Jonathan Vanasco

On Monday, May 15, 2017 at 9:58:57 AM UTC-4, Mike Bayer wrote:
>
> I'd be curious to see under what scenarios being able to set one element 
> of the JSON 
> vs. UPDATEing the whole thing is a performance advantage significant 
> compared to the usual overhead of the ORM flush process; that is, 
> Postgresql is really fast, and for this optimization to be significant, 
> you probably need to be calling the Core function directly anyway rather 
> than going through the whole ORM flush process.  


I did a bunch of tests on this a while back, though regarding the 
very-similar HSTORE column, plus some tests on JSONB.

The big takeaways--

* after a certain amount of data is in the column, the most significant 
issue is bandwidth and timing from the payload transfer.
* there is a decent performance upside if you're in a sweet spot where the 
column payload is TOASTable and that's the only update.  in that instance, 
postgres just updates the toast table -- otherwise it does the standard 
routine of "mark the old row for deletion, copy the row and update it as 
the new row".  toasting a jsonb column has been tweaked a lot; the last 
time I checked it had to be "just right" -- big enough to toast, but small 
enough to fit in a single toast column.  

tl;dr: it won't noticeably affect performance for most situations.  

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread Zsolt Ero
Thanks, it is all clear now. Just out of interest, what is the point
of synchronize_session='fetch'?

For me all it does is a simple SELECT maps.id AS maps_id FROM maps
WHERE maps.id = %(id_1)s

All I get as a return value is 0: not successful (probably id didn't
exist), while 1: successful. It is the same behaviour both with
'fetch' and False.

Zsolt

On 15 May 2017 at 16:33, mike bayer  wrote:
>
>
> On 05/15/2017 10:31 AM, Zsolt Ero wrote:
>>
>> I'm trying to run your example, but it doesn't work:
>>
>> from sqlalchemy import func
>>
>> m = request.dbsession.query(models.Map).get(3)
>> m.screenshots = func.jsonb_set(m.screenshots, '{key}', '"value"')
>> request.dbsession.flush()
>>
>> It ends up in a (psycopg2.ProgrammingError) can't adapt type 'dict'.
>
>
> jsonb_set(models.Map.screenshots, ...)
>
> because this works against the column, not the value

Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread mike bayer



On 05/15/2017 10:31 AM, Zsolt Ero wrote:

I'm trying to run your example, but it doesn't work:

from sqlalchemy import func

m = request.dbsession.query(models.Map).get(3)
m.screenshots = func.jsonb_set(m.screenshots, '{key}', '"value"')
request.dbsession.flush()

It ends up in a (psycopg2.ProgrammingError) can't adapt type 'dict'.


jsonb_set(models.Map.screenshots, ...)

because this works against the column, not the value
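As a self-contained sketch of that correction (the minimal `Map` model here is a hypothetical stand-in for the poster's `models.Map`; the expression is compiled rather than executed against a database):

```python
from sqlalchemy import Column, Integer, func
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Map(Base):
    """Stand-in for the poster's models.Map."""
    __tablename__ = "maps"
    id = Column(Integer, primary_key=True)
    screenshots = Column(JSONB)


# Pass the mapped *column* (Map.screenshots), not the loaded value
# (m.screenshots): the loaded value is a plain dict, which psycopg2
# cannot adapt as a jsonb_set() argument.
expr = func.jsonb_set(Map.screenshots, "{key}", '"value"')
print(expr)
```

Assigning `m.screenshots = expr` before the flush then renders `jsonb_set(maps.screenshots, ...)` in the UPDATE instead of embedding the whole client-side dictionary.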







Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread mike bayer



On 05/15/2017 10:18 AM, Zsolt Ero wrote:

Thanks for the answer. My use case is the following:

I have an object (map_obj), which has screenshots in two sizes. I'm
using JSONB columns to store the screenshot filenames.

Now, the two screenshot sizes are generated in parallel. The code is
like the following:

map_obj = query(...by id...)
filename = generate_screenshot(size)  # long running screenshot generation

try:
    dbsession.refresh(map_obj, ['screenshots'])
    map_obj.screenshots = dict(map_obj.screenshots, **{size: filename})
except Exception as e:
    logger.warning(...)

It worked well for 99.9% of the cases. The problem is that in the rare
case when both screenshots got rendered within a few milliseconds, one
of the screenshots got lost.

The simple solution was to add lockmode='update' to the refresh, so
this way the refreshes are blocking until the other finishes the
update.

But since this means locking a full row, I was thinking a simple JSONB
insertion would probably be better, since I can avoid locking the row.



OK since you're looking to get around a race and do an "atomic" update, 
I'd recommend running UPDATE straight, with ORM you can get this with 
query.update()


session.query(YourClass).filter(YourClass.id == 
whatever).update({"screenshots": func.jsonb_set(YourClass.screenshots, 
"key", "value")}, synchronize_session='fetch')


that will also refetch the current value of the row
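A minimal, self-contained sketch of that recommendation, using a Core `table()` stand-in for the mapped class (the `maps`/`screenshots` names follow the earlier messages; the statement is compiled against the PostgreSQL dialect here rather than executed):

```python
from sqlalchemy import column, func, table, update
from sqlalchemy.dialects import postgresql

# Lightweight stand-in for the maps table from the earlier messages.
maps = table("maps", column("id"), column("screenshots"))

# jsonb_set() runs against the column's current value inside the UPDATE
# itself, so the key insertion is atomic and no row lock is needed.
stmt = (
    update(maps)
    .where(maps.c.id == 3)
    .values(screenshots=func.jsonb_set(maps.c.screenshots, "{key}", '"value"'))
)
print(stmt.compile(dialect=postgresql.dialect()))
```

The ORM spelling above (`query(...).update({...}, synchronize_session='fetch')`) emits the same UPDATE; the extra SELECT that 'fetch' issues is how the session brings its in-memory state back in line with the row.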






Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread Zsolt Ero
I'm trying to run your example, but it doesn't work:

from sqlalchemy import func

m = request.dbsession.query(models.Map).get(3)
m.screenshots = func.jsonb_set(m.screenshots, '{key}', '"value"')
request.dbsession.flush()

It ends up in a (psycopg2.ProgrammingError) can't adapt type 'dict'.

Also, from the generated SQL it seems to me that it's also doing the
full JSONB update from client side, not just inserting a key into the
database server side.

UPDATE maps SET screenshots=jsonb_set(%(jsonb_set_1)s,
%(jsonb_set_2)s, %(jsonb_set_3)s) WHERE maps.id = %(maps_id)s
{'maps_id': 3, 'jsonb_set_3': '"value"', 'jsonb_set_2': '{key}',
'jsonb_set_1': {u'small': u'2ad139ee69cdcd9e.jpg', u'full':
u'68b3f51491ff1501.jpg'}}


Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread Zsolt Ero
Thanks for the answer. My use case is the following:

I have an object (map_obj), which has screenshots in two sizes. I'm
using JSONB columns to store the screenshot filenames.

Now, the two screenshot sizes are generated in parallel. The code is
like the following:

map_obj = query(...by id...)
filename = generate_screenshot(size)  # long running screenshot generation

try:
    dbsession.refresh(map_obj, ['screenshots'])
    map_obj.screenshots = dict(map_obj.screenshots, **{size: filename})
except Exception as e:
    logger.warning(...)

It worked well for 99.9% of the cases. The problem is that in the rare
case when both screenshots got rendered within a few milliseconds, one
of the screenshots got lost.

The simple solution was to add lockmode='update' to the refresh, so
this way the refreshes are blocking until the other finishes the
update.

But since this means locking a full row, I was thinking a simple JSONB
insertion would probably be better, since I can avoid locking the row.

Zsolt





Re: [sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread mike bayer



On 05/15/2017 09:32 AM, Zsolt Ero wrote:
In PostgreSQL 9.5+ it is finally possible to modify a single key inside 
a JSONB column. Usage is something like this:


update maps set screenshots=jsonb_set(screenshots, '{key}', '"value"') 
where id = 10688


Is it possible to write this query using the ORM somehow? If not, please 
take it as a feature request.



You can use that function directly:

my_object = session.query(Maps).get(5)

my_object.screenshots = func.jsonb_set(my_object.screenshots, '{key}', 
'"value"')


session.flush()


as far as "transparent" ORM use of that, like this:

my_object.screenshots[key] = "value"

right now that is a mutation of the value, and assuming you were using 
MutableDict to detect this as an ORM change event, the ORM considers 
"screenshots" to be a single value that would be the target of an 
UPDATE, meaning the whole JSON dictionary is passed into the UPDATE. 
There is no infrastructure for the ORM to automatically turn certain 
column updates into finely-detailed SQL function calls.   I can imagine 
that there might be some event-based way to make this happen 
transparently within the flush, but I'd need to spend some time 
poking around to work out how that might work.
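The MutableDict wiring referred to above looks roughly like this (a sketch; `MapRecord` and its columns are hypothetical stand-ins, not the poster's actual schema):

```python
from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class MapRecord(Base):
    """Hypothetical model; MutableDict turns in-place dict mutation
    into an ORM change event on the column as a whole."""
    __tablename__ = "map_record"
    id = Column(Integer, primary_key=True)
    screenshots = Column(MutableDict.as_mutable(JSONB))


m = MapRecord(id=1, screenshots={"small": "a.jpg"})
m.screenshots["full"] = "b.jpg"  # detected as a change to `screenshots`
```

As described above, the resulting UPDATE still sends the entire dictionary; MutableDict only solves change detection, not server-side patching.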



I'm not familiar with what the advantage to jsonb_set() would be and I 
can only guess it's some kind of performance advantage.   I'd be curious 
to see under what scenarios being able to set one element of the JSON 
vs. UPDATEing the whole thing is a performance advantage significant 
compared to the usual overhead of the ORM flush process; that is, 
Postgresql is really fast, and for this optimization to be significant, 
you probably need to be calling the Core function directly anyway rather 
than going through the whole ORM flush process.   But this is all based 
on my assumption as to what your goal of using this function is.










Re: [sqlalchemy] refresh's lockmode and with_for_update

2017-05-15 Thread mike bayer



On 05/15/2017 08:53 AM, Zsolt Ero wrote:

Right now, the documentation for session.refresh() mentions:

lockmode – Passed to the Query as used by with_lockmode().


Where clicking on with_lockmode() links to the following note:

Deprecated since version 0.9.0: superseded by Query.with_for_update().


My questions are:
1. How should I use refresh with specific update modes? Should I just 
disregard that deprecation note?


looking at the code, it seems like refresh() -> lockmode is still 
feeding straight to the old query.with_lockmode() (this is surprising). 
 So w/ refresh, you should use those lockmode arguments, yes.  The 
newer with_for_update() separates things into the use of individual 
flags, which is nice, but for session.refresh(), we'd need some way to 
specify that bundle of flags.


This is definitely a bug that with_lockmode() is legacy but we forgot to 
update refresh() so the issue is at 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3991/sessionrefresh-load_on_ident-still. 
  "lockmode" will continue to work however.








2. Are there plans to support all 4 update modes of recent PostgreSQL's? 
https://www.postgresql.org/docs/9.6/static/explicit-locking.html
In which case, would it be simpler to just use the DB supplied names 
instead of trying to encode and decode it into boolean parameters to a 
general function? I find that the documentation of PostgreSQL's mode is 
already quite complicated and definitely needs to be properly read by 
anyone trying to use one, so trying to hide it behind a generic function 
might just lead to confusion, in my opinion.


those modes are all supported by with_for_update(), that documentation 
page served as the guide for when the feature was created.   The reason 
there are boolean flags is to support other databases besides Postgresql 
as well as to provide a consistent place to provide for "OF", which 
refers to a SQL expression.  The breakdown of how the flags translate 
to MySQL, Oracle, and Postgresql is at 
http://docs.sqlalchemy.org/en/latest/core/selectable.html#sqlalchemy.sql.expression.GenerativeSelect.with_for_update. 
  A table view of these settings would also be appropriate as an 
addition to the documentation.
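That flag-to-mode mapping can be checked by compiling Core selects against the PostgreSQL dialect (a sketch with a throwaway `users` table, using the 1.4-style `select()` call):

```python
from sqlalchemy import column, select, table
from sqlalchemy.dialects import postgresql

users = table("users", column("id"))
pg = postgresql.dialect()

# How the boolean flags translate to PostgreSQL's row-lock modes,
# plus the OF clause mentioned above:
modes = {
    "FOR UPDATE": select(users).with_for_update(),
    "FOR NO KEY UPDATE": select(users).with_for_update(key_share=True),
    "FOR SHARE": select(users).with_for_update(read=True),
    "FOR KEY SHARE": select(users).with_for_update(read=True, key_share=True),
    "FOR UPDATE OF users": select(users).with_for_update(of=users),
}
for mode, stmt in modes.items():
    assert str(stmt.compile(dialect=pg)).endswith(mode)
```

The same flags are accepted by Query.with_for_update(); on MySQL and Oracle the dialect emits whichever subset of these clauses the backend supports.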










Why not just make those specific modes an imports like other specifics 
already are.


from sqlalchemy.dialects.postgresql import lock_key_no_update
q = sess.query(User).with_for_update(lock_key_no_update)
and
session.refresh(instance, lockmode= lock_key_no_update))

Just my idea



[sqlalchemy] jsonb_set in PostgreSQL 9.5+

2017-05-15 Thread Zsolt Ero
In PostgreSQL 9.5+ it is finally possible to modify a single key inside a 
JSONB column. Usage is something like this:

update maps set screenshots=jsonb_set(screenshots, '{key}', '"value"') 
where id = 10688

Is it possible to write this query using the ORM somehow? If not, please 
take it as a feature request.





Re: [sqlalchemy] complex primary key with server_default timestamp

2017-05-15 Thread mike bayer



On 05/15/2017 06:43 AM, mdob wrote:
Just curious. Let's say we have a complex primary key of user_id 
(integer), project_id (integer) and date (timestamp). After adding and 
committing we don't have the PK and we won't be able to update it. Is 
that right?


If you were using PostgreSQL this would be no problem, because we use 
RETURNING in the INSERT to get all the server default values back 
directly.  But MySQL does not support this; for primary key values that 
are server-generated, we can only get the AUTO_INCREMENT integer back, 
that's correct.   We don't call last_insert_id() directly; this is a 
function of the driver, which gives us the number via cursor.lastrowid.


For things like dates, for us to know the PK of the row we just 
inserted, we need to generate it before we do the INSERT.   So in your 
Table definition you need to use a client-side default such as 
"default=func.utcnow()" so that SQLAlchemy Core knows it can run that 
default-generation function as a separate SQL SELECT; this does not 
preclude having a server default on the column as well, it just would 
not be used in the case of an ORM insert.
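A minimal, self-contained sketch of that suggestion (substituting an in-memory SQLite database for MySQL and a Python-side callable default for the SQL function, both assumptions made purely so the snippet runs anywhere):

```python
import datetime

from sqlalchemy import Column, DateTime, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Timesheet(Base):
    __tablename__ = "timesheet"
    user_id = Column(Integer, primary_key=True)
    project_id = Column(Integer, primary_key=True)
    # Client-side default: SQLAlchemy evaluates this before the INSERT,
    # so the full composite PK is known without needing RETURNING.
    date = Column(DateTime, primary_key=True, default=datetime.datetime.utcnow)

engine = create_engine("sqlite://")  # in-memory stand-in for the MySQL example
Base.metadata.create_all(engine)

session = sessionmaker(engine)()
t1 = Timesheet(user_id=1, project_id=1)
session.add(t1)
session.commit()

# No ObjectDeletedError here: the ORM can re-select the row because it
# knew every PK column's value at INSERT time.
print(t1.date)
```

With server_default=FetchedValue() instead, the ORM never learns the timestamp, which is exactly why the post-commit refresh in the quoted log selected `timesheet.date IS NULL` and failed.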





[quoted code and log snipped; they appear in full in the original post below]


[sqlalchemy] refresh's lockmode and with_for_update

2017-05-15 Thread Zsolt Ero
Right now, the documentation for session.refresh() says:

lockmode – Passed to the Query as used by with_lockmode().

Clicking through to with_lockmode() leads to this note:

Deprecated since version 0.9.0: superseded by Query.with_for_update().

My questions are:
1. How should I use refresh() with specific lock modes? Should I just 
disregard that deprecation note?

2. Are there plans to support all four row-level lock modes of recent 
PostgreSQL releases? 
https://www.postgresql.org/docs/9.6/static/explicit-locking.html
In that case, would it be simpler to just use the database's own names 
instead of trying to encode and decode them into boolean parameters of a 
generic function? I find that the documentation of PostgreSQL's modes is 
already quite complicated and definitely needs to be read carefully by 
anyone trying to use one, so hiding it behind a generic function might 
just lead to confusion, in my opinion.

Why not just make those specific modes importable, like other dialect 
specifics already are:

from sqlalchemy.dialects.postgresql import lock_key_no_update
q = sess.query(User).with_for_update(lock_key_no_update)
and
session.refresh(instance, lockmode=lock_key_no_update)

Just my idea
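As it happens, all four PostgreSQL modes are already reachable; they are just spelled as boolean flags on with_for_update() rather than as importable names. A sketch of the flag-to-clause mapping as it compiles under the PostgreSQL dialect (using a throwaway Core table; Query.with_for_update() takes the same flags):

```python
from sqlalchemy.sql import table, column
from sqlalchemy.dialects import postgresql

# Hypothetical table just to have something to select from.
user = table("user", column("id"))

# The two booleans combine into the four PostgreSQL lock modes:
#   (no flags)                -> FOR UPDATE
#   key_share=True            -> FOR NO KEY UPDATE
#   read=True                 -> FOR SHARE
#   read=True, key_share=True -> FOR KEY SHARE
variants = [
    {},
    {"key_share": True},
    {"read": True},
    {"read": True, "key_share": True},
]
for flags in variants:
    stmt = user.select().with_for_update(**flags)
    sql = str(stmt.compile(dialect=postgresql.dialect()))
    print(flags, "->", sql.splitlines()[-1])
```

with_for_update() additionally accepts nowait=, skip_locked= and of= for the corresponding clause options.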



[sqlalchemy] complex primary key with server_default timestamp

2017-05-15 Thread mdob
Just curious. Let's say we have a complex primary key of user_id (integer), 
project_id (integer) and date (timestamp). After adding and committing we 
don't have the PK and we won't be able to update it. Is that right?

If it was an auto-increment integer then it would probably be fine: the PK 
would be fetched using last_insert_id() in MySQL or a similar method in 
other dialects.

from sqlalchemy import create_engine, Column, Integer, TIMESTAMP, FetchedValue
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

engine = create_engine('mysql+mysqldb://root:@192.168.1.12:3306/test?charset=utf8',
                       echo=True)

class Timesheet(Base):
    __tablename__ = 'timesheet'
    user_id = Column(Integer, primary_key=True)
    project_id = Column(Integer, primary_key=True)
    date = Column(TIMESTAMP(), primary_key=True, nullable=False,
                  server_default=FetchedValue())


session = sessionmaker(engine)()
t1 = Timesheet(user_id=1, project_id=1)
session.add(t1)
session.commit()

print t1.date


2017-05-15 11:46:09,411 INFO sqlalchemy.engine.base.Engine SHOW VARIABLES 
LIKE 'sql_mode'
2017-05-15 11:46:09,411 INFO sqlalchemy.engine.base.Engine ()
2017-05-15 11:46:09,412 INFO sqlalchemy.engine.base.Engine SELECT DATABASE()
2017-05-15 11:46:09,412 INFO sqlalchemy.engine.base.Engine ()
2017-05-15 11:46:09,412 INFO sqlalchemy.engine.base.Engine show collation 
where `Charset` = 'utf8' and `Collation` = 'utf8_bin'
2017-05-15 11:46:09,413 INFO sqlalchemy.engine.base.Engine ()
2017-05-15 11:46:09,413 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test plain returns' AS CHAR(60)) AS anon_1
2017-05-15 11:46:09,413 INFO sqlalchemy.engine.base.Engine ()
2017-05-15 11:46:09,414 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test unicode returns' AS CHAR(60)) AS anon_1
2017-05-15 11:46:09,414 INFO sqlalchemy.engine.base.Engine ()
2017-05-15 11:46:09,414 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test collated returns' AS CHAR CHARACTER SET utf8) COLLATE utf8_bin 
AS anon_1
2017-05-15 11:46:09,414 INFO sqlalchemy.engine.base.Engine ()
2017-05-15 11:46:09,415 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2017-05-15 11:46:09,416 INFO sqlalchemy.engine.base.Engine INSERT INTO 
timesheet (user_id, project_id) VALUES (%s, %s)
2017-05-15 11:46:09,416 INFO sqlalchemy.engine.base.Engine (1, 1)
2017-05-15 11:46:09,416 INFO sqlalchemy.engine.base.Engine COMMIT
Traceback (most recent call last):
  File "/home/mike/projects/sandbox/box7.py", line 21, in <module>
    print t1.date
  File "/home/mike/envs/slashdb9/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 237, in __get__
    return self.impl.get(instance_state(instance), dict_)
  File "/home/mike/envs/slashdb9/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 578, in get
    value = state._load_expired(state, passive)
  File "/home/mike/envs/slashdb9/local/lib/python2.7/site-packages/sqlalchemy/orm/state.py", line 474, in _load_expired
    self.manager.deferred_scalar_loader(self, toload)
  File "/home/mike/envs/slashdb9/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 669, in load_scalar_attributes
    raise orm_exc.ObjectDeletedError(state)
sqlalchemy.orm.exc.ObjectDeletedError: Instance '<Timesheet at 0x7f9ad5931d50>' has been deleted, or its row is otherwise not present.
2017-05-15 11:46:09,419 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2017-05-15 11:46:09,420 INFO sqlalchemy.engine.base.Engine SELECT 
timesheet.user_id AS timesheet_user_id, timesheet.project_id AS 
timesheet_project_id, timesheet.date AS timesheet_date 
FROM timesheet 
WHERE timesheet.user_id = %s AND timesheet.project_id = %s AND 
timesheet.date IS NULL
2017-05-15 11:46:09,420 INFO sqlalchemy.engine.base.Engine (1, 1)

