[sqlalchemy] Re: Joinedload from child to parent in a joined table relationship

2020-11-03 Thread Alex Collins
Thanks so much! I was interpreting the parenthesis as a subquery. Been 
banging my head against the wall thinking this was SQLAlchemy nonsense for 
quite a while and it was just a simple misundertstanding of syntax. Now, 
I've got the information I should be able to fix this within the far 
messier application code. 
On Tuesday, November 3, 2020 at 8:57:18 AM UTC-3:30 Alex Collins wrote:

> Trying to configure a set of relationships to all be joined loaded and the 
> particular relationship structure doesn’t seem to want to join. I have a 
> one-to-many relationship where the many is the child in joined table 
> inheritance. The foreign key to my source table is on the polymorphic child 
> table. But, however I configure the relationship it does a subquery instead 
> of a joined load on the parent class.
>
> Built a test application as a demonstration. What I want is to have the 
> script below function the same way but, the query at the end outputs a 
> joinedload for PolyParent instead of a subquery. 
>
> from sqlalchemy import Column, ForeignKey, Integer, Text, create_engine
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import relationship, sessionmaker
>
> Base = declarative_base()
>
> class PolyParent(Base):
> __tablename__ = "poly_parent"
> id = Column(Integer, primary_key=True)
> type = Column(Text)
> __mapper_args__ = {"polymorphic_identity": "poly_parent", 
> "polymorphic_on": type}
>
> class PolyChild(PolyParent):
> __tablename__ = "poly_child"
> id = Column(Integer, ForeignKey("poly_parent.id"), primary_key=True)
> parent_id = Column(Integer, ForeignKey("source.id"))
> __mapper_args__ = {"polymorphic_identity": "poly_child"}
>
> class Source(Base):
> __tablename__ = "source"
> id = Column(Integer, primary_key=True)
> children = relationship(PolyChild)
>
> engine = create_engine("sqlite://")
> session = sessionmaker(bind=engine)()
> Base.metadata.create_all(bind=engine)
>
> print(session.query(Source).join(Source.children))
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/dad45bfd-a764-4354-8b6c-fe5ce58db925n%40googlegroups.com.


[sqlalchemy] Joinedload from child to parent in a joined table relationship

2020-11-03 Thread Alex Collins
Trying to configure a set of relationships to all be joined loaded and the 
particular relationship structure doesn’t seem to want to join. I have a 
one-to-many relationship where the many is the child in joined table 
inheritance. The foreign key to my source table is on the polymorphic child 
table. But, however I configure the relationship it does a subquery instead 
of a joined load on the parent class.

Built a test application as a demonstration. What I want is to have the 
script below function the same way but, the query at the end outputs a 
joinedload for PolyParent instead of a subquery. 

from sqlalchemy import Column, ForeignKey, Integer, Text, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class PolyParent(Base):
__tablename__ = "poly_parent"
id = Column(Integer, primary_key=True)
type = Column(Text)
__mapper_args__ = {"polymorphic_identity": "poly_parent", 
"polymorphic_on": type}

class PolyChild(PolyParent):
__tablename__ = "poly_child"
id = Column(Integer, ForeignKey("poly_parent.id"), primary_key=True)
parent_id = Column(Integer, ForeignKey("source.id"))
__mapper_args__ = {"polymorphic_identity": "poly_child"}

class Source(Base):
__tablename__ = "source"
id = Column(Integer, primary_key=True)
children = relationship(PolyChild)

engine = create_engine("sqlite://")
session = sessionmaker(bind=engine)()
Base.metadata.create_all(bind=engine)

print(session.query(Source).join(Source.children))

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/70b48614-d1c0-42f4-8792-ed96f181915dn%40googlegroups.com.


[sqlalchemy] Re: Custom secondary relation with composite primary keys

2020-04-26 Thread Alex Plugaru
Hi John,

Composite primary keys: id + account_id is for multi-tenancy and consistent 
and efficient indexes/joins. Multiple customers use the same postgres 
database and to the split of their data, the most efficient way that I know 
is to add a tenant id (in our case it's account_id) for each table.

Q: Why composite primary keys? Why not just primary key on id? 

Usually the cardinality of you tenant_id will be a lot smaller than the 
`id` of your table so that should make lookups much faster. Note that only 
works if you have the correct order in your constraint - meaning that 
tentant id should always be first: Example:  constraint your_table_pkey 
primary key (account_id, id) 

Q: Great, but why not a composite unique index?

That works too, primary key is just a unique index anyway behind the 
scenes, but it allows some tools (DataGrip SQL client for example) to work 
better by automatically generating the correct query for a join based on 
the primary/foreign key constraint for example.


Hope it helps, just remember that the order of columns in an index matters 
- if you know you're gonna have a lot of data it's not going to be easy to 
change a primary key on a table that already has a lot of data.


On Sunday, April 26, 2020 at 10:43:46 AM UTC-7, John Walker wrote:
>
> Hello Alex,
>
> This is super old, so I don't have a lot of hope.
> But I'm wondering if you could explain a line in your example text.
>
> I'm trying to figure out if I need a data model similar to yours, but I'm 
> not sure.
>
> Could you explain the datamodel/biz requirements reasoning behind this 
> quote "This is done for a few reasons, but let's say it's done to keep 
> everything consistent."
>
> I would kill to know what the "few reasons" are, it might help me big time.
>
> Thanks for any time/help.
> John
>
> On Friday, April 28, 2017 at 7:49:40 PM UTC-6, Alex Plugaru wrote:
>>
>> Hello, 
>>
>> There are 3 tables: `*Account*`, `*Role*`, `*User*`. Both `*Role*` and `
>> *User*` have a foreign key `*account_id*` that points to `*Account*`.
>>
>> A user can have multiple roles, hence the `*roles_users*` table which 
>> acts as the secondary relation table between `*Role*` and `*User*`.
>>
>> The `*Account*` table is a tenant table for our app, it is used to 
>> separate different customers.
>>
>> Note that all tables have (besides `*Account*`) have composite primary 
>> keys with `*account_id*`. This is done for a few reasons, but let's say 
>> it's done to keep everything consistent.
>>
>> Now if I have a simple secondary relationship (`*User.roles*` - the one 
>> that is commented out) all works as expected. Well kind of.. it throws a 
>> legitimate warning (though I believe it should be an error):
>>
>>
>> SAWarning: relationship 'User.roles' will copy column role.account_id to 
>> column roles_users.account_id, which conflicts with relationship(s): 
>> 'User.roles' (copies user.account_id to roles_users.account_id). Consider 
>> applying viewonly=True to read-only relationships, or provide a 
>> primaryjoin condition marking writable columns with the foreign() 
>> annotation.
>>
>> That's why I created the second relation `*User.roles*` - the one that 
>> is not commented out. Querying works as expected which has 2 conditions on 
>> join and everything. However I get this error when I try to save some roles 
>> on the user:
>>
>> sqlalchemy.orm.exc.UnmappedColumnError: Can't execute sync rule for 
>> source column 'roles_users.role_id'; mapper 'Mapper|User|user' does not 
>> map this column.  Try using an explicit `foreign_keys` collection which 
>> does not include destination column 'role.id' (or use a viewonly=True 
>> relation).
>>
>>
>> As far as I understand it, SA is not able to figure out how to save the 
>> secondary because it has a custom `*primaryjoin*` and `*secondaryjoin*` 
>> so it proposes to use `*viewonly=True*` which has the effect of just 
>> ignoring the roles relation when saving the model.
>>
>> The question is how to save the roles for a user without having to do it 
>> by hand (the example is commented out in the code). In the real app we have 
>> many secondary relationships and we're saving them in many places. It would 
>> be super hard to rewrite them all.
>>
>> Is there a solution to keep using `*User.roles = some_roles*` while 
>> keeping the custom `*primaryjoin*` and `*secondaryjoin*` below?
>>
>> The full example using SA 1.1.9:
>>
>>
>> from sqlalchemy import create_engine, Column, Integer, Text, Table, 
>> ForeignKeyConstraint, ForeignKey, and_
>> from sqlal

[sqlalchemy] Re: Correct and easy method to copy tables from MySQL to Oracle

2020-02-03 Thread Alex Hill
The answers provided by Gord Thompson are good however i've stumbled upon a 
few issues:

   - Working with Windows Subsystem Linux can be quite the hassle if you 
   have multiple partitions on your hard-drive, for example, I use the 
   Anaconda Distribution which is installed in my D:/ drive. So unless you 
   have some familiarity with WSL I would somewhat avoid it. 
   - If you insist on working with WSL, depending on the linux distribution 
   you have chosen you're going to need to update your python inside your WSL 
   and then install and IDE. If you will be using google search, you'll 
   eventually be lead to a tutorial that recommends installing Pycharm, 
   pycharm is a good IDE but you're going to have set the interpreter in its 
   settings, which again depends on how familiar you are with these things. 
   There is a WSL version interpreter available with the professional version 
   of pycharm.
   - Overall trying to work with WSL means looking up a lot of stuff on 
   google and is a process which is kind of resource intensive.
   - Working with virtual box is probably the best and easiest solution, I 
   personally don't have a laptop strong enough to make it work as fast and 
   responsive as I would like it to be.

Now, if you don't want install WSL or virtual box. Then the solution I used 
is this: 
https://www.slideshare.net/Stiivi/python-business-intelligence-pydata-2012-talk

Starting from slide 35 it provides an easy solution for copying a single 
table, I used it and I am satisfied. Just be careful when copying from 
Oracle to MySQL as their isn't a MySQL equivalent to the NUMBER data type. 
This solution requires sqlalchemy.


On Saturday, February 1, 2020 at 10:58:06 PM UTC+2, Alex Hill wrote:
>
> Hello everyone,
>
> I apologize if this question has been answered before, i've done some of 
> my of searching and solution testing before coming here.
>
> I've even created a thread on stackoverflow 
> https://stackoverflow.com/questions/60008289/copy-or-migrate-tables-from-mysql-database-to-oracle-11g-and-vice-versa-using-py
>  
> <https://www.google.com/url?q=https%3A%2F%2Fstackoverflow.com%2Fquestions%2F60008289%2Fcopy-or-migrate-tables-from-mysql-database-to-oracle-11g-and-vice-versa-using-py=D=1=AFQjCNGx6fKWf22da4AgWDdT2yeLVOW6VA>
>
> I am using Python 3.6 & sqlalchemy 1.3.13, mysql workbench 8.0 and sql 
> developer vers.19
>
> Just like the title I would like to copy all the tables and their 
> properties from MySQL to Oracle. You can see the solutions I have used in 
> the link above after edit 1.
> Up till now none of them worked except pandas read_mysql and to_sql but is 
> it the right way to do it, does it really copy everything? is there anyway 
> to "automate" the process without having to specify the table I want to 
> copy?
>
> I wanted to use etlalchemy which builds upon sqlalchemy but it's not 
> available for windows (which i'm using)
>
> Thank you for your help, this project is related to school.
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/e7305456-a501-4082-9d87-24aecd2894f3%40googlegroups.com.


[sqlalchemy] Correct and easy method to copy tables from MySQL to Oracle

2020-02-01 Thread Alex Hill
Hello everyone,

I apologize if this question has been answered before, i've done some of my 
of searching and solution testing before coming here.

I've even created a thread on stackoverflow 
https://stackoverflow.com/questions/60008289/copy-or-migrate-tables-from-mysql-database-to-oracle-11g-and-vice-versa-using-py

I am using Python 3.6 & sqlalchemy 1.3.13, mysql workbench 8.0 and sql 
developer vers.19

Just like the title I would like to copy all the tables and their 
properties from MySQL to Oracle. You can see the solutions I have used in 
the link above after edit 1.
Up till now none of them worked except pandas read_mysql and to_sql but is 
it the right way to do it, does it really copy everything? is there anyway 
to "automate" the process without having to specify the table I want to 
copy?

I wanted to use etlalchemy which builds upon sqlalchemy but it's not 
available for windows (which i'm using)

Thank you for your help, this project is related to school.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/7dcc0874-9e52-4231-a0b5-55539ee111fb%40googlegroups.com.


[sqlalchemy] Receiving this error: sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('HYC00', u'[HYC00] [Microsoft][ODBC SQL Server Driver]Optional feature not implemented (0) (SQLBindParameter)')

2019-05-17 Thread Alex Net
Hello! I have been writing code that would allow me to read from an Excel 
sheet and writing it in a MS SQL Database. The code was functioning well, 
meaning it was running and writing all cells until it got to cell E,23 , 
where it crashes. Column E is called DueDate, and the first 22 cells print 
similarly to datetime when using nvarchar(max), date or datetime as the 
columns datatype. If I allow the program to read and write E1 - E22, it 
won't have a problem writing them to the database. It is when I include 
this cell that the code gives me that error. 

I have looked at all the resources online, yet still find no solution. 

I first ran this program in Python 3.7.3 and it gave me the same exact 
problem
I played around with the datatypes in my SQL create table query but that 
did not help. Nvarchar(max) didn't either.
Then after reading about some similar issues, I tried changing the drivers 
in my connection string (after installing them of course) to older drivers 
to no avail.
Next, I tried using a virtual environment in Visual Studio running multiple 
versions of python, including 2.7(32,64) again with no results. 

At first, I thought it was a problem with the read_excel function but 
Visual Studio hinted towards my To_SQL.

Code:


import pandas as pd
import sqlalchemy 
from sqlalchemy import create_engine
import pyodbc 



#Connection to SQL Server (pyodbc)
server = 'SERVERNAME'
db = 'DATABASENAME'
user = 'NAME'
pwd = 'PASSWORD'
driver = '{SQL Server Native Client 11.0}'

conn = pyodbc.connect(
'DRIVER='+ driver + ';SERVER='+ server + ';DATABASE=' + db + ';UID=' + 
user + ';PWD=' + pwd )

#Connection to SQL Server 2(SQLAlchemy)
engine = create_engine(
'mssql+pyodbc://NAME:PASSWORD@SERVERNAME/DATABASENAME?driver=SQL+Server')


#Query To Create Table
def make_table(SQLquery):
query1 = SQLquery
cur = conn.cursor()
cur.execute(query1)
conn.commit()

#Extract from Excel
def read_sheet(filePath,sheetName,columns='A:Z', rows=None,skrows=None):
global df
file = filePath
sheet = sheetName
cols = columns
rows1 = rows
rows2 = skrows
df = pd.read_excel(file, sheet, usecols= cols,index_col=0,nrows=rows1,
skiprows=rows2)
return df
print('Reading Completed. Writing now...')

#Import to SQL Server
def write_to_sql(tableName, chunks=1000):
col_name= 'E'
df.to_sql(tableName, engine, if_exists= 'append', chunksize=chunks,dtype
={col_name:DATETIME for col_name in df})
print('Writing Complete!')

make_table(
'''
CREATE TABLE test4 (Something1 nvarchar(max), Something2 nvarchar(max), 
Something3 nvarchar(max), Something4 varchar(max), DueDate nvarchar(max),   
 Something5 int, Something6 nvarchar(max), Something7 nvarchar(max), 
Something8 nvarchar(max), Notes nvarchar(max) )

''')

read_sheet('C:/Users/USER/Documents/ExcelToSQL/SAMPLE.xlsx','SHEET',columns=
'E',rows=23)


write_to_sql('test')



conn.close()



Full Error & Trace:

Traceback (most recent call last):
  File "c:\program files (x86)\microsoft visual 
studio\2019\community\common7\ide\extensions\microsoft\python\core\ptvsd_launcher.py",
 
line 119, in 
vspd.debug(filename, port_num, debug_id, debug_options, run_as)
  File "c:\program files (x86)\microsoft visual 
studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\debugger.py",
 
line 37, in debug
run(address, filename, *args, **kwargs)
  File "c:\program files (x86)\microsoft visual 
studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\_local.py",
 
line 79, in run_file
run(argv, addr, **kwargs)
  File "c:\program files (x86)\microsoft visual 
studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\_local.py",
 
line 140, in _run
_pydevd.main()
  File "c:\program files (x86)\microsoft visual 
studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\_vendored\pydevd\pydevd.py",
 
line 1925, in main
debugger.connect(host, port)
  File "c:\program files (x86)\microsoft visual 
studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\_vendored\pydevd\pydevd.py",
 
line 1283, in run
return self._exec(is_module, entry_point_fn, module_name, file, 
globals, locals)
  File "c:\program files (x86)\microsoft visual 
studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\_vendored\pydevd\pydevd.py",
 
line 1290, in _exec
pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Users\USER\Documents\ExcelToSQL\main2.py", line 69, in 
write_to_sql('test4')
  File "C:\Users\USER\Documents\ExcelToSQL\main2.py", line 57, in 
write_to_sql
df.to_sql(tableName, engine, if_exists= 'append', 
chunksize=chunks,dtype={col_name:DATETIME for col_name in df})
  File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 2531, 
in to_sql
dtype=dtype, method=method)
  File "C:\Python27\lib\site-packages\pandas\io\sql.py", 

Re: [sqlalchemy] Unexplained SELECT Being Issued

2018-12-01 Thread Alex Rothberg
Makes sense. The code posted was a stripped down example of my issue. What 
I was seeing was an integrity error caused by the autoflush of that load:

s = Session(e) 

e = Employee(id=1)
s.add(e)

s.flush()

er = EmployeeRecord() # there are other attributes to EmployeeRecord
# assume this is caused by another attribute on EmployeeRecord:
s.add(er)

# this then blows up:
e.records = [er]


On Saturday, December 1, 2018 at 10:08:08 PM UTC-5, Mike Bayer wrote:
>
> On Sat, Dec 1, 2018 at 9:55 PM Mike Bayer  > wrote: 
> > 
> > On Sat, Dec 1, 2018 at 9:21 PM Alex Rothberg  > wrote: 
> > > 
> > > I set up the DB: 
> > > from sqlalchemy import * 
> > > from sqlalchemy.orm import * 
> > > from sqlalchemy.ext.declarative import declarative_base 
> > > 
> > > 
> > > Base = declarative_base() 
> > > 
> > > class Employee(Base): 
> > > __tablename__ = 'employee' 
> > > id = Column(Integer, primary_key=True) 
> > > 
> > > class EmployeeRecord(Base): 
> > > __tablename__ = 'employee_record' 
> > > 
> > > employee_id = Column(Integer, ForeignKey(Employee.id), 
> primary_key=True) 
> > > 
> > > 
> > > employee = relationship( 
> > > Employee, 
> > > viewonly=True, 
> > > backref=backref("records", passive_deletes="all",), 
> > > passive_deletes="all", 
> > > ) 
> > > 
> > > e = create_engine("postgresql://localhost/test_issue2", echo=True) 
> > > 
> > > Base.metadata.drop_all(e) 
> > > Base.metadata.create_all(e) 
> > > 
> > > and then: 
> > > s = Session(e) 
> > > 
> > > e = Employee(id=1) 
> > > s.add(e) 
> > > 
> > > s.flush() 
> > > 
> > > print("") 
> > > e.records.clear() 
> > > 
> > > and I see: 
> > > 
> > >  2018-12-01 21:16:13,608 INFO sqlalchemy.engine.base.Engine SELECT 
> employee_record.employee_id AS employee_record_employee_id FROM 
> employee_record WHERE %(param_1)s = employee_record.employee_id 
> > > 
> > > I don't understand why that SELECT is needed given the passive_deletes 
> being set. 
> > 
> > passive_deletes=all is not used here because you have no cascade 
> > delete set on Employee.records.  passive_deletes only takes effect for 
> > a cascaded delete when you were to mark the parent Employee as deleted 
> > - this is because databases have ON DELETE CASCADE features that can 
> > do the delete for us. 
>
> slight correction, with passive_deletes=all, you don't have to have 
> any other "cascade" settings on the relationship - the "all" setting 
> refers to the nulling out of foreign key columns that would normally 
> occur when you deleted the Employee without specifying any cascade to 
> the child objects.   but this is still a flag that only applies to the 
> case of the parent object being deleted.   an actual access to a 
> collection, even to remove all the items from it, is always going to 
> need to know what objects were in that collection, at the very least 
> to handle backref events. 
>
>
> > 
> > In this case, you are directly removing the records from the Employee, 
> > which means you would like to emit  UPDATE statements for each of 
> > those records marking their foreign key attribute to NULL. SQLAlchemy 
> > needs to know all the identities for this operation so the list is 
> > loaded.  As it turns out, there are no records, but the ORM didn't 
> > know that because the attribute was not initialized (they may have 
> > been persisted separately, such as, if you added individual 
> > EmployeeRecord() objects with the foreign key of that Employee. 
> > 
> > Otherwise it seems like you are expecting that "e.records.clear()" 
> > would do absolutely nothing, in which case, why are you calling 
> > "e.records.clear()".   If you want to avoid the SELECT in this very 
> > specific case, set the list to [] when you first persist the object. 
> > 
> > 
> > 
> > > 
> > > -- 
> > > SQLAlchemy - 
> > > The Python SQL Toolkit and Object Relational Mapper 
> > > 
> > > http://www.sqlalchemy.org/ 
> > > 
> > > To post example code, please provide an MCVE: Minimal, Complete, and 
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> description. 
> > > --- 
> > > You received this message because you

[sqlalchemy] Unexplained SELECT Being Issued

2018-12-01 Thread Alex Rothberg
I set up the DB:
from sqlalchemy import * 
from sqlalchemy.orm import * 
from sqlalchemy.ext.declarative import declarative_base 


Base = declarative_base() 

class Employee(Base): 
__tablename__ = 'employee' 
id = Column(Integer, primary_key=True)  

class EmployeeRecord(Base):
__tablename__ = 'employee_record' 

employee_id = Column(Integer, ForeignKey(Employee.id), primary_key=True)


employee = relationship(
Employee,
viewonly=True,  
backref=backref("records", passive_deletes="all",),
passive_deletes="all",
)

e = create_engine("postgresql://localhost/test_issue2", echo=True) 

Base.metadata.drop_all(e) 
Base.metadata.create_all(e) 

and then:
s = Session(e) 

e = Employee(id=1)
s.add(e)

s.flush()

print("")
e.records.clear()

and I see:

 2018-12-01 21:16:13,608 INFO sqlalchemy.engine.base.Engine SELECT 
employee_record.employee_id AS employee_record_employee_id FROM employee_record 
WHERE %(param_1)s = employee_record.employee_id

I don't understand why that SELECT is needed given the passive_deletes being 
set.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] How should I eagerly load a property from multiple levels of a self-referential table?

2018-10-26 Thread Alex Wang
Ah, ok. I thought defaultload() meant use whatever was originally specified 
in the relationship(). That helps a lot!

On Friday, October 26, 2018 at 12:34:53 PM UTC-4, Mike Bayer wrote:
>
> On Fri, Oct 26, 2018 at 12:10 PM Alex Wang  > wrote: 
> > 
> > I ended up needing one more layer of selectinload(Node.direct_children), 
> but it got rid of all the lazy loads. Thanks for the suggestion; that's 
> definitely a lot better than what I initially had. I was thinking that I 
> needed to specify eager loading separately, but now that I think more about 
> your suggestion the redundancy becomes clearer. 
> > 
> > There's one more layer of complications --- if I need to get multiple 
> entries off of each node, is there a clean-ish way of eagerly loading those 
> attributes in addition to the entries mentioned in my original post? Or 
> should I just write a function like what you suggested and use that? 
>
> if you want to keep specifying the same path and add more attributes, 
> defaultload() helps with that: 
>
> selectinload(Foo.bar).selectinload(Bar.attribute1) 
> defaultload(Foo.bar).selectinload(Bar.attribute2) 
> defaultload(Foo.bar).selectinload(Bar.attribute3) 
>
> above, the "defaultload" just means, "here is a token in the path but 
> don't change the existing loading options on this token". 
>
>
>
>
>
> > 
> > And just to make sure, selectinloads should be the right choice for this 
> kind of nested collection, right? 
> > 
> > Thanks! 
> > 
> > On Thursday, October 25, 2018 at 6:54:53 PM UTC-4, Mike Bayer wrote: 
> >> 
> >> On Thu, Oct 25, 2018 at 4:21 PM Alex Wang  wrote: 
> >> > 
> >> > Hi all! 
> >> > 
> >> > I'm trying to write a small script to interface with a database 
> controlled by a third-party application, and I'm not sure the way I set up 
> eager loading is right. 
> >> > 
> >> > The code I have looks something like the following: 
> >> > 
> >> > from sqlalchemy.ext.declarative import declarative_base 
> >> > from sqlalchemy.dialects.mssql import FLOAT, NVARCHAR, 
> UNIQUEIDENTIFIER 
> >> > from sqlalchemy.orm import foreign, relationship, remote 
> >> > from sqlalchemy.schema import Column 
> >> > Base = declarative_base() 
> >> > class Node(Base): 
> >> > NodeID = Column(NVARCHAR(length=65), primary_key=True, 
> nullable=False) 
> >> > ParentNodeID = Column(NVARCHAR(length=65), nullable=True) 
> >> > direct_children = relationship('Node', primaryjoin=(NodeID == 
> remote(foreign(ParentNodeID 
> >> > entries = relationship('Entry', primaryjoin=('Node.NodeID == 
> remote(foreign(Entry.NodeID))') 
> >> > 
> >> > def node_and_all_children(self): 
> >> > result = [self] 
> >> > for child in self.direct_children: 
> >> > result += child.node_and_all_children() 
> >> > return result 
> >> > 
> >> > def cost(self): 
> >> > entries = [e for p in self.node_and_all_children() for e in 
> p.entries] 
> >> > return sum(e.Value1 * e.Value2 for e in entries) 
> >> > 
> >> > class Entry(Base): 
> >> > TEID = Column(UNIQUEIDENTIFIER, primary_key=True, nullable=False) 
> >> > NodeID = Column(NVARCHAR(length=65), nullable=False) 
> >> > Value1 = Column(FLOAT, nullable=True) 
> >> > Value2 = Column(FLOAT, nullable=True) 
> >> > 
> >> > I want to write something like this: 
> >> > 
> >> > def get_costs(session, node_ids: List[str]): 
> >> > nodes = 
> session.query(Node).filter(Node.NodeID.in_(node_ids)).all() 
> >> > return {n.NodeID: n.cost() for n in nodes} 
> >> > 
> >> > From what I understand, this results in an N+1-ish access pattern. I 
> can eagerly load the children easily enough (I think I can guarantee <= 3 
> levels of children, so three selectinload() calls are hopefully enough?): 
> >> > 
> >> > def get_costs(session, node_ids: List[str]): 
> >> > nodes = ( 
> >> > session.query(Node) 
> >> > .options( 
> >> > selectinload(Node.direct_children) 
> >> > .selectinload(Node.direct_children) 
> >> > .selectinload(Node.direct_children) 
> >> > ) 
> >> > .filter(Node.NodeID.in_(n

Re: [sqlalchemy] How should I eagerly load a property from multiple levels of a self-referential table?

2018-10-26 Thread Alex Wang
I ended up needing one more layer of selectinload(Node.direct_children), 
but it got rid of all the lazy loads. Thanks for the suggestion; that's 
definitely a lot better than what I initially had. I was thinking that I 
needed to specify eager loading separately, but now that I think more about 
your suggestion the redundancy becomes clearer.

There's one more layer of complications --- if I need to get multiple 
entries off of each node, is there a clean-ish way of eagerly loading those 
attributes in addition to the entries mentioned in my original post? Or 
should I just write a function like what you suggested and use that?

And just to make sure, selectinloads should be the right choice for this 
kind of nested collection, right?

Thanks!

On Thursday, October 25, 2018 at 6:54:53 PM UTC-4, Mike Bayer wrote:
>
> On Thu, Oct 25, 2018 at 4:21 PM Alex Wang > 
> wrote: 
> > 
> > Hi all! 
> > 
> > I'm trying to write a small script to interface with a database 
> controlled by a third-party application, and I'm not sure the way I set up 
> eager loading is right. 
> > 
> > The code I have looks something like the following: 
> > 
> > from sqlalchemy.ext.declarative import declarative_base 
> > from sqlalchemy.dialects.mssql import FLOAT, NVARCHAR, UNIQUEIDENTIFIER 
> > from sqlalchemy.orm import foreign, relationship, remote 
> > from sqlalchemy.schema import Column 
> > Base = declarative_base() 
> > class Node(Base): 
> > NodeID = Column(NVARCHAR(length=65), primary_key=True, 
> nullable=False) 
> > ParentNodeID = Column(NVARCHAR(length=65), nullable=True) 
> > direct_children = relationship('Node', primaryjoin=(NodeID == 
> remote(foreign(ParentNodeID 
> > entries = relationship('Entry', primaryjoin=('Node.NodeID == 
> remote(foreign(Entry.NodeID))') 
> > 
> > def node_and_all_children(self): 
> > result = [self] 
> > for child in self.direct_children: 
> > result += child.node_and_all_children() 
> > return result 
> > 
> > def cost(self): 
> > entries = [e for p in self.node_and_all_children() for e in 
> p.entries] 
> > return sum(e.Value1 * e.Value2 for e in entries) 
> > 
> > class Entry(Base): 
> > TEID = Column(UNIQUEIDENTIFIER, primary_key=True, nullable=False) 
> > NodeID = Column(NVARCHAR(length=65), nullable=False) 
> > Value1 = Column(FLOAT, nullable=True) 
> > Value2 = Column(FLOAT, nullable=True) 
> > 
> > I want to write something like this: 
> > 
> > def get_costs(session, node_ids: List[str]): 
> > nodes = session.query(Node).filter(Node.NodeID.in_(node_ids)).all() 
> > return {n.NodeID: n.cost() for n in nodes} 
> > 
> > From what I understand, this results in an N+1-ish access pattern. I can 
> eagerly load the children easily enough (I think I can guarantee <= 3 
> levels of children, so three selectinload() calls are hopefully enough?): 
> > 
> > def get_costs(session, node_ids: List[str]): 
> > nodes = ( 
> > session.query(Node) 
> > .options( 
> > selectinload(Node.direct_children) 
> > .selectinload(Node.direct_children) 
> > .selectinload(Node.direct_children) 
> > ) 
> > .filter(Node.NodeID.in_(node_ids)) 
> > .all() 
> > ) 
> > return {n.NodeID: n.cost() for n in nodes} 
> > 
> > It's here that I get stuck, though. This eagerly loads the children, but 
> doesn't eagerly load each node's entries, which results in a query being 
> sent to the database for each child. I can specify that entries should be 
> eagerly loaded at each level: 
> > 
> > def get_costs(session, node_ids: List[str]): 
> > nodes = ( 
> > session.query(Node) 
> > .options( 
> > selectinload(Node.direct_children) 
> > .selectinload(Node.direct_children) 
> > .selectinload(Node.direct_children), 
> > selectinload(Node.entries), 
> > 
> selectinload(Node.direct_children).selectinload(Node.entries), 
> > 
> selectinload(Node.direct_children).selectinload(Node.direct_children).selectinload(Node.entries)
>  
>
> > ) 
> > .filter(Node.NodeID.in_(node_ids)) 
> > .all() 
> > ) 
> > return {n.NodeID: n.cost() for n in nodes} 
> > 
> > But this feels pretty gross, and I'm hoping there is a better way. 
>
> you don't have to specify selectinload() twice like that, you can say: 
>
> selectinload(

[sqlalchemy] How should I eagerly load a property from multiple levels of a self-referential table?

2018-10-25 Thread Alex Wang
Hi all!

I'm trying to write a small script to interface with a database controlled 
by a third-party application, and I'm not sure the way I set up eager 
loading is right.

The code I have looks something like the following:

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.mssql import FLOAT, NVARCHAR, UNIQUEIDENTIFIER
from sqlalchemy.orm import foreign, relationship, remote
from sqlalchemy.schema import Column
Base = declarative_base()
class Node(Base):
NodeID = Column(NVARCHAR(length=65), primary_key=True, nullable=False)
ParentNodeID = Column(NVARCHAR(length=65), nullable=True)
direct_children = relationship('Node', primaryjoin=(NodeID == remote(
foreign(ParentNodeID
entries = relationship('Entry', primaryjoin=('Node.NodeID == 
remote(foreign(Entry.NodeID))')

def node_and_all_children(self):
result = [self]
for child in self.direct_children:
result += child.node_and_all_children()
return result

def cost(self):
entries = [e for p in self.node_and_all_children() for e in p.
entries]
return sum(e.Value1 * e.Value2 for e in entries)

class Entry(Base):
TEID = Column(UNIQUEIDENTIFIER, primary_key=True, nullable=False)
NodeID = Column(NVARCHAR(length=65), nullable=False)
Value1 = Column(FLOAT, nullable=True)
Value2 = Column(FLOAT, nullable=True)

I want to write something like this:

def get_costs(session, node_ids: List[str]):
nodes = session.query(Node).filter(Node.NodeID.in_(node_ids)).all()
return {n.NodeID: n.cost() for n in nodes}

>From what I understand, this results in an N+1-ish access pattern. I can 
eagerly load the children easily enough (I think I can guarantee <= 3 
levels of children, so three selectinload() calls are hopefully enough?):

def get_costs(session, node_ids: List[str]):
nodes = (
session.query(Node)
.options(
selectinload(Node.direct_children)
.selectinload(Node.direct_children)
.selectinload(Node.direct_children)
)
.filter(Node.NodeID.in_(node_ids))
.all()
)
return {n.NodeID: n.cost() for n in nodes}

It's here that I get stuck, though. This eagerly loads the children, but 
doesn't eagerly load each node's entries, which results in a query being 
sent to the database for each child. I can specify that entries should be 
eagerly loaded at each level:

def get_costs(session, node_ids: List[str]):
nodes = (
session.query(Node)
.options(
selectinload(Node.direct_children)
.selectinload(Node.direct_children)
.selectinload(Node.direct_children),
selectinload(Node.entries),
selectinload(Node.direct_children).selectinload(Node.entries),
selectinload(Node.direct_children).selectinload(Node.
direct_children).selectinload(Node.entries)
)
.filter(Node.NodeID.in_(node_ids))
.all()
)
return {n.NodeID: n.cost() for n in nodes}

But this feels pretty gross, and I'm hoping there is a better way.

Is there a cleaner way of specifying that entries should be eagerly loaded 
for each level of children? Or is there a totally different query structure 
that would be even better?

Thanks!

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Creating Sub Object with out inserting into Base Table

2018-10-23 Thread Alex Rothberg
I have added a new sub class to my model hierarchy. I would like to 
instantiate it however there will be cases where the base object / row 
already exists. I tried to solve this by passing in the user object to the 
StaffUser but it looks like sqla still tried to INSERT into the User table 
leading to an unique constraint violation. My models are:

class User(db.Model):
id = db.Column(UUID, default=uuid.uuid4, primary_key=True)

class StaffUser(User):
id = db.Column(UUID, db.ForeignKey(User.id), primary_key=True)

user = db.relationship(User

is there anyway to tell sqla to re-use an existing base instance (ie to not 
attempt an insert)?

user = User.query.get(id)
staff_user = StaffUser(user=user)


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-11 Thread Alex Rothberg

>
> given how this model is,  I would think you would want just all normal 
> relationships and whichever one you happen to mutate is the one that 
> sets the foreign keys.   because you might want to set 
> Employee.department alone or Employee.title which gives you department 
> also. "overlaps" here might want to actually assert the two FK 
> settings aren't conflicting.   Otherwise if you set 
> Employee.department = d1 and Employee.title =Title(department=d2), 
> it's random which one "wins". 

So the issue comes up when setting any of the relationships to None. For 
example if I cease to have all of fund, department and title, then the  
FundTitle is None. If i assign that to the Employee it then clears all of 
the other (overlapping) fks.

On Wednesday, October 10, 2018 at 8:28:39 PM UTC-4, Mike Bayer wrote:
>
> On Wed, Oct 10, 2018 at 7:54 PM Alex Rothberg  > wrote: 
> > 
> > I'm not totally sure how "overlaps" are used in that example, but yes 
> that might be fine to have viewonly=False (ie default) and then mark what 
> is and isn't overlapped. 
> > 
> > So here is the full model with some color: 
> > 
> > Employee (all nullable [slight change from example above]): 
> >  - department_id 
> >  - title_id 
> >  - fund_id 
> > 
> > with the fks as: 
> > department_id -> Department 
> > fund_id -> Fund 
> > (department_id, title_id) -> Title 
> > (department_id, fund_id) -> FundDepartment # not shown in code snipped 
> earlier, but I also have this too ;-) 
> > (department_id, title_id, fund_id) -> FundTitle 
> > 
> > relationships setup the best I can to avoid overlaps, etc. 
> > 
> > 
> > An employee may have just a fund assigned, just a department, a 
> department and a title, a department and fund or a department, title and a 
> fund. 
>
> so...the columns are all nullable and that means the Employee should 
> be flushable before the FundTitle? 
>
>
> > Further I want to keep track of the department_id on the title (ie a 
> title belongs to a department). I want to make sure that the department_id 
> on the employee matches the department_id on the title,  hence the 
> potentially extraneous composite fk (ie I could just fk from Employee to 
> title but then there is no constraint that the department matches; an fk 
> from the title to department does not ensure that). I actually use this 
> pattern quite a bit with tenancy throughout my models (ie where I use a 
> composite fk of the standard pk + the tenent to ensure at the db level that 
> the tenant matches between the two models).> 
> > Let met know if something seems totally silly here! 
>
> given how this model is,  I would think you would want just all normal 
> relationships and whichever one you happen to mutate is the one that 
> sets the foreign keys.   because you might want to set 
> Employee.department alone or Employee.title which gives you department 
> also. "overlaps" here might want to actually assert the two FK 
> settings aren't conflicting.   Otherwise if you set 
> Employee.department = d1 and Employee.title =Title(department=d2), 
> it's random which one "wins". 
>
> this is not a use case that's ever been considered. 
>
>
>
>
> > 
> > On Wednesday, October 10, 2018 at 6:12:59 PM UTC-4, Mike Bayer wrote: 
> >> 
> >> for example why don't we like just using plain relationship() without 
> >> the viewonly=True?   Shouldn't you be explicitly associating FundTitle 
> >> with Employee in any case?that is: 
> >> 
> >> class Employee(Base): 
> >> __tablename__ = 'employee' 
> >> id = Column(Integer, primary_key=True) 
> >> title_id = Column(ForeignKey('title.id'), nullable=False) 
> >> department_id = Column(ForeignKey('department.id'), 
> nullable=False) 
> >> fund_id = Column(ForeignKey('fund.id'), nullable=False) 
> >> 
> >> department = relationship(lambda: Department) 
> >> title = relationship("Title") 
> >> fund = relationship("Fund") 
> >> 
> >> fund_title = relationship(FundTitle) 
> >> 
> >> __table_args__ = ( 
> >> ForeignKeyConstraint( 
> >> (title_id, department_id, fund_id), 
> >> (FundTitle.title_id, FundTitle.department_id, 
> FundTitle.fund_id) 
> >> ), 
> >> ) 
> >> 
> >> 
> >> and then: 
> >> 
> >> for i in range(5): 
> >> d1 = Department() 
> >> t1 = Title

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
I'm not totally sure how "overlaps" are used in that example, but yes that 
might be fine to have viewonly=False (ie default) and then mark what is and 
isn't overlapped.

So here is the full model with some color:

Employee (all nullable [slight change from example above]):
 - department_id
 - title_id
 - fund_id

with the fks as:
department_id -> Department
fund_id -> Fund
(department_id, title_id) -> Title
(department_id, fund_id) -> FundDepartment # not shown in code snipped 
earlier, but I also have this too ;-)
(department_id, title_id, fund_id) -> FundTitle

relationships setup the best I can to avoid overlaps, etc.


An employee may have just a fund assigned, just a department, a department 
and a title, a department and fund or a department, title and a fund. 
Further I want to keep track of the department_id on the title (ie a title 
belongs to a department). I want to make sure that the department_id on the 
employee matches the department_id on the title, hence the potentially 
extraneous composite fk (ie I could just fk from Employee to title but then 
there is no constraint that the department matches; an fk from the title to 
department does not ensure that). I actually use this pattern quite a bit 
with tenancy throughout my models (ie where I use a composite fk of the 
standard pk + the tenent to ensure at the db level that the tenant matches 
between the two models).

Let met know if something seems totally silly here!

On Wednesday, October 10, 2018 at 6:12:59 PM UTC-4, Mike Bayer wrote:
>
> for example why don't we like just using plain relationship() without 
> the viewonly=True?   Shouldn't you be explicitly associating FundTitle 
> with Employee in any case?that is: 
>
> class Employee(Base): 
> __tablename__ = 'employee' 
> id = Column(Integer, primary_key=True) 
> title_id = Column(ForeignKey('title.id'), nullable=False) 
> department_id = Column(ForeignKey('department.id'), nullable=False) 
> fund_id = Column(ForeignKey('fund.id'), nullable=False) 
>
> department = relationship(lambda: Department) 
> title = relationship("Title") 
> fund = relationship("Fund") 
>
> fund_title = relationship(FundTitle) 
>
> __table_args__ = ( 
> ForeignKeyConstraint( 
> (title_id, department_id, fund_id), 
> (FundTitle.title_id, FundTitle.department_id, 
> FundTitle.fund_id) 
> ), 
> ) 
>
>
> and then: 
>
> for i in range(5): 
> d1 = Department() 
> t1 = Title(department=d1) 
> f1 = Fund(department=d1, title=t1) 
> ft1 = FundTitle(title=t1, department=d1, fund=f1) 
>
> s.add_all([d1, t1, f1, ft1]) 
>
> e1 = Employee(title=t1, department=d1, fund=f1, fund_title=ft1) 
>
> there's still the warning you don't like, but then at least we can 
> make an optoin that is narrower in scope: 
>
> fund_title = relationship( 
> FundTitle, overlaps=('department', 'title', 'fund')) 
>
> e.g. we aren't saying viewonly=True but then still having the 
> relationship be related to the flush, nor are we making the claim that 
> fund_title doesn't populate the department_id, title_id, fund_id 
> columns because that seems to contradict what the relationship is 
> supposed to do.  at least with "overlaps" the intent of what you are 
> trying to do is clearer.   but im not really sure, because I'm still 
> not feeling like I fully understand the model you have.  normally 
> you'd have employee->fundtitle as the FK, and you would *not* have a 
> foreign key from Employee to Department, Title, Fund individually. 
> it would be like this: 
>
> class Employee(Base): 
> __tablename__ = 'employee' 
> id = Column(Integer, primary_key=True) 
> title_id = Column(nullable=False) 
> department_id = Column(nullable=False) 
> fund_id = Column(nullable=False) 
>
> department = association_proxy("fund_title", "department") 
> title = association_proxy("fund_title", "title") 
> fund = association_proxy("fund_title", "fund") 
>
> fund_title = relationship(FundTitle) 
>
> __table_args__ = ( 
> ForeignKeyConstraint( 
> (title_id, department_id, fund_id), 
> (FundTitle.title_id, FundTitle.department_id, 
> FundTitle.fund_id) 
> ), 
> ) 
>
>
> ft1 = FundTitle(title=t1, department=d1, fund=f1) 
> e1 = Employee(fund_title=ft1) 
>
> e.g. a simple association object pattern. I don't see what the 
> redundant foreign keys solves. 
>
>
>
>
> On Wed, Oct 10, 2018 at 5:48 PM Mike Bayer  > wrote: 
> > 
> > On Wed, Oct 10, 2018 at 5:22 PM 

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
Adding the passive delete fixes the raise load but adds yet another warning 
from sqla:

sqlalchemy/orm/relationships.py:1790: SAWarning: On Employee.
_ft_for_dependency, 'passive_deletes' is normally configured on one-to-many, 
one-to-one, many-to-many relationships only.

Looking at this:
@event.listens_for(Session, "before_flush") 
def _add_dep(session, context, objects): 
context.dependencies.update([ 
( 
unitofwork.SaveUpdateAll(context, inspect(FundTitle)), 
unitofwork.SaveUpdateAll(context, inspect(Employee)) 
) 
]) 

do I not have to mark one Model as dependent on the other? Or is that 
implied by the order of the list?

On Wednesday, October 10, 2018 at 1:36:09 PM UTC-4, Mike Bayer wrote:
>
> On Wed, Oct 10, 2018 at 1:32 PM Alex Rothberg  > wrote: 
> > 
> > Well the other way doesn't quite work as if I mark none of the columns 
> as foreign in the primary join, sqla then assumes / guesses all of them 
> are. 
>
> that is the case, that is code that has changed a lot over the years 
> so it has a lot of baggage. 
>
>
>
> > 
> > Let me test with passive. 
> > 
> > On Wed, Oct 10, 2018, 13:30 Mike Bayer  > wrote: 
> >> 
> >> On Wed, Oct 10, 2018 at 1:27 PM Alex Rothberg  > wrote: 
> >> > 
> >> > And I'll reiterate, not worth doing it all from the original single 
> relationship (ie not needing to either add more relationships, have 
> warnings or use the more obscure feature you outlined)? Seems like that 
> would be cleaner in code. 
> >> 
> >> you mean take the viewonly=True off the existing relationship?  sure 
> >> you can do that.  but if you mutate the elements in that collection, 
> >> you can incur a change that is conflicting with the other objects. 
> >> that's why I suggested making the non-viewonly a private member, but 
> >> either way works. 
> >> 
> >> 
> >> > 
> >> > On Wed, Oct 10, 2018, 13:17 Mike Bayer  > wrote: 
> >> >> 
> >> >> the raise load issue is because without passive_deletes, it has to 
> >> >> load the collection to make sure those objects are all updated. 
> >> >> passive_deletes fixes, now you just have a warning.  or use the unit 
> >> >> of work recipe which is more direct. 
> >> >> On Wed, Oct 10, 2018 at 1:15 PM Alex Rothberg  > wrote: 
> >> >> > 
> >> >> > Not just for warning. Also this raise load issue. yes, i see that 
> I can't mark none. If I could though, that would be awesome since I think 
> it would solve this problem? I can test by setting one foreign and seeing 
> if that works. 
> >> >> > 
> >> >> > On Wednesday, October 10, 2018 at 1:13:32 PM UTC-4, Mike Bayer 
> wrote: 
> >> >> >> 
> >> >> >> On Wed, Oct 10, 2018 at 12:56 PM Alex Rothberg <
> agrot...@gmail.com> wrote: 
> >> >> >> > 
> >> >> >> > let me get that. in the meantime, what are your thoughts on 
> just removing the view only from the original relationship and then using 
> an explicit primary join where none of the columns are marked foreign? 
> Theoretically that should solve this problem, no? 
> >> >> >> 
> >> >> >> is this just for the warning?I don't think the relationship() 
> can 
> >> >> >> be set up with no columns marked as foreign, it takes that as a 
> cue 
> >> >> >> that it should figure out the "foreign" columns on its own. 
> >> >> >> 
> >> >> >> There's another way to make sure Employee is always dependent on 
> >> >> >> FundTitle but it's a little bit off-label. Add the dependency 
> you 
> >> >> >> want directly into the unit of work: 
> >> >> >> 
> >> >> >> from sqlalchemy.orm import unitofwork 
> >> >> >> from sqlalchemy import event 
> >> >> >> 
> >> >> >> 
> >> >> >> @event.listens_for(Session, "before_flush") 
> >> >> >> def _add_dep(session, context, objects): 
> >> >> >> context.dependencies.update([ 
> >> >> >> ( 
> >> >> >> unitofwork.SaveUpdateAll(context, 
> inspect(FundTitle)), 
> >> >> >> unitofwork.SaveUpdateAll(context, inspect(Employee)) 
> >> >> >> ) 
> >> >> 

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
Well the other way doesn't quite work as if I mark none of the columns as
foreign in the primary join, sqla then assumes / guesses all of them are.

Let me test with passive.

On Wed, Oct 10, 2018, 13:30 Mike Bayer  wrote:

> On Wed, Oct 10, 2018 at 1:27 PM Alex Rothberg 
> wrote:
> >
> > And I'll reiterate, not worth doing it all from the original single
> relationship (ie not needing to either add more relationships, have
> warnings or use the more obscure feature you outlined)? Seems like that
> would be cleaner in code.
>
> you mean take the viewonly=True off the existing relationship?  sure
> you can do that.  but if you mutate the elements in that collection,
> you can incur a change that is conflicting with the other objects.
> that's why I suggested making the non-viewonly a private member, but
> either way works.
>
>
> >
> > On Wed, Oct 10, 2018, 13:17 Mike Bayer  wrote:
> >>
> >> the raise load issue is because without passive_deletes, it has to
> >> load the collection to make sure those objects are all updated.
> >> passive_deletes fixes, now you just have a warning.  or use the unit
> >> of work recipe which is more direct.
> >> On Wed, Oct 10, 2018 at 1:15 PM Alex Rothberg 
> wrote:
> >> >
> >> > Not just for warning. Also this raise load issue. yes, i see that I
> can't mark none. If I could though, that would be awesome since I think it
> would solve this problem? I can test by setting one foreign and seeing if
> that works.
> >> >
> >> > On Wednesday, October 10, 2018 at 1:13:32 PM UTC-4, Mike Bayer wrote:
> >> >>
> >> >> On Wed, Oct 10, 2018 at 12:56 PM Alex Rothberg 
> wrote:
> >> >> >
> >> >> > let me get that. in the meantime, what are your thoughts on just
> removing the view only from the original relationship and then using an
> explicit primary join where none of the columns are marked foreign?
> Theoretically that should solve this problem, no?
> >> >>
> >> >> is this just for the warning?I don't think the relationship() can
> >> >> be set up with no columns marked as foreign, it takes that as a cue
> >> >> that it should figure out the "foreign" columns on its own.
> >> >>
> >> >> There's another way to make sure Employee is always dependent on
> >> >> FundTitle but it's a little bit off-label. Add the dependency you
> >> >> want directly into the unit of work:
> >> >>
> >> >> from sqlalchemy.orm import unitofwork
> >> >> from sqlalchemy import event
> >> >>
> >> >>
> >> >> @event.listens_for(Session, "before_flush")
> >> >> def _add_dep(session, context, objects):
> >> >> context.dependencies.update([
> >> >> (
> >> >> unitofwork.SaveUpdateAll(context, inspect(FundTitle)),
> >> >> unitofwork.SaveUpdateAll(context, inspect(Employee))
> >> >> )
> >> >> ])
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> >
> >> >> > On Wednesday, October 10, 2018 at 12:41:25 PM UTC-4, Alex Rothberg
> wrote:
> >> >> >>
> >> >> >> Is it possible to specific a non viewonly relationship in which I
> have a primary join specified in which none of the fk's are marked
> "foreign"? ie where I can mark the relationship dependancy but it wont set
> any columns? It looks like there may be some logic in sqla that assume all
> columns are fk if none are specified as foreign?
> >> >> >>
> >> >> >> On Wednesday, October 10, 2018 at 11:56:49 AM UTC-4, Alex
> Rothberg wrote:
> >> >> >>>
> >> >> >>> So one minor issue and one big issue with that solution:
> >> >> >>> 1. minor issue, I now get these: SAWarning: relationship ''
> will copy column to column , which conflicts with relationship(s):
> '
> >> >> >>> 2. major issue, I use raiseload("*") and now I start seeing:
> sqlalchemy.exc.InvalidRequestError: 'Employee._ft_for_dependency' is not
> available due to lazy='raise'
> >> >> >>>
> >> >> >>> On Wednesday, October 10, 2018 at 9:57:55 AM UTC-4, Mike Bayer
> wrote:
> >> >> >>>>
> >> >> >>>> On 

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
And I'll reiterate, not worth doing it all from the original single
relationship (ie not needing to either add more relationships, have
warnings or use the more obscure feature you outlined)? Seems like that
would be cleaner in code.

On Wed, Oct 10, 2018, 13:17 Mike Bayer  wrote:

> the raise load issue is because without passive_deletes, it has to
> load the collection to make sure those objects are all updated.
> passive_deletes fixes, now you just have a warning.  or use the unit
> of work recipe which is more direct.
> On Wed, Oct 10, 2018 at 1:15 PM Alex Rothberg 
> wrote:
> >
> > Not just for warning. Also this raise load issue. yes, i see that I
> can't mark none. If I could though, that would be awesome since I think it
> would solve this problem? I can test by setting one foreign and seeing if
> that works.
> >
> > On Wednesday, October 10, 2018 at 1:13:32 PM UTC-4, Mike Bayer wrote:
> >>
> >> On Wed, Oct 10, 2018 at 12:56 PM Alex Rothberg 
> wrote:
> >> >
> >> > let me get that. in the meantime, what are your thoughts on just
> removing the view only from the original relationship and then using an
> explicit primary join where none of the columns are marked foreign?
> Theoretically that should solve this problem, no?
> >>
> >> is this just for the warning?I don't think the relationship() can
> >> be set up with no columns marked as foreign, it takes that as a cue
> >> that it should figure out the "foreign" columns on its own.
> >>
> >> There's another way to make sure Employee is always dependent on
> >> FundTitle but it's a little bit off-label. Add the dependency you
> >> want directly into the unit of work:
> >>
> >> from sqlalchemy.orm import unitofwork
> >> from sqlalchemy import event
> >>
> >>
> >> @event.listens_for(Session, "before_flush")
> >> def _add_dep(session, context, objects):
> >> context.dependencies.update([
> >> (
> >> unitofwork.SaveUpdateAll(context, inspect(FundTitle)),
> >> unitofwork.SaveUpdateAll(context, inspect(Employee))
> >> )
> >> ])
> >>
> >>
> >>
> >>
> >>
> >> >
> >> > On Wednesday, October 10, 2018 at 12:41:25 PM UTC-4, Alex Rothberg
> wrote:
> >> >>
> >> >> Is it possible to specific a non viewonly relationship in which I
> have a primary join specified in which none of the fk's are marked
> "foreign"? ie where I can mark the relationship dependancy but it wont set
> any columns? It looks like there may be some logic in sqla that assume all
> columns are fk if none are specified as foreign?
> >> >>
> >> >> On Wednesday, October 10, 2018 at 11:56:49 AM UTC-4, Alex Rothberg
> wrote:
> >> >>>
> >> >>> So one minor issue and one big issue with that solution:
> >> >>> 1. minor issue, I now get these: SAWarning: relationship ''
> will copy column to column , which conflicts with relationship(s):
> '
> >> >>> 2. major issue, I use raiseload("*") and now I start seeing:
> sqlalchemy.exc.InvalidRequestError: 'Employee._ft_for_dependency' is not
> available due to lazy='raise'
> >> >>>
> >> >>> On Wednesday, October 10, 2018 at 9:57:55 AM UTC-4, Mike Bayer
> wrote:
> >> >>>>
> >> >>>> On Tue, Oct 9, 2018 at 6:45 PM Alex Rothberg 
> wrote:
> >> >>>> >
> >> >>>> > Okay with some small tweaks to your original code, I am able to
> show the issue I am having. comment out flush to see issue:
> >> >>>>
> >> >>>> so what you're doing here is making Employee dependent on
> FundTitle,
> >> >>>> which makes this a little out of the ordinary but this is fine.
>  You
> >> >>>> need to give the ORM a clue that this dependency exists, since it
> >> >>>> never looks at foreign key constraints unless you tell it to.
> >> >>>> Adding a relationship to FundTitle that doesn't have viewonly=True
> is
> >> >>>> an easy way to do this, there's no need to ever make use of the
> >> >>>> relationship otherwise:
> >> >>>>
> >> >>>> class Employee(Base):
> >> >>>> __tablename__ = 'employee'
> >> >>>>
> >> >>>> # ...
> >> >>>>

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
Not just for warning. Also this raise load issue. yes, i see that I can't 
mark none. If I could though, that would be awesome since I think it would 
solve this problem? I can test by setting one foreign and seeing if that 
works.

On Wednesday, October 10, 2018 at 1:13:32 PM UTC-4, Mike Bayer wrote:
>
> On Wed, Oct 10, 2018 at 12:56 PM Alex Rothberg  > wrote: 
> > 
> > let me get that. in the meantime, what are your thoughts on just 
> removing the view only from the original relationship and then using an 
> explicit primary join where none of the columns are marked foreign? 
> Theoretically that should solve this problem, no? 
>
> is this just for the warning?I don't think the relationship() can 
> be set up with no columns marked as foreign, it takes that as a cue 
> that it should figure out the "foreign" columns on its own. 
>
> There's another way to make sure Employee is always dependent on 
> FundTitle but it's a little bit off-label. Add the dependency you 
> want directly into the unit of work: 
>
> from sqlalchemy.orm import unitofwork 
> from sqlalchemy import event 
>
>
> @event.listens_for(Session, "before_flush") 
> def _add_dep(session, context, objects): 
> context.dependencies.update([ 
> ( 
> unitofwork.SaveUpdateAll(context, inspect(FundTitle)), 
> unitofwork.SaveUpdateAll(context, inspect(Employee)) 
> ) 
> ]) 
>
>
>
>
>
> > 
> > On Wednesday, October 10, 2018 at 12:41:25 PM UTC-4, Alex Rothberg 
> wrote: 
> >> 
> >> Is it possible to specific a non viewonly relationship in which I have 
> a primary join specified in which none of the fk's are marked "foreign"? ie 
> where I can mark the relationship dependancy but it wont set any columns? 
> It looks like there may be some logic in sqla that assume all columns are 
> fk if none are specified as foreign? 
> >> 
> >> On Wednesday, October 10, 2018 at 11:56:49 AM UTC-4, Alex Rothberg 
> wrote: 
> >>> 
> >>> So one minor issue and one big issue with that solution: 
> >>> 1. minor issue, I now get these: SAWarning: relationship '' will 
> copy column to column , which conflicts with relationship(s): ' 
> >>> 2. major issue, I use raiseload("*") and now I start seeing: 
> sqlalchemy.exc.InvalidRequestError: 'Employee._ft_for_dependency' is not 
> available due to lazy='raise' 
> >>> 
> >>> On Wednesday, October 10, 2018 at 9:57:55 AM UTC-4, Mike Bayer wrote: 
> >>>> 
> >>>> On Tue, Oct 9, 2018 at 6:45 PM Alex Rothberg  
> wrote: 
> >>>> > 
> >>>> > Okay with some small tweaks to your original code, I am able to 
> show the issue I am having. comment out flush to see issue: 
> >>>> 
> >>>> so what you're doing here is making Employee dependent on FundTitle, 
> >>>> which makes this a little out of the ordinary but this is fine.   You 
> >>>> need to give the ORM a clue that this dependency exists, since it 
> >>>> never looks at foreign key constraints unless you tell it to. 
> >>>> Adding a relationship to FundTitle that doesn't have viewonly=True is 
> >>>> an easy way to do this, there's no need to ever make use of the 
> >>>> relationship otherwise: 
> >>>> 
> >>>> class Employee(Base): 
> >>>> __tablename__ = 'employee' 
> >>>> 
> >>>> # ... 
> >>>> fund_title = relationship(FundTitle, viewonly=True) 
> >>>> 
> >>>> _ft_for_dependency = relationship(FundTitle) 
> >>>> 
> >>>> __table_args__ = ( 
> >>>> ForeignKeyConstraint( 
> >>>> (title_id, department_id, fund_id), 
> >>>> (FundTitle.title_id, FundTitle.department_id, 
> FundTitle.fund_id) 
> >>>> ), 
> >>>> ) 
> >>>> 
> >>>> then you can take the flush() out and there's no issue, as long as 
> >>>> you're always making sure that FundTitle object is present either in 
> >>>> the current Session or the row in the database exists. 
> >>>> 
> >>>> 
> >>>> > 
> >>>> > from sqlalchemy import * 
> >>>> > from sqlalchemy.orm import * 
> >>>> > from sqlalchemy.ext.declarative import declarative_base 
> >>>> > 
> >>>> > Base = declarative_base() 
>

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
let me get that. in the meantime, what are your thoughts on just removing 
the view only from the original relationship and then using an explicit 
primary join where none of the columns are marked foreign? Theoretically 
that should solve this problem, no?

On Wednesday, October 10, 2018 at 12:41:25 PM UTC-4, Alex Rothberg wrote:
>
> Is it possible to specific a non viewonly relationship in which I have a 
> primary join specified in which none of the fk's are marked "foreign"? ie 
> where I can mark the relationship dependancy but it wont set any columns? 
> It looks like there may be some logic in sqla that assume all columns are 
> fk if none are specified as foreign?
>
> On Wednesday, October 10, 2018 at 11:56:49 AM UTC-4, Alex Rothberg wrote:
>>
>> So one minor issue and one big issue with that solution:
>> 1. minor issue, I now get these: SAWarning: relationship '' will copy 
>> column to column , which conflicts with relationship(s): '
>> 2. major issue, I use raiseload("*") and now I start 
>> seeing: sqlalchemy.exc.InvalidRequestError: 'Employee._ft_for_dependency' 
>> is not available due to lazy='raise'
>>
>> On Wednesday, October 10, 2018 at 9:57:55 AM UTC-4, Mike Bayer wrote:
>>>
>>> On Tue, Oct 9, 2018 at 6:45 PM Alex Rothberg  
>>> wrote: 
>>> > 
>>> > Okay with some small tweaks to your original code, I am able to show 
>>> the issue I am having. comment out flush to see issue: 
>>>
>>> so what you're doing here is making Employee dependent on FundTitle, 
>>> which makes this a little out of the ordinary but this is fine.   You 
>>> need to give the ORM a clue that this dependency exists, since it 
>>> never looks at foreign key constraints unless you tell it to. 
>>> Adding a relationship to FundTitle that doesn't have viewonly=True is 
>>> an easy way to do this, there's no need to ever make use of the 
>>> relationship otherwise: 
>>>
>>> class Employee(Base): 
>>> __tablename__ = 'employee' 
>>>
>>> # ... 
>>> fund_title = relationship(FundTitle, viewonly=True) 
>>>
>>> _ft_for_dependency = relationship(FundTitle) 
>>>
>>> __table_args__ = ( 
>>> ForeignKeyConstraint( 
>>> (title_id, department_id, fund_id), 
>>> (FundTitle.title_id, FundTitle.department_id, 
>>> FundTitle.fund_id) 
>>> ), 
>>> ) 
>>>
>>> then you can take the flush() out and there's no issue, as long as 
>>> you're always making sure that FundTitle object is present either in 
>>> the current Session or the row in the database exists. 
>>>
>>>
>>> > 
>>> > from sqlalchemy import * 
>>> > from sqlalchemy.orm import * 
>>> > from sqlalchemy.ext.declarative import declarative_base 
>>> > 
>>> > Base = declarative_base() 
>>> > 
>>> > 
>>> > class Title(Base): 
>>> > __tablename__ = 'title' 
>>> > id = Column(Integer, primary_key=True) 
>>> > department_id = Column(ForeignKey('department.id'), 
>>> nullable=False) 
>>> > 
>>> > department = relationship(lambda: Department) 
>>> > 
>>> > 
>>> > class Department(Base): 
>>> > __tablename__ = 'department' 
>>> > id = Column(Integer, primary_key=True) 
>>> > 
>>> > 
>>> > class Fund(Base): 
>>> > __tablename__ = 'fund' 
>>> > id = Column(Integer, primary_key=True) 
>>> > title_id = Column(ForeignKey('title.id'), nullable=False) 
>>> > department_id = Column(ForeignKey('department.id'), 
>>> nullable=False) 
>>> > department = relationship("Department") 
>>> > title = relationship("Title") 
>>> > 
>>> > 
>>> > class FundTitle(Base): 
>>> > __tablename__ = 'fund_title' 
>>> > id = Column(Integer, primary_key=True) 
>>> > title_id = Column(ForeignKey('title.id'), nullable=False) 
>>> > department_id = Column(ForeignKey('department.id'), 
>>> nullable=False) 
>>> > fund_id = Column(ForeignKey('fund.id'), nullable=False) 
>>> > department = relationship("Department") 
>>> > title = relationship("Title") 
>>> > fund = relationship("Fund") 
>>> > 

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
Is it possible to specific a non viewonly relationship in which I have a 
primary join specified in which none of the fk's are marked "foreign"? ie 
where I can mark the relationship dependancy but it wont set any columns? 
It looks like there may be some logic in sqla that assume all columns are 
fk if none are specified as foreign?

On Wednesday, October 10, 2018 at 11:56:49 AM UTC-4, Alex Rothberg wrote:
>
> So one minor issue and one big issue with that solution:
> 1. minor issue, I now get these: SAWarning: relationship '' will copy 
> column to column , which conflicts with relationship(s): '
> 2. major issue, I use raiseload("*") and now I start 
> seeing: sqlalchemy.exc.InvalidRequestError: 'Employee._ft_for_dependency' 
> is not available due to lazy='raise'
>
> On Wednesday, October 10, 2018 at 9:57:55 AM UTC-4, Mike Bayer wrote:
>>
>> On Tue, Oct 9, 2018 at 6:45 PM Alex Rothberg  wrote: 
>> > 
>> > Okay with some small tweaks to your original code, I am able to show 
>> the issue I am having. comment out flush to see issue: 
>>
>> so what you're doing here is making Employee dependent on FundTitle, 
>> which makes this a little out of the ordinary but this is fine.   You 
>> need to give the ORM a clue that this dependency exists, since it 
>> never looks at foreign key constraints unless you tell it to. 
>> Adding a relationship to FundTitle that doesn't have viewonly=True is 
>> an easy way to do this, there's no need to ever make use of the 
>> relationship otherwise: 
>>
>> class Employee(Base): 
>> __tablename__ = 'employee' 
>>
>> # ... 
>> fund_title = relationship(FundTitle, viewonly=True) 
>>
>> _ft_for_dependency = relationship(FundTitle) 
>>
>> __table_args__ = ( 
>> ForeignKeyConstraint( 
>> (title_id, department_id, fund_id), 
>> (FundTitle.title_id, FundTitle.department_id, 
>> FundTitle.fund_id) 
>> ), 
>> ) 
>>
>> then you can take the flush() out and there's no issue, as long as 
>> you're always making sure that FundTitle object is present either in 
>> the current Session or the row in the database exists. 
>>
>>
>> > 
>> > from sqlalchemy import * 
>> > from sqlalchemy.orm import * 
>> > from sqlalchemy.ext.declarative import declarative_base 
>> > 
>> > Base = declarative_base() 
>> > 
>> > 
>> > class Title(Base): 
>> > __tablename__ = 'title' 
>> > id = Column(Integer, primary_key=True) 
>> > department_id = Column(ForeignKey('department.id'), 
>> nullable=False) 
>> > 
>> > department = relationship(lambda: Department) 
>> > 
>> > 
>> > class Department(Base): 
>> > __tablename__ = 'department' 
>> > id = Column(Integer, primary_key=True) 
>> > 
>> > 
>> > class Fund(Base): 
>> > __tablename__ = 'fund' 
>> > id = Column(Integer, primary_key=True) 
>> > title_id = Column(ForeignKey('title.id'), nullable=False) 
>> > department_id = Column(ForeignKey('department.id'), 
>> nullable=False) 
>> > department = relationship("Department") 
>> > title = relationship("Title") 
>> > 
>> > 
>> > class FundTitle(Base): 
>> > __tablename__ = 'fund_title' 
>> > id = Column(Integer, primary_key=True) 
>> > title_id = Column(ForeignKey('title.id'), nullable=False) 
>> > department_id = Column(ForeignKey('department.id'), 
>> nullable=False) 
>> > fund_id = Column(ForeignKey('fund.id'), nullable=False) 
>> > department = relationship("Department") 
>> > title = relationship("Title") 
>> > fund = relationship("Fund") 
>> > 
>> > __table_args__ = ( 
>> > UniqueConstraint( 
>> > title_id, department_id, fund_id 
>> > ), 
>> > ) 
>> > 
>> > 
>> > class Employee(Base): 
>> > __tablename__ = 'employee' 
>> > id = Column(Integer, primary_key=True) 
>> > title_id = Column(ForeignKey('title.id'), nullable=False) 
>> > department_id = Column(ForeignKey('department.id'), 
>> nullable=False) 
>> > fund_id = Column(ForeignKey('fund.id'), nullable=False) 
>> > 
>> > department = relationship(lambda: Department) 
>> > title = relationship("Title&q

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-10 Thread Alex Rothberg
So one minor issue and one big issue with that solution:
1. minor issue, I now get these: SAWarning: relationship '' will copy 
column to column , which conflicts with relationship(s): '
2. major issue, I use raiseload("*") and now I start 
seeing: sqlalchemy.exc.InvalidRequestError: 'Employee._ft_for_dependency' 
is not available due to lazy='raise'

On Wednesday, October 10, 2018 at 9:57:55 AM UTC-4, Mike Bayer wrote:
>
> On Tue, Oct 9, 2018 at 6:45 PM Alex Rothberg  > wrote: 
> > 
> > Okay with some small tweaks to your original code, I am able to show the 
> issue I am having. comment out flush to see issue: 
>
> so what you're doing here is making Employee dependent on FundTitle, 
> which makes this a little out of the ordinary but this is fine.   You 
> need to give the ORM a clue that this dependency exists, since it 
> never looks at foreign key constraints unless you tell it to. 
> Adding a relationship to FundTitle that doesn't have viewonly=True is 
> an easy way to do this, there's no need to ever make use of the 
> relationship otherwise: 
>
> class Employee(Base): 
> __tablename__ = 'employee' 
>
> # ... 
> fund_title = relationship(FundTitle, viewonly=True) 
>
> _ft_for_dependency = relationship(FundTitle) 
>
> __table_args__ = ( 
> ForeignKeyConstraint( 
> (title_id, department_id, fund_id), 
> (FundTitle.title_id, FundTitle.department_id, 
> FundTitle.fund_id) 
> ), 
> ) 
>
> then you can take the flush() out and there's no issue, as long as 
> you're always making sure that FundTitle object is present either in 
> the current Session or the row in the database exists. 
>
>
> > 
> > from sqlalchemy import * 
> > from sqlalchemy.orm import * 
> > from sqlalchemy.ext.declarative import declarative_base 
> > 
> > Base = declarative_base() 
> > 
> > 
> > class Title(Base): 
> > __tablename__ = 'title' 
> > id = Column(Integer, primary_key=True) 
> > department_id = Column(ForeignKey('department.id'), nullable=False) 
> > 
> > department = relationship(lambda: Department) 
> > 
> > 
> > class Department(Base): 
> > __tablename__ = 'department' 
> > id = Column(Integer, primary_key=True) 
> > 
> > 
> > class Fund(Base): 
> > __tablename__ = 'fund' 
> > id = Column(Integer, primary_key=True) 
> > title_id = Column(ForeignKey('title.id'), nullable=False) 
> > department_id = Column(ForeignKey('department.id'), nullable=False) 
> > department = relationship("Department") 
> > title = relationship("Title") 
> > 
> > 
> > class FundTitle(Base): 
> > __tablename__ = 'fund_title' 
> > id = Column(Integer, primary_key=True) 
> > title_id = Column(ForeignKey('title.id'), nullable=False) 
> > department_id = Column(ForeignKey('department.id'), nullable=False) 
> > fund_id = Column(ForeignKey('fund.id'), nullable=False) 
> > department = relationship("Department") 
> > title = relationship("Title") 
> > fund = relationship("Fund") 
> > 
> > __table_args__ = ( 
> > UniqueConstraint( 
> > title_id, department_id, fund_id 
> > ), 
> > ) 
> > 
> > 
> > class Employee(Base): 
> > __tablename__ = 'employee' 
> > id = Column(Integer, primary_key=True) 
> > title_id = Column(ForeignKey('title.id'), nullable=False) 
> > department_id = Column(ForeignKey('department.id'), nullable=False) 
> > fund_id = Column(ForeignKey('fund.id'), nullable=False) 
> > 
> > department = relationship(lambda: Department) 
> > title = relationship("Title") 
> > fund = relationship("Fund") 
> > 
> > fund_title = relationship(FundTitle, viewonly=True) 
> > 
> > 
> > __table_args__ = ( 
> > ForeignKeyConstraint( 
> > (title_id, department_id, fund_id), (FundTitle.title_id, 
> FundTitle.department_id, FundTitle.fund_id) 
> > ), 
> > ) 
> > 
> > 
> > e = create_engine("postgresql://localhost/test_issue", echo=False) 
> > 
> > # Base.metadata.drop_all(e) 
> > Base.metadata.create_all(e) 
> > 
> > s = Session(e) 
> > # s.rollback() 
> > 
> > while True: 
> > d1 = Department() 
> > t1 = Title(department=d1) 
> > f1 = Fund(department=d1, title=t1) 
> > ft1 = FundTitle(title=t1, d

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-09 Thread Alex Rothberg
Okay with some small tweaks to your original code, I am able to show the 
issue I am having. comment out flush to see issue:

from sqlalchemy import * 
from sqlalchemy.orm import * 
from sqlalchemy.ext.declarative import declarative_base 

Base = declarative_base() 


class Title(Base): 
__tablename__ = 'title' 
id = Column(Integer, primary_key=True) 
department_id = Column(ForeignKey('department.id'), nullable=False) 

department = relationship(lambda: Department) 


class Department(Base): 
__tablename__ = 'department' 
id = Column(Integer, primary_key=True) 


class Fund(Base): 
__tablename__ = 'fund' 
id = Column(Integer, primary_key=True) 
title_id = Column(ForeignKey('title.id'), nullable=False) 
department_id = Column(ForeignKey('department.id'), nullable=False) 
department = relationship("Department") 
title = relationship("Title") 


class FundTitle(Base): 
__tablename__ = 'fund_title' 
id = Column(Integer, primary_key=True) 
title_id = Column(ForeignKey('title.id'), nullable=False) 
department_id = Column(ForeignKey('department.id'), nullable=False) 
fund_id = Column(ForeignKey('fund.id'), nullable=False) 
department = relationship("Department") 
title = relationship("Title") 
fund = relationship("Fund") 

__table_args__ = (
UniqueConstraint(
title_id, department_id, fund_id
),
)   


class Employee(Base): 
__tablename__ = 'employee' 
id = Column(Integer, primary_key=True) 
title_id = Column(ForeignKey('title.id'), nullable=False) 
department_id = Column(ForeignKey('department.id'), nullable=False) 
fund_id = Column(ForeignKey('fund.id'), nullable=False) 

department = relationship(lambda: Department) 
title = relationship("Title") 
fund = relationship("Fund") 

fund_title = relationship(FundTitle, viewonly=True) 


__table_args__ = (
ForeignKeyConstraint(
(title_id, department_id, fund_id), (FundTitle.title_id, 
FundTitle.department_id, FundTitle.fund_id)
),
)


e = create_engine("postgresql://localhost/test_issue", echo=False) 

# Base.metadata.drop_all(e) 
Base.metadata.create_all(e) 

s = Session(e) 
# s.rollback()

while True:
d1 = Department() 
t1 = Title(department=d1) 
f1 = Fund(department=d1, title=t1) 
ft1 = FundTitle(title=t1, department=d1, fund=f1) 

s.add_all([d1, t1, f1,  ft1]) 

s.flush()

e1 = Employee(title=t1, department=d1, fund=f1) 

s.add_all([e1,]) 
s.commit() 

On Tuesday, October 9, 2018 at 12:20:30 PM UTC-4, Mike Bayer wrote:
>
> On Tue, Oct 9, 2018 at 10:44 AM Alex Rothberg  > wrote: 
> > 
> > In looking at what you wrote doesn't this cause an fk violation (it does 
> for me): 
> > 2018-10-08 10:18:38,760 INFO sqlalchemy.engine.base.Engine INSERT INTO 
> employee (title_id, department_id, fund_id) VALUES (%(title_id)s, 
> %(department_id)s, %(fund_id)s) RETURNING employee.id 
> > 2018-10-08 10:18:38,763 INFO sqlalchemy.engine.base.Engine INSERT INTO 
> fund_title (title_id, department_id, fund_id) VALUES (%(title_id)s, 
> %(department_id)s, %(fund_id)s) RETURNING fund_title.id 
> > 
> > in that a a (non deferred) fk is violated between employee and 
> fund_title ? 
>
> see we need to see how youve laid out your ForeignKeyConstraints, if 
> they are composite and overlapping, there are additional options that 
> may be needed (specifically the post_update flag).  you'll note I laid 
> out all FKs as single column. 
>
> > 
> > On Mon, Oct 8, 2018 at 10:20 AM Mike Bayer  > wrote: 
> >> 
> >> On Sun, Oct 7, 2018 at 7:11 PM Alex Rothberg  > wrote: 
> >> > 
> >> > Okay so I investigated / thought about this further. The issue is 
> that while I do have a relationship between the various models, some of the 
> relationships are viewonly since I have overlapping fks. 
> >> > 
> >> > For example I have a model Employee, which has fks: department_id, 
> title_id, and fund_id. The related models are Department (fk 
> department_id), Title (fk department_id and title_id) , Fund (fk fund_id) 
> and FundTitle (fk department_id, title_id and fund_id). I have set 
> FundTitle to viewonly. When updating / creating an Employee, I do create 
> and add a new FundTitle to the session, however I don't assign it to the 
> employee as the relationship is viewonly. If I don't flush before making 
> the assignment, the final flush / commit attempts to update / create the 
> employee before creating the FundTitle. 
> >> 
> >> let's work with source code that is runnable (e.g. MCVE).   Below is 
> >> the model that i

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-09 Thread Alex Rothberg
 I should say, I didn't run your exact code but essentially that ordering
is what is causing my issues with my code in that the new fund_title is
inserted after the new employee.

On Tue, Oct 9, 2018 at 10:44 AM Alex Rothberg  wrote:

> In looking at what you wrote doesn't this cause an fk violation (it does
> for me):
> 2018-10-08 10:18:38,760 INFO sqlalchemy.engine.base.Engine INSERT INTO
> employee (title_id, department_id, fund_id) VALUES (%(title_id)s,
> %(department_id)s, %(fund_id)s) RETURNING employee.id
> 2018-10-08 10:18:38,763 INFO sqlalchemy.engine.base.Engine INSERT INTO
> fund_title (title_id, department_id, fund_id) VALUES (%(title_id)s,
> %(department_id)s, %(fund_id)s) RETURNING fund_title.id
>
> in that a a (non deferred) fk is violated between employee and fund_title ?
>
> On Mon, Oct 8, 2018 at 10:20 AM Mike Bayer 
> wrote:
>
>> On Sun, Oct 7, 2018 at 7:11 PM Alex Rothberg 
>> wrote:
>> >
>> > Okay so I investigated / thought about this further. The issue is that
>> while I do have a relationship between the various models, some of the
>> relationships are viewonly since I have overlapping fks.
>> >
>> > For example I have a model Employee, which has fks: department_id,
>> title_id, and fund_id. The related models are Department (fk
>> department_id), Title (fk department_id and title_id) , Fund (fk fund_id)
>> and FundTitle (fk department_id, title_id and fund_id). I have set
>> FundTitle to viewonly. When updating / creating an Employee, I do create
>> and add a new FundTitle to the session, however I don't assign it to the
>> employee as the relationship is viewonly. If I don't flush before making
>> the assignment, the final flush / commit attempts to update / create the
>> employee before creating the FundTitle.
>>
>> let's work with source code that is runnable (e.g. MCVE).   Below is
>> the model that it seems you are describing, and then there's a
>> demonstration of assembly of all those components using relationships,
>> a single flush and it all goes in in the correct order, all FKs are
>> nullable=False.
>>
>> from sqlalchemy import *
>> from sqlalchemy.orm import *
>> from sqlalchemy.ext.declarative import declarative_base
>>
>> Base = declarative_base()
>>
>>
>> class Employee(Base):
>> __tablename__ = 'employee'
>> id = Column(Integer, primary_key=True)
>> title_id = Column(ForeignKey('title.id'), nullable=False)
>> department_id = Column(ForeignKey('department.id'), nullable=False)
>> fund_id = Column(ForeignKey('fund.id'), nullable=False)
>> department = relationship("Department")
>> title = relationship("Title")
>> fund = relationship("Fund")
>>
>>
>> class Title(Base):
>> __tablename__ = 'title'
>> id = Column(Integer, primary_key=True)
>> department_id = Column(ForeignKey('department.id'), nullable=False)
>> department = relationship("Department")
>>
>>
>> class Department(Base):
>> __tablename__ = 'department'
>> id = Column(Integer, primary_key=True)
>>
>>
>> class Fund(Base):
>> __tablename__ = 'fund'
>> id = Column(Integer, primary_key=True)
>> title_id = Column(ForeignKey('title.id'), nullable=False)
>> department_id = Column(ForeignKey('department.id'), nullable=False)
>> department = relationship("Department")
>> title = relationship("Title")
>>
>>
>> class FundTitle(Base):
>> __tablename__ = 'fund_title'
>> id = Column(Integer, primary_key=True)
>> title_id = Column(ForeignKey('title.id'), nullable=False)
>> department_id = Column(ForeignKey('department.id'), nullable=False)
>> fund_id = Column(ForeignKey('fund.id'), nullable=False)
>> department = relationship("Department")
>> title = relationship("Title")
>> fund = relationship("Fund")
>>
>> e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
>> Base.metadata.create_all(e)
>>
>> s = Session(e)
>>
>> d1 = Department()
>> t1 = Title(department=d1)
>> f1 = Fund(department=d1, title=t1)
>> ft1 = FundTitle(title=t1, department=d1, fund=f1)
>> e1 = Employee(title=t1, department=d1, fund=f1)
>>
>> s.add_all([d1, t1, f1, ft1, e1])
>> s.commit()
>>
>>
>> the INSERTs can be ordered naturally here and the unit of work will do
>> that for you if you use relationship:
>>
>> BEGIN (implicit)
>> 2

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-09 Thread Alex Rothberg
In looking at what you wrote doesn't this cause an fk violation (it does
for me):
2018-10-08 10:18:38,760 INFO sqlalchemy.engine.base.Engine INSERT INTO
employee (title_id, department_id, fund_id) VALUES (%(title_id)s,
%(department_id)s, %(fund_id)s) RETURNING employee.id
2018-10-08 10:18:38,763 INFO sqlalchemy.engine.base.Engine INSERT INTO
fund_title (title_id, department_id, fund_id) VALUES (%(title_id)s,
%(department_id)s, %(fund_id)s) RETURNING fund_title.id

in that a a (non deferred) fk is violated between employee and fund_title ?

On Mon, Oct 8, 2018 at 10:20 AM Mike Bayer  wrote:

> On Sun, Oct 7, 2018 at 7:11 PM Alex Rothberg  wrote:
> >
> > Okay so I investigated / thought about this further. The issue is that
> while I do have a relationship between the various models, some of the
> relationships are viewonly since I have overlapping fks.
> >
> > For example I have a model Employee, which has fks: department_id,
> title_id, and fund_id. The related models are Department (fk
> department_id), Title (fk department_id and title_id) , Fund (fk fund_id)
> and FundTitle (fk department_id, title_id and fund_id). I have set
> FundTitle to viewonly. When updating / creating an Employee, I do create
> and add a new FundTitle to the session, however I don't assign it to the
> employee as the relationship is viewonly. If I don't flush before making
> the assignment, the final flush / commit attempts to update / create the
> employee before creating the FundTitle.
>
> let's work with source code that is runnable (e.g. MCVE).   Below is
> the model that it seems you are describing, and then there's a
> demonstration of assembly of all those components using relationships,
> a single flush and it all goes in in the correct order, all FKs are
> nullable=False.
>
> from sqlalchemy import *
> from sqlalchemy.orm import *
> from sqlalchemy.ext.declarative import declarative_base
>
> Base = declarative_base()
>
>
> class Employee(Base):
> __tablename__ = 'employee'
> id = Column(Integer, primary_key=True)
> title_id = Column(ForeignKey('title.id'), nullable=False)
> department_id = Column(ForeignKey('department.id'), nullable=False)
> fund_id = Column(ForeignKey('fund.id'), nullable=False)
> department = relationship("Department")
> title = relationship("Title")
> fund = relationship("Fund")
>
>
> class Title(Base):
> __tablename__ = 'title'
> id = Column(Integer, primary_key=True)
> department_id = Column(ForeignKey('department.id'), nullable=False)
> department = relationship("Department")
>
>
> class Department(Base):
> __tablename__ = 'department'
> id = Column(Integer, primary_key=True)
>
>
> class Fund(Base):
> __tablename__ = 'fund'
> id = Column(Integer, primary_key=True)
> title_id = Column(ForeignKey('title.id'), nullable=False)
> department_id = Column(ForeignKey('department.id'), nullable=False)
> department = relationship("Department")
> title = relationship("Title")
>
>
> class FundTitle(Base):
> __tablename__ = 'fund_title'
> id = Column(Integer, primary_key=True)
> title_id = Column(ForeignKey('title.id'), nullable=False)
> department_id = Column(ForeignKey('department.id'), nullable=False)
> fund_id = Column(ForeignKey('fund.id'), nullable=False)
> department = relationship("Department")
> title = relationship("Title")
> fund = relationship("Fund")
>
> e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
> Base.metadata.create_all(e)
>
> s = Session(e)
>
> d1 = Department()
> t1 = Title(department=d1)
> f1 = Fund(department=d1, title=t1)
> ft1 = FundTitle(title=t1, department=d1, fund=f1)
> e1 = Employee(title=t1, department=d1, fund=f1)
>
> s.add_all([d1, t1, f1, ft1, e1])
> s.commit()
>
>
> the INSERTs can be ordered naturally here and the unit of work will do
> that for you if you use relationship:
>
> BEGIN (implicit)
> 2018-10-08 10:18:38,750 INFO sqlalchemy.engine.base.Engine INSERT INTO
> department DEFAULT VALUES RETURNING department.id
> 2018-10-08 10:18:38,750 INFO sqlalchemy.engine.base.Engine {}
> 2018-10-08 10:18:38,753 INFO sqlalchemy.engine.base.Engine INSERT INTO
> title (department_id) VALUES (%(department_id)s) RETURNING title.id
> 2018-10-08 10:18:38,753 INFO sqlalchemy.engine.base.Engine
> {'department_id': 1}
> 2018-10-08 10:18:38,757 INFO sqlalchemy.engine.base.Engine INSERT INTO
> fund (title_id, department_id) VALUES (%(title_id)s,
> %(department_id)s) RETURNING fund.id
> 2018-10-08 10:18:38,757 INFO sqlalchemy.engine.base.Engine

Re: [sqlalchemy] Controlling table dependency for flushing

2018-10-07 Thread Alex Rothberg
Okay so I investigated / thought about this further. The issue is that 
while I do have a relationship between the various models, some of the 
relationships are viewonly since I have overlapping fks.

For example I have a model Employee, which has fks: department_id, 
title_id, and fund_id. The related models are Department (fk 
department_id), Title (fk department_id and title_id) , Fund (fk fund_id) 
and FundTitle (fk department_id, title_id and fund_id). I have set 
FundTitle to viewonly. When updating / creating an Employee, I do create 
and add a new FundTitle to the session, however I don't assign it to the 
employee as the relationship is viewonly. If I don't flush before making 
the assignment, the final flush / commit attempts to update / create the 
employee before creating the FundTitle.

On Tuesday, September 18, 2018 at 9:02:30 AM UTC-4, Mike Bayer wrote:
>
> if there are no dependencies between two particular objects of 
> different classes, say A and B, then there is no deterministic 
> ordering between them.   For objects of the same class, they are 
> inserted in the order in which they were added to the Session. 
>
> the correct way to solve this problem in SQLAlchemy is to use 
> relationship() fully.  I know you've stated that these objects have a 
> relationship() between them but you have to actually use it, that is: 
>
> obj_a = A() 
> obj_b = B() 
>
> obj_a.some_relationship = obj_b   # will definitely flush correctly 
> unless there is a bug 
>
> OTOH if you are only using foreign key attributes, the ORM does *not* 
> have any idea in how it should be flushing these: 
>
> obj_a = A() 
> obj_b = B() 
>
> obj_a.some_fk = obj_b.some_id# ORM doesn't care about this, no 
> ordering is implied 
>
>
> since you said you're not setting any IDs, I'm not sure how you could 
> be doing the above. 
>
>
>
>
>
>
> On Tue, Sep 18, 2018 at 5:53 AM Simon King  > wrote: 
> > 
> > It's not something I've ever looked into, but I'm not aware of any 
> > debugging options here, no. You'd probably want to start by scattering 
> > print statements around the UOWTransaction class 
> > (
> https://bitbucket.org/zzzeek/sqlalchemy/src/c94d67892e68ac317d72eb202cca427084b3ca74/lib/sqlalchemy/orm/unitofwork.py?at=master=file-view-default#unitofwork.py-111)
>  
>
> > 
> > Looking at that code made me wonder whether you've set any particular 
> > cascade options on your relationship; I'm not sure if cascade options 
> > affect the dependency calculation. 
> > 
> > Simon 
> > 
> > On Tue, Sep 18, 2018 at 5:28 AM Alex Rothberg  > wrote: 
> > > 
> > > In order to guide me in stripping down this code to produce an example 
> for positing, are there any options / flags / introspections I can turn on 
> to understand how sql makes decisions about the order in which is writes 
> statements to the DB? 
> > > 
> > > On Friday, September 14, 2018 at 10:13:45 AM UTC-4, Simon King wrote: 
> > >> 
> > >> In that case can you show us the code that is causing the problem? 
> > >> On Fri, Sep 14, 2018 at 2:55 PM Alex Rothberg  
> wrote: 
> > >> > 
> > >> > I am not generating any IDs myself and I already have relationships 
> between the models. 
> > >> > 
> > >> > On Friday, September 14, 2018 at 4:33:08 AM UTC-4, Simon King 
> wrote: 
> > >> >> 
> > >> >> On Thu, Sep 13, 2018 at 10:50 PM Alex Rothberg  
> wrote: 
> > >> >> > 
> > >> >> > Is it possible to hint at sqla the order in which it should 
> write out changes to the DB? 
> > >> >> > 
> > >> >> > I am having issues in which I add two new objects to a session, 
> a and b where a depends on b, but sqla is flushing a before b leading to an 
> fk issue. I can solve this a few ways: explicitly calling flush after 
> adding b, or changing the fk constraint to be initially deferred. Ideally I 
> would not have to do either of these. 
> > >> >> > 
> > >> >> 
> > >> >> If you have configured a relationship between the two classes 
> > >> >> (
> http://docs.sqlalchemy.org/en/latest/orm/tutorial.html#building-a-relationship),
>  
>
> > >> >> and you've linked the objects together using that relationship 
> (a.b = 
> > >> >> b), then SQLAlchemy will flush them in the correct order. If you 
> are 
> > >> >> generating your IDs in Python and assigning them to the primary 
> and 
> > >> >> foreign key columns directly, SQLAlchemy pr

Re: Migrating PEP-435 Enums

2018-09-24 Thread Alex Rothberg
This seems to work / provide a good template of how to get that 
info: https://github.com/dw/alembic-autogenerate-enums

On Monday, September 24, 2018 at 5:19:39 PM UTC-4, Alex Rothberg wrote:
>
> and is there an easy way to progrmatically get the name of the enum from 
> the model field (given I declared it as as above)?
>
> On Monday, September 24, 2018 at 5:16:44 PM UTC-4, Mike Bayer wrote:
>>
>> you don't gain much since it only works on Postgresql anyway.Also, 
>> the syntax you suggested wouldn't work, because Postgresql needs to 
>> know the name of the enumeration. 
>>
>> This is part of why all the "enum" issues for alembic are just open. 
>>   The way PG does it vs. MySQL are immensely different, and then none 
>> of the other databases have an ENUM type.Your request for an 
>> "op.alter_column()" directive is basically asking for those issues to 
>> be done.   I'm on a long term search for code contributors who can 
>> work on that stuff, ENUM is going to be very hard to work front to 
>> back in all cases. 
>>
>>
>>
>>
>>
>>
>> On Mon, Sep 24, 2018 at 2:49 PM Alex Rothberg  
>> wrote: 
>> > 
>> > is there no way to get this alter statement without writing raw sql? 
>> > e.g. something like: op.alter_column("my_table", "my_column", 
>> existing_type=ENUM(...), type_=ENUM()) ? 
>> > 
>> > On Monday, September 24, 2018 at 2:36:52 PM UTC-4, Mike Bayer wrote: 
>> >> 
>> >> Postgresql ENUMs are entirely different from any other database so it 
>> >> matters a lot.  For PG, you'd want to be doing op.execute("ALTER TYPE 
>> >> myenum ..."), full syntax is at 
>> >> https://www.postgresql.org/docs/9.1/static/sql-altertype.html 
>> >> On Mon, Sep 24, 2018 at 12:45 PM Alex Rothberg  
>> wrote: 
>> >> > 
>> >> > Assuming that I am using the PEP-435 enum feature in SQLA, e.g.: 
>> >> > class InvitationReason(str, enum.Enum): 
>> >> > ORIGINAL_ADMIN = "ORIGINAL_ADMIN" 
>> >> > FIRM_USER = "FIRM_USER" 
>> >> > ... 
>> >> > 
>> >> > reason = db.Column(db.Enum(InvitationReason), nullable=False) 
>> >> > 
>> >> > and I want to add / change the values in the enum. I know that 
>> alembic won't auto generate the migration. Given that, what is the simplest 
>> way to specify the migration by hand? I am using postgres, if that matters. 
>> >> > 
>> >> > -- 
>> >> > You received this message because you are subscribed to the Google 
>> Groups "sqlalchemy-alembic" group. 
>> >> > To unsubscribe from this group and stop receiving emails from it, 
>> send an email to sqlalchemy-alembic+unsubscr...@googlegroups.com. 
>> >> > For more options, visit https://groups.google.com/d/optout. 
>> > 
>> > -- 
>> > You received this message because you are subscribed to the Google 
>> Groups "sqlalchemy-alembic" group. 
>> > To unsubscribe from this group and stop receiving emails from it, send 
>> an email to sqlalchemy-alembic+unsubscr...@googlegroups.com. 
>> > For more options, visit https://groups.google.com/d/optout. 
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Migrating PEP-435 Enums

2018-09-24 Thread Alex Rothberg
and is there an easy way to progrmatically get the name of the enum from 
the model field (given I declared it as as above)?

On Monday, September 24, 2018 at 5:16:44 PM UTC-4, Mike Bayer wrote:
>
> you don't gain much since it only works on Postgresql anyway.Also, 
> the syntax you suggested wouldn't work, because Postgresql needs to 
> know the name of the enumeration. 
>
> This is part of why all the "enum" issues for alembic are just open. 
>   The way PG does it vs. MySQL are immensely different, and then none 
> of the other databases have an ENUM type.Your request for an 
> "op.alter_column()" directive is basically asking for those issues to 
> be done.   I'm on a long term search for code contributors who can 
> work on that stuff, ENUM is going to be very hard to work front to 
> back in all cases. 
>
>
>
>
>
>
> On Mon, Sep 24, 2018 at 2:49 PM Alex Rothberg  > wrote: 
> > 
> > is there no way to get this alter statement without writing raw sql? 
> > e.g. something like: op.alter_column("my_table", "my_column", 
> existing_type=ENUM(...), type_=ENUM()) ? 
> > 
> > On Monday, September 24, 2018 at 2:36:52 PM UTC-4, Mike Bayer wrote: 
> >> 
> >> Postgresql ENUMs are entirely different from any other database so it 
> >> matters a lot.  For PG, you'd want to be doing op.execute("ALTER TYPE 
> >> myenum ..."), full syntax is at 
> >> https://www.postgresql.org/docs/9.1/static/sql-altertype.html 
> >> On Mon, Sep 24, 2018 at 12:45 PM Alex Rothberg  
> wrote: 
> >> > 
> >> > Assuming that I am using the PEP-435 enum feature in SQLA, e.g.: 
> >> > class InvitationReason(str, enum.Enum): 
> >> > ORIGINAL_ADMIN = "ORIGINAL_ADMIN" 
> >> > FIRM_USER = "FIRM_USER" 
> >> > ... 
> >> > 
> >> > reason = db.Column(db.Enum(InvitationReason), nullable=False) 
> >> > 
> >> > and I want to add / change the values in the enum. I know that 
> alembic won't auto generate the migration. Given that, what is the simplest 
> way to specify the migration by hand? I am using postgres, if that matters. 
> >> > 
> >> > -- 
> >> > You received this message because you are subscribed to the Google 
> Groups "sqlalchemy-alembic" group. 
> >> > To unsubscribe from this group and stop receiving emails from it, 
> send an email to sqlalchemy-alembic+unsubscr...@googlegroups.com 
> . 
> >> > For more options, visit https://groups.google.com/d/optout. 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups "sqlalchemy-alembic" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to sqlalchemy-alembic+unsubscr...@googlegroups.com . 
>
> > For more options, visit https://groups.google.com/d/optout. 
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Migrating PEP-435 Enums

2018-09-24 Thread Alex Rothberg
Assuming that I am using the PEP-435 enum feature in SQLA, e.g.:
class InvitationReason(str, enum.Enum):
ORIGINAL_ADMIN = "ORIGINAL_ADMIN"
FIRM_USER = "FIRM_USER"
...

reason = db.Column(db.Enum(InvitationReason), nullable=False)

and I want to add / change the values in the enum. I know that alembic 
won't auto generate the migration. Given that, what is the simplest way to 
specify the migration by hand? I am using postgres, if that matters.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Setting join_depth on Query

2018-09-20 Thread Alex Rothberg
Is there anyway to set the join_depth on the options, or do I just have to 
write that myself with a for loop?

On Thursday, September 20, 2018 at 8:50:59 AM UTC-4, Mike Bayer wrote:
>
> On Wed, Sep 19, 2018 at 5:26 PM Alex Rothberg  > wrote: 
> > 
> > Following up on 
> https://groups.google.com/forum/#!searchin/sqlalchemy/join_depth%7Csort:date/sqlalchemy/WstKKbEFaRo/hL910npaBQAJ
>  
> and 
> https://stackoverflow.com/questions/4381712/how-do-you-dynamically-adjust-the-recursion-depth-for-eager-loading-in-the-sqlal,
>  
> is there any way to set the join_depth on the query object rather than on 
> the relationship? 
> > 
> > Right now I have: 
> > class Geography(db.Model): 
> > id = db.Column(UUID, default=uuid.uuid4, primary_key=True) 
> > name = db.Column(db.String(), nullable=False, unique=True) 
> > parent_geography_id = db.Column(UUID, db.ForeignKey(id)) 
> > children = db.relationship( 
> > lambda: Geography, 
> > lazy="joined", 
> > join_depth=3, 
> > backref=backref("parent", remote_side=[id]), 
> > ) 
> > 
> > however if I would like to customize the join_depth on the query. 
> > 
> > A related issue is that if I then take 
> Geography.query.options(raiseload("*", sql_only=True)), the join_depth 
> seems to be lost and I just get an exception. Also printing the query when 
> the options is set shows that the join_depth is not used. 
>
> a loader option supersedes join_depth entirely, because you are 
> setting it directly, e.g. 
>
> query.options(joinedload(Geography.parent).joinedload(Geography.parent)) 
>
>
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to sqlalchemy+...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
> . 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Setting join_depth on Query

2018-09-19 Thread Alex Rothberg
Following up 
on 
https://groups.google.com/forum/#!searchin/sqlalchemy/join_depth%7Csort:date/sqlalchemy/WstKKbEFaRo/hL910npaBQAJ
 
and 
https://stackoverflow.com/questions/4381712/how-do-you-dynamically-adjust-the-recursion-depth-for-eager-loading-in-the-sqlal,
 
is there any way to set the join_depth on the query object rather than on 
the relationship?

Right now I have:
class Geography(db.Model):
id = db.Column(UUID, default=uuid.uuid4, primary_key=True)
name = db.Column(db.String(), nullable=False, unique=True)
parent_geography_id = db.Column(UUID, db.ForeignKey(id))
children = db.relationship(
lambda: Geography,
lazy="joined",
join_depth=3,
backref=backref("parent", remote_side=[id]),
)

however if I would like to customize the join_depth on the query.

A related issue is that if I then take Geography.query.options(raiseload("*", 
sql_only=True)), the join_depth seems to be lost and I just get an 
exception. Also printing the query when the options is set shows that the 
join_depth is not used.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Controlling table dependency for flushing

2018-09-17 Thread Alex Rothberg
In order to guide me in stripping down this code to produce an example for 
positing, are there any options / flags / introspections I can turn on to 
understand how sql makes decisions about the order in which is writes 
statements to the DB?

On Friday, September 14, 2018 at 10:13:45 AM UTC-4, Simon King wrote:
>
> In that case can you show us the code that is causing the problem? 
> On Fri, Sep 14, 2018 at 2:55 PM Alex Rothberg  > wrote: 
> > 
> > I am not generating any IDs myself and I already have relationships 
> between the models. 
> > 
> > On Friday, September 14, 2018 at 4:33:08 AM UTC-4, Simon King wrote: 
> >> 
> >> On Thu, Sep 13, 2018 at 10:50 PM Alex Rothberg  
> wrote: 
> >> > 
> >> > Is it possible to hint at sqla the order in which it should write out 
> changes to the DB? 
> >> > 
> >> > I am having issues in which I add two new objects to a session, a and 
> b where a depends on b, but sqla is flushing a before b leading to an fk 
> issue. I can solve this a few ways: explicitly calling flush after adding 
> b, or changing the fk constraint to be initially deferred. Ideally I would 
> not have to do either of these. 
> >> > 
> >> 
> >> If you have configured a relationship between the two classes 
> >> (
> http://docs.sqlalchemy.org/en/latest/orm/tutorial.html#building-a-relationship),
>  
>
> >> and you've linked the objects together using that relationship (a.b = 
> >> b), then SQLAlchemy will flush them in the correct order. If you are 
> >> generating your IDs in Python and assigning them to the primary and 
> >> foreign key columns directly, SQLAlchemy probably won't understand the 
> >> dependency. 
> >> 
> >> Does using a relationship fix your problem? 
> >> 
> >> Simon 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to sqlalchemy+...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
> . 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Controlling table dependency for flushing

2018-09-14 Thread Alex Rothberg
I am not generating any IDs myself and I already have relationships between 
the models.

On Friday, September 14, 2018 at 4:33:08 AM UTC-4, Simon King wrote:
>
> On Thu, Sep 13, 2018 at 10:50 PM Alex Rothberg  > wrote: 
> > 
> > Is it possible to hint at sqla the order in which it should write out 
> changes to the DB? 
> > 
> > I am having issues in which I add two new objects to a session, a and b 
> where a depends on b, but sqla is flushing a before b leading to an fk 
> issue. I can solve this a few ways: explicitly calling flush after adding 
> b, or changing the fk constraint to be initially deferred. Ideally I would 
> not have to do either of these. 
> > 
>
> If you have configured a relationship between the two classes 
> (
> http://docs.sqlalchemy.org/en/latest/orm/tutorial.html#building-a-relationship),
>  
>
> and you've linked the objects together using that relationship (a.b = 
> b), then SQLAlchemy will flush them in the correct order. If you are 
> generating your IDs in Python and assigning them to the primary and 
> foreign key columns directly, SQLAlchemy probably won't understand the 
> dependency. 
>
> Does using a relationship fix your problem? 
>
> Simon 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Controlling table dependency for flushing

2018-09-13 Thread Alex Rothberg
Is it possible to hint at sqla the order in which it should write out 
changes to the DB?

I am having issues in which I add two new objects to a session, a and b 
where a depends on b, but sqla is flushing a before b leading to an fk 
issue. I can solve this a few ways: explicitly calling flush after adding 
b, or changing the fk constraint to be initially deferred. Ideally I would 
not have to do either of these.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] ForeignKeyConstraint using Forward Declared Model

2018-09-04 Thread Alex Rothberg
I tracked down the error on my side. Looks like I have to use the table 
name rather than the model name (doh) in the string. That being said, there 
may still be a bug in sqla where it tries to read the name off a join 
(rather than a table).

That being said, any reason not to support the lambda syntax for 
ForeignKeyConstraint rather than just the string syntax?

On Tuesday, September 4, 2018 at 10:43:13 PM UTC-4, Alex Rothberg wrote:
>
> You're right the error I posted is coming from somewhere else. I am trying 
> to get a stripped down example. In the meantime, it looks like when I add 
> the additional fk constraint, model.__mapper__.get_property(property_name) 
> on a different model starts failing.
>
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/marshmallow_sqlalchemy/convert.py"
> , line 151, in field_for
> prop = model.__mapper__.get_property(property_name)
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py"
> , line 1923, in get_property
> configure_mappers()
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py"
> , line 3033, in configure_mappers
> mapper._post_configure_properties()
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py"
> , line 1832, in _post_configure_properties
> prop.init()
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/interfaces.py"
> , line 183, in init
> self.do_init()
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
> , line 1656, in do_init
> self._setup_join_conditions()
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
> , line 1731, in _setup_join_conditions
> can_be_synced_fn=self._columns_are_mapped
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
> , line 1998, in __init__
> self._determine_joins()
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
> , line 2082, in _determine_joins
> consider_as_foreign_keys=consider_as_foreign_keys
>   File "", line 2, in join_condition
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py"
> , line 964, in _join_condition
> a, a_subset, b, consider_as_foreign_keys)
>   File 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py"
> , line 1021, in _joincond_scan_left_right
> if nrte.table_name == b.name:
> AttributeError: 'Join' object has no attribute 'name'
>
>
> On Tuesday, September 4, 2018 at 9:40:11 PM UTC-4, Mike Bayer wrote:
>>
>> On Tue, Sep 4, 2018 at 7:54 PM, Alex Rothberg  
>> wrote: 
>> > Is it possible to set up a `ForeignKeyConstraint` that uses a class not 
>> yet 
>> > declared? ie is there a way to use either the lambda or string syntax 
>> to 
>> > forward declare the fk constrains? Neither works for me. Using strings 
>> > yields: 
>> > 
>> >   File "", line 2, in join_condition 
>> >   File 
>> > 
>> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py",
>>  
>>
>> > line 964, in _join_condition 
>> > a, a_subset, b, consider_as_foreign_keys) 
>> >   File 
>> > 
>> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py",
>>  
>>
>> > line 1021, in _joincond_scan_left_right 
>> > if nrte.table_name == b.name: 
>> > AttributeError: 'Join' object has no attribute 'name' 
>> > 
>> > and I can't get the lambda form to work. 
>> > I tried: 
>> > db.ForeignKeyConstraint((employee_id, year, home_fund_id), 
>> > ('FundEmployee.employee_id', 'FundEmployee.year', 
>> 'FundEmployee.fund_id')) 
>>
>> ForeignKeyConstraint can be fully declared with just strings and the 
>> referenced table and/or declarative class doesn't need to exist yet, 
>> see 
>> http://docs.sqlalchemy.org/en/latest/core/constraints.html#metadata-foreignkeys.
>>  
>>
>>That AttributeError doesn't seem to be raised by a 
>> ForeignKeyConstraint, looks like it's coming from orm.relatiionship or 
>> something.   Feel free to provide a more complete example of what 
>> you're 

Re: [sqlalchemy] ForeignKeyConstraint using Forward Declared Model

2018-09-04 Thread Alex Rothberg
You're right the error I posted is coming from somewhere else. I am trying 
to get a stripped down example. In the meantime, it looks like when I add 
the additional fk constraint, model.__mapper__.get_property(property_name) 
on a different model starts failing.

  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/marshmallow_sqlalchemy/convert.py"
, line 151, in field_for
prop = model.__mapper__.get_property(property_name)
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py"
, line 1923, in get_property
configure_mappers()
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py"
, line 3033, in configure_mappers
mapper._post_configure_properties()
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py"
, line 1832, in _post_configure_properties
prop.init()
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/interfaces.py"
, line 183, in init
self.do_init()
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
, line 1656, in do_init
self._setup_join_conditions()
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
, line 1731, in _setup_join_conditions
can_be_synced_fn=self._columns_are_mapped
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
, line 1998, in __init__
self._determine_joins()
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py"
, line 2082, in _determine_joins
consider_as_foreign_keys=consider_as_foreign_keys
  File "", line 2, in join_condition
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py"
, line 964, in _join_condition
a, a_subset, b, consider_as_foreign_keys)
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py"
, line 1021, in _joincond_scan_left_right
if nrte.table_name == b.name:
AttributeError: 'Join' object has no attribute 'name'


On Tuesday, September 4, 2018 at 9:40:11 PM UTC-4, Mike Bayer wrote:
>
> On Tue, Sep 4, 2018 at 7:54 PM, Alex Rothberg  > wrote: 
> > Is it possible to set up a `ForeignKeyConstraint` that uses a class not 
> yet 
> > declared? ie is there a way to use either the lambda or string syntax to 
> > forward declare the fk constrains? Neither works for me. Using strings 
> > yields: 
> > 
> >   File "", line 2, in join_condition 
> >   File 
> > 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py",
>  
>
> > line 964, in _join_condition 
> > a, a_subset, b, consider_as_foreign_keys) 
> >   File 
> > 
> "/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py",
>  
>
> > line 1021, in _joincond_scan_left_right 
> > if nrte.table_name == b.name: 
> > AttributeError: 'Join' object has no attribute 'name' 
> > 
> > and I can't get the lambda form to work. 
> > I tried: 
> > db.ForeignKeyConstraint((employee_id, year, home_fund_id), 
> > ('FundEmployee.employee_id', 'FundEmployee.year', 
> 'FundEmployee.fund_id')) 
>
> ForeignKeyConstraint can be fully declared with just strings and the 
> referenced table and/or declarative class doesn't need to exist yet, 
> see 
> http://docs.sqlalchemy.org/en/latest/core/constraints.html#metadata-foreignkeys.
>  
>
>That AttributeError doesn't seem to be raised by a 
> ForeignKeyConstraint, looks like it's coming from orm.relatiionship or 
> something.   Feel free to provide a more complete example of what 
> you're trying to do. 
>
>
> > 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to sqlalchemy+...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
> . 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.

[sqlalchemy] ForeignKeyConstraint using Forward Declared Model

2018-09-04 Thread Alex Rothberg
Is it possible to set up a `ForeignKeyConstraint` that uses a class not yet 
declared? ie is there a way to use either the lambda or string syntax to 
forward declare the fk constrains? Neither works for me. Using strings 
yields:

  File "", line 2, in join_condition
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py"
, line 964, in _join_condition
a, a_subset, b, consider_as_foreign_keys)
  File 
"/Users/alex/.pyenv/versions/api2/lib/python3.7/site-packages/sqlalchemy/sql/selectable.py"
, line 1021, in _joincond_scan_left_right
if nrte.table_name == b.name:
AttributeError: 'Join' object has no attribute 'name'

and I can't get the lambda form to work.
I tried:
db.ForeignKeyConstraint((employee_id, year, home_fund_id), (
'FundEmployee.employee_id', 'FundEmployee.year', 'FundEmployee.fund_id'))


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Setting many to many collection with secondary relationship when having only pk to one of the models.

2018-08-23 Thread Alex Rothberg
I didn't mean to confuse this question by showing both formats of the 
many-to-many relationship (using secondary and not); I am aware that using 
both can lead to problems / inconsistencies.

I just wanted to show that I had both options available at my disposal. Is 
there anyway to use the secondary relationship (rather than the association 
one) with some combination of cascading options to get what I want where I 
can create and use a Geography knowing only its pk without the ORM trying 
to then save it to the DB?

On Thursday, August 23, 2018 at 1:02:40 PM UTC-4, Mike Bayer wrote:
>
> On Wed, Aug 22, 2018 at 5:41 PM, Alex Rothberg  > wrote: 
> > I am using an association model / table to represent a many to many 
> > relationship: 
> > 
> > class Geography(db.Model): 
> > 
> > id = 
> > ... 
> > 
> > class Fund(db.Model): 
> > id = 
> > ... 
> > geography_associations = db.relationship( 
> > lambda: FundGeographyAssociation, 
> > back_populates="fund", 
> > cascade='save-update, merge, delete, delete-orphan' 
> > ) 
> > 
> > geographies = db.relationship( 
> > Geography, 
> > backref="fund", 
> > secondary=lambda: FundGeographyAssociation.__table__, 
> > ) 
> > 
> > class FundGeographyAssociation(db.Model): 
> > fund_id = db.Column( 
> > UUID, db.ForeignKey(Fund.id), primary_key=True, 
> > ) 
> > geography_id = db.Column( 
> > UUID, db.ForeignKey(Geography.id), primary_key=True, 
> > ) 
> > 
> > fund = db.relationship(Fund, 
> back_populates='geography_associations') 
> > 
> > 
> > and then am attempting to update the list of geographies for a Fund 
> using: 
> >fund.geographies = [] 
> > 
> > 
> > my issue is what to put in ??? when I only have the pk of the geography 
> > model. 
>
> it is not a recommended pattern to re-purpose a mapped association 
> class as a "secondary" elsewhere.  The ORM does not know that 
> Fund.geography_associations and Fund.geographies refer to the same 
> table and mutations to each of these independently will conflict (see 
>
> http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#association-object)
>  
>
> .   The usual pattern is to use an association proxy for 
> Fund.geographies (see 
>
> http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html#simplifying-association-objects).
>  
>
>
> If you want to add a row having only the id of Geography, the most 
> straightforward approach is to append the association object directly: 
>
> fund.geography_associations = [FundGeoAssoc(geo_id=1)] 
>
>
> > 
> > this works: Geography.query.get(id) however this does not: 
> Geography(id=id) 
> > as the latter tries to create a new Geography object leading to 
> conflicts. 
> > The former seems "silly" as it requires an extra query to db to load the 
> > object even though all i need is the geography id to create the 
> association 
> > object. I tried variation of session.merge with load=False however that 
> > doesn't work as the object is transient. 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to sqlalchemy+...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
> . 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Setting many to many collection with secondary relationship when having only pk to one of the models.

2018-08-22 Thread Alex Rothberg
I am using an association model / table to represent a many to many 
relationship:

class Geography(db.Model):

id = 
...

class Fund(db.Model):
id = 
...
geography_associations = db.relationship(
lambda: FundGeographyAssociation,
back_populates="fund",
cascade='save-update, merge, delete, delete-orphan'
)

geographies = db.relationship(
Geography,
backref="fund",
secondary=lambda: FundGeographyAssociation.__table__,
)

class FundGeographyAssociation(db.Model):
fund_id = db.Column(
UUID, db.ForeignKey(Fund.id), primary_key=True,
)
geography_id = db.Column(
UUID, db.ForeignKey(Geography.id), primary_key=True,
)

fund = db.relationship(Fund, back_populates='geography_associations')


and then am attempting to update the list of geographies for a Fund using:
   fund.geographies = []


my issue is what to put in ??? when I only have the pk of the geography 
model.

this works: Geography.query.get(id) however this does not: Geography(id=id) 
as the latter tries to create a new Geography object leading to conflicts. 
The former seems "silly" as it requires an extra query to db to load the 
object even though all i need is the geography id to create the association 
object. I tried variation of session.merge with load=False however that 
doesn't work as the object is transient.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Remote side backpopulates

2018-08-21 Thread Alex Rothberg
Is there any way to declare a "remote side" backpopulates? i.e. where I 
declare a relationship on class A to appear only on class B? I would like 
the relationship only to be available on the remote class but I do not want 
to / cannot modify the code for the remote class.

For example:

class User(Model):
...

class Permission(Model):
user_active = relationship(User, backpopulates_remote='permissions')


where the user_active field does not get created on Permission but the 
permissions field does get created on User? This comes up since I end up 
writing:

class Permission(Model):
user = relationship(User)
user_active = relationship(User, backref='permissions', primary_join=...
is_active...)

and I do want the user field on Permission and the permissions field on 
User, but I do not want the user_active, as it is extraneous.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] sqlacodegen 2.0.0 released

2018-05-20 Thread Alex Grönholm
After a quiet period of 3 years, I've now made a new major release. This 
release fixes a huge number of bugs and supports the latest SQLAlchemy and 
latest Python versions as well. It also adds support for Geoalchemy2.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Guidance regarding nested session usage

2017-08-12 Thread alex
Thank you very much for the guidance Jonathan and Mike. I've implemented 
nesting counting on my context manager and turned off autocommit and 
subtransactions. It looks like it's working well! 

Alex 

On Wednesday, August 9, 2017 at 5:14:09 PM UTC+1, al...@withplum.com wrote:
>
> Hey,
>
> I'd like some help regarding nested session usage please.
>
> I'm working on an application that has an API layer but also has a lot of 
> cron jobs (via Celery) and scripts. I'm trying to design the app in a way 
> that my "business" logic is contained and re-usable by any of these 
> interfaces. 
>
> The SQLAlchemy session scope is request/task-wide (i.e requests and tasks 
> remove the scoped session at the end) but I am doing explicit commits 
> instead of committing on request end because I sometimes have to deal with 
> complicated logic like creating/submitting transactions to payment 
> processors etc. 
>
> To start off, I use a context manager, much like the docs, which commits 
> or rollbacks as necessary. I then have a layer of actions, which are 
> considered "top-level" functions that can do a simple operation e.g update 
> something or a collection of operations i.e create and submit a 
> transaction. These actions use the context manager above to persist stuff 
> and I've opted to keep all session "usage" in these actions alone and 
> nowhere else in the code. Pretty soon, the need to use some of the simpler 
> actions inside other, bigger actions arose which, after reading the docs, 
> led me to turn autocommit=True and use session.begin(subtransactions=True). 
> Note that I don't want to use savepoints, I just want to be able to use my 
> actions inside other actions. The docs recommend that expire_on_commit is 
> set to False with autocommit, which I've done but that led to a couple of 
> situations where I was operating on out-of-date data hence I want to turn 
> expire_on_commit to True again. 
>
> My questions:
>
> (1) Does my application layout make sense from a SQLAlchemy perspective? 
> (2) What is the problem with expire_on_commit=True and autocommit=True?
> (3) I feel that, even with the context manager, the transaction boundaries 
> are still blurry because the developer does not know what will actually get 
> committed in the database. For example, if a previous part of the code 
> changed something, then called an action that commits the session, the 
> previous change will get committed as well. I've searched around and found 
> this: https://github.com/mitsuhiko/flask-sqlalchemy/pull/447 which 
> basically issues a rollback on entering the context manager to ensure that 
> only what is within the context manager will get committed. What do you 
> think of it? I can immediately see a problem where if I query for an object 
> before passing it to an action, then use the context manager, all the work 
> done on querying is lost since the object state is expired on rollback. 
>
> I'd appreciate any advice/input.
>
> Best,
> Alex
>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Guidance regarding nested session usage

2017-08-09 Thread alex
Hey,

I'd like some help regarding nested session usage please.

I'm working on an application that has an API layer but also has a lot of 
cron jobs (via Celery) and scripts. I'm trying to design the app in a way 
that my "business" logic is contained and re-usable by any of these 
interfaces. 

The SQLAlchemy session scope is request/task-wide (i.e requests and tasks 
remove the scoped session at the end) but I am doing explicit commits 
instead of committing on request end because I sometimes have to deal with 
complicated logic like creating/submitting transactions to payment 
processors etc. 

To start off, I use a context manager, much like the docs, which commits or 
rollbacks as necessary. I then have a layer of actions, which are 
considered "top-level" functions that can do a simple operation e.g update 
something or a collection of operations i.e create and submit a 
transaction. These actions use the context manager above to persist stuff 
and I've opted to keep all session "usage" in these actions alone and 
nowhere else in the code. Pretty soon, the need to use some of the simpler 
actions inside other, bigger actions arose which, after reading the docs, 
led me to turn autocommit=True and use session.begin(subtransactions=True). 
Note that I don't want to use savepoints, I just want to be able to use my 
actions inside other actions. The docs recommend that expire_on_commit is 
set to False with autocommit, which I've done but that led to a couple of 
situations where I was operating on out-of-date data hence I want to turn 
expire_on_commit to True again. 

My questions:

(1) Does my application layout make sense from a SQLAlchemy perspective? 
(2) What is the problem with expire_on_commit=True and autocommit=True?
(3) I feel that, even with the context manager, the transaction boundaries 
are still blurry because the developer does not know what will actually get 
committed in the database. For example, if a previous part of the code 
changed something, then called an action that commits the session, the 
previous change will get committed as well. I've searched around and found 
this: https://github.com/mitsuhiko/flask-sqlalchemy/pull/447 which 
basically issues a rollback on entering the context manager to ensure that 
only what is within the context manager will get committed. What do you 
think of it? I can immediately see a problem where if I query for an object 
before passing it to an action, then use the context manager, all the work 
done on querying is lost since the object state is expired on rollback. 

I'd appreciate any advice/input.

Best,
Alex

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Composite column with null property

2017-05-11 Thread alex
Hello,

I have a Money composite column, comprised of an `amount` (Decimal) and a 
`currency` (String). Sometimes the amount needs to be NULL, but then I get 
an instance of Money(None, 'GBP'). Is there any way to force the composite 
to return None in this case?

Thanks,
Alex

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Custom secondary relation with composite primary keys

2017-05-04 Thread Alex Plugaru
It worked! Thanks a lot!

On Friday, 28 April 2017 18:49:40 UTC-7, Alex Plugaru wrote:
>
> Hello, 
>
> There are 3 tables: `*Account*`, `*Role*`, `*User*`. Both `*Role*` and `
> *User*` have a foreign key `*account_id*` that points to `*Account*`.
>
> A user can have multiple roles, hence the `*roles_users*` table which 
> acts as the secondary relation table between `*Role*` and `*User*`.
>
> The `*Account*` table is a tenant table for our app, it is used to 
> separate different customers.
>
> Note that all tables have (besides `*Account*`) have composite primary 
> keys with `*account_id*`. This is done for a few reasons, but let's say 
> it's done to keep everything consistent.
>
> Now if I have a simple secondary relationship (`*User.roles*` - the one 
> that is commented out) all works as expected. Well kind of.. it throws a 
> legitimate warning (though I believe it should be an error):
>
>
> SAWarning: relationship 'User.roles' will copy column role.account_id to 
> column roles_users.account_id, which conflicts with relationship(s): 
> 'User.roles' (copies user.account_id to roles_users.account_id). Consider 
> applying viewonly=True to read-only relationships, or provide a 
> primaryjoin condition marking writable columns with the foreign() 
> annotation.
>
> That's why I created the second relation `*User.roles*` - the one that is 
> not commented out. Querying works as expected which has 2 conditions on 
> join and everything. However I get this error when I try to save some roles 
> on the user:
>
> sqlalchemy.orm.exc.UnmappedColumnError: Can't execute sync rule for 
> source column 'roles_users.role_id'; mapper 'Mapper|User|user' does not 
> map this column.  Try using an explicit `foreign_keys` collection which 
> does not include destination column 'role.id' (or use a viewonly=True 
> relation).
>
>
> As far as I understand it, SA is not able to figure out how to save the 
> secondary because it has a custom `*primaryjoin*` and `*secondaryjoin*` 
> so it proposes to use `*viewonly=True*` which has the effect of just 
> ignoring the roles relation when saving the model.
>
> The question is how to save the roles for a user without having to do it 
> by hand (the example is commented out in the code). In the real app we have 
> many secondary relationships and we're saving them in many places. It would 
> be super hard to rewrite them all.
>
> Is there a solution to keep using `*User.roles = some_roles*` while 
> keeping the custom `*primaryjoin*` and `*secondaryjoin*` below?
>
> The full example using SA 1.1.9:
>
>
> from sqlalchemy import create_engine, Column, Integer, Text, Table, 
> ForeignKeyConstraint, ForeignKey, and_
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import foreign, relationship, Session
>
>
> Base = declarative_base()
>
>
>
>
> class Account(Base):
> __tablename__ = 'account'
> id = Column(Integer, primary_key=True)
>
>
>
>
> roles_users = Table(
> 'roles_users', Base.metadata,
> Column('account_id', Integer, primary_key=True),
> Column('user_id', Integer, primary_key=True),
> Column('role_id', Integer, primary_key=True),
>
>
> ForeignKeyConstraint(['user_id', 'account_id'], ['user.id', 
> 'user.account_id']),
> ForeignKeyConstraint(['role_id', 'account_id'], ['role.id', 
> 'role.account_id']),
> )
>
>
>
>
> class Role(Base):
> __tablename__ = 'role'
> id = Column(Integer, primary_key=True)
> account_id = Column(Integer, ForeignKey('account.id'), primary_key=
> True)
> name = Column(Text)
>
>
> def __str__(self):
> return ''.format(self.id, self.name)
>
>
>
>
> class User(Base):
> __tablename__ = 'user'
> id = Column(Integer, primary_key=True)
> account_id = Column(Integer, ForeignKey('account.id'), primary_key=
> True)
> name = Column(Text)
>
>
> # This works as expected: It saves data in roles_users
> # roles = relationship(Role, secondary=roles_users)
>
>
> # This custom relationship - does not work
> roles = relationship(
> Role,
> secondary=roles_users,
> primaryjoin=and_(foreign(Role.id) == roles_users.c.role_id,
>  Role.account_id == roles_users.c.account_id),
> secondaryjoin=and_(foreign(id) == roles_users.c.user_id,
>account_id == roles_users.c.account_id))
>
>
>
>
> engine = create_engine('sqlite:///')
> engine.echo = True
> Base.metadata.create_all(engine)
> session = Session(engine)
>
>
> # Create our account
> a = Account()
> session.add(a)
&g

[sqlalchemy] Re: Custom secondary relation with composite primary keys

2017-05-04 Thread Alex Plugaru
Hi Mike,

Thanks! I followed your advice and indeed it does work as expected. However 
I still get this warning:

SAWarning: relationship 'User.roles' will copy column role.account_id to 
column roles_users.account_id, which conflicts with relationship(s): 
'User.roles' (copies user.account_id to roles_users.account_id). Consider 
applying viewonly=True to read-only relationships, or provide a primaryjoin 
condition marking writable columns with the foreign() annotation.


I have many m2m tables and there is a huge output of these warnings every 
time which is super annoying. Is there a way to tell SA not to complain 
about this and only this? I would still like to see other warnings.

Again the full code:

from sqlalchemy import create_engine, Column, Integer, Text, Table, 
ForeignKeyConstraint, ForeignKey, and_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import foreign, relationship, Session, joinedload, 
remote


Base = declarative_base()




class Account(Base):
__tablename__ = 'account'
id = Column(Integer, primary_key=True)




roles_users = Table(
'roles_users', Base.metadata,
Column('account_id', Integer, primary_key=True),
Column('user_id', Integer, primary_key=True),
Column('role_id', Integer, primary_key=True),


ForeignKeyConstraint(
['user_id', 'account_id'],
['user.id', 'user.account_id']),
ForeignKeyConstraint(
['role_id', 'account_id'],
['role.id', 'role.account_id']),
)




class Role(Base):
__tablename__ = 'role'
id = Column(Integer, primary_key=True)
account_id = Column(Integer, ForeignKey('account.id'), primary_key=True)
name = Column(Text)


def __str__(self):
return ''.format(self.id, self.name)




class User(Base):
__tablename__ = 'user'
id = Column(Integer, primary_key=True)
account_id = Column(Integer, ForeignKey('account.id'), primary_key=True)
name = Column(Text)


# This works as expected: It saves data in roles_users
# roles = relationship(Role, secondary=roles_users)


# This custom relationship - does not work
roles = relationship(
Role,
secondary=roles_users,
primaryjoin=and_(id == roles_users.c.user_id,
 account_id == roles_users.c.account_id),
secondaryjoin=and_(Role.id == roles_users.c.role_id,
   Role.account_id == roles_users.c.account_id))




engine = create_engine('sqlite://')
# engine.echo = True
Base.metadata.create_all(engine)
session = Session(engine)


# Create our account
a1 = Account()
a2 = Account()
session.add(a1)
session.add(a2)
session.commit()


# Create roles
u_role = Role()
u_role.id = 1
u_role.account_id = a1.id
u_role.name = 'user'
session.add(u_role)


m_role = Role()
m_role.id = 2
m_role.account_id = a1.id
m_role.name = 'member'
session.add(m_role)


a2_role = Role()
a2_role.id = 3
a2_role.account_id = a2.id
a2_role.name = 'member'
session.add(a2_role)
session.commit()


# Create 1 user
u = User()
u.id = 1
u.account_id = a1.id
u.name = 'user'


# This does not work
u.roles = [u_role, m_role, a2_role]
session.add(u)
session.commit()


# Works as expected
# i = roles_users.insert()
# i = i.values([
# dict(account_id=a.id, role_id=u_role.id, user_id=u.id),
# dict(account_id=a.id, role_id=m_role.id, user_id=u.id),
# ])
# session.execute(i)


# re-fetch user from db
u = session.query(User).options(joinedload('roles')).first()
for r in u.roles:
print(r)


Thank you!
Alex.

On Friday, 28 April 2017 18:49:40 UTC-7, Alex Plugaru wrote:
>
> Hello, 
>
> There are 3 tables: `*Account*`, `*Role*`, `*User*`. Both `*Role*` and `
> *User*` have a foreign key `*account_id*` that points to `*Account*`.
>
> A user can have multiple roles, hence the `*roles_users*` table which 
> acts as the secondary relation table between `*Role*` and `*User*`.
>
> The `*Account*` table is a tenant table for our app, it is used to 
> separate different customers.
>
> Note that all tables have (besides `*Account*`) have composite primary 
> keys with `*account_id*`. This is done for a few reasons, but let's say 
> it's done to keep everything consistent.
>
> Now if I have a simple secondary relationship (`*User.roles*` - the one 
> that is commented out) all works as expected. Well kind of.. it throws a 
> legitimate warning (though I believe it should be an error):
>
>
> SAWarning: relationship 'User.roles' will copy column role.account_id to 
> column roles_users.account_id, which conflicts with relationship(s): 
> 'User.roles' (copies user.account_id to roles_users.account_id). Consider 
> applying viewonly=True to read-only relationships, or provide a 
> primaryjoin condition marking writable columns with the foreign() 
> annotation.
>
> That's why I created the second relation `*User.roles*` - the one that is 
> not commented out. Querying wo

[sqlalchemy] Custom secondary relation with composite primary keys

2017-04-28 Thread Alex Plugaru
, but I haven't gotten a response there yet 
so trying here too: 
https://stackoverflow.com/questions/43690944/sqalchemy-custom-secondary-relation-with-composite-primary-keys
Hope it's ok.


Thank you for your help,
Alex.


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Multi-table deletes with PostgreSQL

2016-09-16 Thread Alex Grönholm


I'm attempting to do a multi-table delete against PostgreSQL (psycopg2) with 
the following query:


session.query(ProductionItem).\
filter(Project.id == ProductionItem.project_id,

   Project.code.in_(projects),

   ProductionItem.external_id.is_(None)).\

delete(synchronize_session=False)


But it produces incorrect SQL. PostgreSQL requires the following syntax for 
this query:


DELETE FROM production_items USING projects WHERE 
production_items.project_id = project.id AND project.code IN (...) AND 
production_items.external_id IS NONE


Instead, I get this:


DELETE FROM production_items WHERE production_items.project_id = project.id 
AND project.code IN (...) AND production_items.external_id IS NONE


At which point PG complains:


sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) missing 
FROM-clause entry for table "projects"


>From initial research this seems like a missing feature. Would it be 
possible to add this to the postgresql dialect somehow? I might be willing 
to contribute the code in that case.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Rolling back the session in a context manager

2016-07-16 Thread Alex Grönholm

15.07.2016, 16:55, Mike Bayer kirjoitti:



On 07/15/2016 07:49 AM, Alex Grönholm wrote:

The documentation provides the following example snippet for using
sessions within a context manager:


so, back when I started putting "examples" in those docs, the idea was 
like, "hey, here's an *example*.  The Python programmer is free to do 
whatever they wish with these examples, and adjust as necessary".


That is, the reason something is an example and not a feature is, "you 
don't have to do it this way!  do it however you want".


That there's been a trend recently of examples being used as is, but 
then when the example lacks some feature they result in bug reports 
against the library itself (not this case, but a different case 
recently comes to mind), is sadly the opposite of what i had 
intended.  Of course examples can be modified to be reasonable, however.
The question here was raised in part by someone on IRC using the example 
code verbatim, and in part by myself having come up with nearly 
identical code – only with the "except" block missing. I am having odd 
random issues with all sessions randomly ending up in a partial rollback 
state and I can't figure out why. Restarting the application corrects 
the problem and it may not surface again for a couple weeks, so it's 
extremely difficult to debug. That's why I'm asking if I'm missing 
something important by leaving out the rollback() in the teardown phase.






@contextmanager
def session_scope():
"""Provide a transactional scope around a series of operations."""
session = Session()
try:
yield session
session.commit()
except:
session.rollback()
raise
finally:
session.close()

I've been wondering why there is an except: block there. Shouldn't
session.close() be enough? At least according to the documentation, the
active transaction is rolled back by default when the connection is
returned to the pool.



that is correct.  However .close() does not reset the state of the 
objects managed by the Session to be "expired", which arguably is 
necessary because without the transaction, you now have no idea what 
the state of the object's corresponding rows in the database are (this 
is what the whole "SQLAlchemy Session: In Depth" talk is about).


In reality, the above context manager is probably not that useful 
because it bundles the lifespan of the Session and the lifespan of a 
transaction together, and IMO an application should be more thoughtful 
than that.
Yep – I have similar code adapted to an "extension" where the teardown 
phase is run after the current RPC request is done.





This snippet has a second potential problem: what if the transaction is
in a bad state when exiting the block? Shouldn't session.commit() be
skipped then?


it's assumed that if anything is in "a bad state" then an exception 
would have been raised, you'd not reach commit().


Otherwise, if the idea is, "I'm using this context manager, but I'm 
not sure I want to commit at the end even though nothing was raised", 
well then this is not the context manager for you :). The example of 
contextmanagers for things like writing files and such sets up the 
convention of, "open resource, flush out all changes at the end if no 
exceptions".   That's what people usually want.


Yes, committing at the end by default is reasonable. But that wasn't 
what my question was about.


Like, if not session.is_active: session.commit()? Let's

say the user code catches IntegrityError but doesn't roll back.


if it doesn't re-raise, then we'd hit the commit() and that would 
probably fail also (depending on backend).  I don't see how that's 
different from:


with open("important_file.txt", "w") as handle:
handle.write("important thing #1")
handle.write("important thing #2")
try:
 important_thing_number_three = calculate_special_thing()
 handle.write(important_thing_number_three)
except TerribleException:
 log.info("oh crap! someone should fix this someday.")
handle.write("important thing #4")
I was thinking of a situation where the code doesn't use the session at 
all after catching the IntegrityError.







The

example code will then raise an exception when it tries to commit the
session transaction. Am I missing something?


On the better backends like Postgresql, it would.

If there's a use case you're looking for here, e.g. catch an 
IntegrityError but not leave the transaction, that's what savepoints 
are for.   There should be examples there.
No, my point was that if I catch the IntegrityError and don't raise 
anything from that, I don't intend to raise any exception afterwards 
either. In which case commit should not be attempted at all.




Now, if someone on IRC is using savepoi

[sqlalchemy] Rolling back the session in a context manager

2016-07-15 Thread Alex Grönholm
The documentation provides the following example snippet for using sessions 
within a context manager:

@contextmanagerdef session_scope():
"""Provide a transactional scope around a series of operations."""
session = Session()
try:
yield session
session.commit()
except:
session.rollback()
raise
finally:
session.close()

I've been wondering why there is an except: block there. Shouldn't 
session.close() be enough? At least according to the documentation, the 
active transaction is rolled back by default when the connection is 
returned to the pool.
This snippet has a second potential problem: what if the transaction is in 
a bad state when exiting the block? Shouldn't session.commit() be skipped 
then? Like, if not session.is_active: session.commit()? Let's say the user 
code catches IntegrityError but doesn't roll back. The example code will 
then raise an exception when it tries to commit the session transaction. Am 
I missing something?

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] SQLAlchemy 1.0.12 use inner JOIN instead of LEFT OUTER JOIN

2016-04-26 Thread Alex Dev
Thank you for your quick answer Mike.

Le mardi 26 avril 2016 00:28:10 UTC+2, Mike Bayer a écrit :
>
>
>
> well this usage above is wrong.  You can't have contains_eager() and 
> joinedload() along the same paths at the same time like that.   Also, 
> chaining joinedload() from contains_eager() is not a very typical thing 
> to do, it works, but joinedload() does not coordinate with 
> contains_eager() in any way, and it has no idea that you are using 
> outerjoin() to the left of it. 
>
> The two correct ways to do this are: 
>
>  print session.query(Plant).\ 
>  join(Plant.plant_dimensionsseries).\ 
>   
> options(contains_eager(Plant.plant_dimensionsseries).joinedload(PlantDimensionsseries.data_computed,
>  
>
> )) 
>

I cannot use a inner JOIN between plant and plant_dimensionsseries because 
a plant does not necessarily have any corresponding plant_dimensionsseries 
so give me different result. 
 

>
> and 
>
>  print session.query(Plant).\ 
>  outerjoin(Plant.plant_dimensionsseries).\ 
>   
> options(contains_eager(Plant.plant_dimensionsseries).joinedload(PlantDimensionsseries.data_computed,
>  
>
> innerjoin=False)) 
>

>
> or of course: 
>
>
>  print session.query(Plant).\ 
>  outerjoin(Plant.plant_dimensionsseries).\ 
>   
> options(joinedload(Plant.plant_dimensionsseries).joinedload(PlantDimensionsseries.data_computed))
>  
>
>
 
These two work. However I think that I cannot do that in my real query 
because it is a bit more complex. For the sake of simplifying my question, 
I reduced the query to the minimum but I realize it is now hard so see why 
I was using contains_eager or joinedload in different parts of the query. 
Indeed, I need to do more joins in the real query (see below).

I can have the expected result by adding the innerjoin=False but you wrote 
that this is a wrong usage so I wonder what a correct usage would be in my 
real case.

# -*- coding: utf-8 -*-
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import *


Base = declarative_base()

_plant_table = Table('plant', Base.metadata,
Column('id', Integer, primary_key=True)
)

_dimensionsseriestype_table = Table('dimensionsseriestype', Base.metadata,
Column('id', Integer, primary_key=True),
Column('sortindex', Integer),
Column('designation', String),
)

_plant_dimensionsseries_table = Table('plant_dimensionsseries', Base.
metadata,
Column('plant_id', Integer, primary_key=True),
Column('dimensionsseriestype_id', Integer, primary_key=True),
ForeignKeyConstraint(['plant_id'], ['plant.id']),
ForeignKeyConstraint(['dimensionsseriestype_id'], [
'dimensionsseriestype.id']),
)

_view_plant_dimensionsseries_table = Table('view_plant_dimensionsseries', 
Base.metadata,
Column('plant_id', Integer, primary_key=True),
Column('dimensionsseriestype_id', Integer, primary_key=True),
ForeignKeyConstraint(
['plant_id', 'dimensionsseriestype_id'],
['plant_dimensionsseries.plant_id', 
'plant_dimensionsseries.dimensionsseriestype_id'])
)

class Plant(Base):
__table__ = _plant_table

class Dimensionsseriestype(Base):
__table__ = _dimensionsseriestype_table
_id = __table__.c.id

class PlantDimensionsseries(Base):
__table__ = _plant_dimensionsseries_table
_plant_id = __table__.c.plant_id
_dimensionsseriestype_id = __table__.c.dimensionsseriestype_id

plant = relationship('Plant',
innerjoin=True,
backref=backref('plant_dimensionsseries'))

dimensionsseriestype = relationship('Dimensionsseriestype',
innerjoin=True,
backref=backref('plant_dimensionsseries'))

class PlantDimensionsseriesDataComputed(Base):
__table__ = _view_plant_dimensionsseries_table
_plant_id = __table__.c.plant_id
_dimensionsseriestype_id = __table__.c.dimensionsseriestype_id

# One-to-one relationship
dimensionsseries = relationship('PlantDimensionsseries',
innerjoin=True,
backref=backref('data_computed',
innerjoin=True))

if __name__ == '__main__':

engine = create_engine(
'postgresql://nurseryacme_employee@localhost:5432/nurseryacme')
Session = sessionmaker(bind=engine)
session = Session()

# real query (almost)
print session.query(Plant).\
outerjoin(Plant.plant_dimensionsseries).\
outerjoin(Dimensionsseriestype, PlantDimensionsseries.
dimensionsseriestype).\
options(contains_eager(Plant.plant_dimensionsseries, 
PlantDimensionsseries.dimensionsseriestype)).\
options(joinedload(Plant.plant_dimensionsseries, 
PlantDimensionsseries.data_computed)).\
order_by(Dimensionsseriestype.sortindex)


-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, 

[sqlalchemy] SQLAlchemy 1.0.12 use inner JOIN instead of LEFT OUTER JOIN

2016-04-25 Thread Alex Dev
Hello,

I have a broken query when migrating from SQLAlchemy 0.9.4 to 1.0.12. It 
seems to be linked to a behavioral change in the ORM 
(http://docs.sqlalchemy.org/en/rel_1_0/changelog/migration_10.html#right-inner-join-nesting-now-the-default-for-joinedload-with-innerjoin-true)

Here is simplified version of the code:

# -*- coding: utf-8 -*-
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import *


Base = declarative_base()

_plant_table = Table('plant', Base.metadata,
Column('id', Integer, primary_key=True)
)

_plant_dimensionsseries_table = Table('plant_dimensionsseries', Base.
metadata,
Column('plant_id', Integer, primary_key=True),
Column('dimensionsseriestype_id', Integer, primary_key=True),
ForeignKeyConstraint(['plant_id'], ['plant.id'])
)

_view_plant_dimensionsseries_table = Table('view_plant_dimensionsseries', 
Base.metadata,
Column('plant_id', Integer, primary_key=True),
Column('dimensionsseriestype_id', Integer, primary_key=True),
ForeignKeyConstraint(
['plant_id', 'dimensionsseriestype_id'],
['plant_dimensionsseries.plant_id', 
'plant_dimensionsseries.dimensionsseriestype_id'])
)

class Plant(Base):
__table__ = _plant_table

class PlantDimensionsseries(Base):
__table__ = _plant_dimensionsseries_table
_plant_id = __table__.c.plant_id
_dimensionsseriestype_id = __table__.c.dimensionsseriestype_id

plant = relationship('Plant',
innerjoin=True,
backref=backref('plant_dimensionsseries'))

class PlantDimensionsseriesDataComputed(Base):
__table__ = _view_plant_dimensionsseries_table
_plant_id = __table__.c.plant_id
_dimensionsseriestype_id = __table__.c.dimensionsseriestype_id

# One-to-one relationship
dimensionsseries = relationship('PlantDimensionsseries',
innerjoin=True,
backref=backref('data_computed',
innerjoin=True))

if __name__ == '__main__':

engine = create_engine(
'postgresql://nurseryacme_employee@localhost:5432/nurseryacme')
Session = sessionmaker(bind=engine)
session = Session()

# query 1:
# SQLAlchemy 0.9.4: Correct SQL generated
# SQLAlchemy 1.0.12: Wrong SQL generated, a inner JOIN is used instead 
of a LEFT OUTER JOIN between plant_dimensionsseries and 
view_plant_dimensionsseries
print session.query(Plant).\
outerjoin(Plant.plant_dimensionsseries).\
options(contains_eager(Plant.plant_dimensionsseries)).\
options(joinedload(Plant.plant_dimensionsseries, 
PlantDimensionsseries.data_computed))

# query 2:
# SQLAlchemy 1.0.12: Correct SQL generated
print session.query(Plant).\
outerjoin(Plant.plant_dimensionsseries).\
options(contains_eager(Plant.plant_dimensionsseries)).\
options(joinedload(Plant.plant_dimensionsseries, 
PlantDimensionsseries.data_computed, innerjoin=False))


Result with SQLAlchemy 0.9.4:
# query 1
SELECT ...
FROM plant LEFT OUTER JOIN plant_dimensionsseries ON plant.id = 
plant_dimensionsseries.plant_id *LEFT OUTER JOIN* 
view_plant_dimensionsseries AS view_plant_dimensionsseries_1 ON 
plant_dimensionsseries.plant_id = view_plant_dimensionsseries_1.plant_id 
AND plant_dimensionsseries.dimensionsseriestype_id = 
view_plant_dimensionsseries_1.dimensionsseriestype_id
# query 2
SELECT ...
FROM plant LEFT OUTER JOIN plant_dimensionsseries ON plant.id = 
plant_dimensionsseries.plant_id *LEFT OUTER JOIN* 
view_plant_dimensionsseries AS view_plant_dimensionsseries_1 ON 
plant_dimensionsseries.plant_id = view_plant_dimensionsseries_1.plant_id 
AND plant_dimensionsseries.dimensionsseriestype_id = 
view_plant_dimensionsseries_1.dimensionsseriestype_id


Result with SQLAlchemy 1.0.12:
# query 1
SELECT ...
FROM plant LEFT OUTER JOIN plant_dimensionsseries ON plant.id = 
plant_dimensionsseries.plant_id *JOIN* view_plant_dimensionsseries AS 
view_plant_dimensionsseries_1 ON plant_dimensionsseries.plant_id = 
view_plant_dimensionsseries_1.plant_id AND 
plant_dimensionsseries.dimensionsseriestype_id 
= view_plant_dimensionsseries_1.dimensionsseriestype_id
# query 2
SELECT ...
FROM plant LEFT OUTER JOIN plant_dimensionsseries ON plant.id = 
plant_dimensionsseries.plant_id *LEFT OUTER JOIN* 
view_plant_dimensionsseries AS view_plant_dimensionsseries_1 ON 
plant_dimensionsseries.plant_id = view_plant_dimensionsseries_1.plant_id 
AND plant_dimensionsseries.dimensionsseriestype_id = 
view_plant_dimensionsseries_1.dimensionsseriestype_id

In query 1 under SQLAlchemy 1.0.12, the query discard many rows due the 
JOIN chained to the LEFT OUTER JOIN and this precisely what SQLAlchemy 
wanted to avoid if I refer to the description of "Right-nested inner joins 
available in joined eager loads" 
(http://docs.sqlalchemy.org/en/rel_1_0/changelog/migration_09.html#feature-2976).

I can fix the query 1 by adding the innerjoin=False as in query 

Re: [sqlalchemy] Modeling single FK to multiple tables

2016-03-28 Thread Alex Hall
That would certainly work. :) Would that offer any benefits over
pyodbc, since I wouldn't have the mapping (which was taking all the
time I was spending with SA)?

On 3/25/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
>
> On 03/25/2016 05:20 PM, Alex Hall wrote:
>> Hi all,
>> Since SA was proving to be difficult to get working, and I was
>> spending way more time just trying to get it working than I was
>> actually running queries and outputting the results, I thought I'd
>> give Pyodbc a shot. Within a couple days, the query was working
>> perfectly. I'll post it below, as I'd be curious how this could be
>> made easier using SA.
>
>
> like this:
>
> from sqlalchemy import create_engine
>
> engine = create_engine("mssql+pyodbc://dsn")
>
> result = engine.execute(itemsQuery)
>
>
>
>
>
>   I don't know that I'll use SA for this project
>> since it's working so well in Pyodbc, but I'm curious all the same.
>> So, no hurry on this, it's only for my own interest. Anyway, the query
>> I finally came up with is below. It returns multiple rows for items
>> that have attributes, but other than that it works great. A for loop
>> with a check on the current ID takes care of making multiple rows for
>> the same item into a single row (recall this is all going to a
>> spreadsheet).
>>
>>
>> itemsQuery = """
>>   select items.itm_id as itemID, items.itm_proddesc as itemTitle,
>> items.itm_num as itemNumber, items.itm_listprice1 as msrp,
>> items.itm_dftuom as itemUnitOfMeasure, items.itm_manufitem as
>> itemManufacturer, items.itm_vendornum as itemVendorNumber,
>>items.itm_weight as itemWeight, items.itm_width as itemWidth,
>> items.itm_length as itemLength, items.itm_height as itemHeight,
>>attachments.description as description,
>>imagePaths.imagePath1 as imagePath1, imagePaths.imagePath2 as
>> imagePath2, imagePaths.imagePath3 as imagePath3,
>>attributes.attributeName as attributeName, attributes.attributeValue
>> as attributeValue,
>>vendor.vendorName as vendorName
>>from (
>> select itm_id, itm_proddesc, itm_num, itm_vendornum,
>> itm_listprice1, itm_length, itm_width, itm_height, itm_weight,
>> itm_dftuom, itm_manufitem
>> from item
>> where itm_webflag <> 'Y' and itm_suspflag <> 'Y'
>>) items
>>left outer join (
>> select distinct attr_desc as attributeName, attr_value as
>> attributeValue, itm_id
>> from attributevalueassign
>> join attribute
>> on attribute.attr_id = attributevalueassign.attr_id
>> join attributevalue
>> on attributevalue.attr_value_id = attributevalueassign.attr_value_id
>> where attributevalueassign.itm_id = itm_id
>>) attributes
>> on attributes.itm_id = items.itm_id
>>left outer join (
>> select PVUS15 as vendorName, PVVNNO as vendorNumber, itm_id
>> from VENDR
>> join item on item.itm_id = itm_id
>>) vendor
>> on vendor.vendorNumber = items.itm_vendornum and vendor.itm_id =
>> items.itm_id
>>left outer join (
>>   select attach_text.att_text as description, itm_id
>>from assignment
>> join attachment on attachment.att_id = assignment.att_id
>> join attach_text on attach_text.att_id = assignment.att_id
>> where assignment.itm_id = itm_id
>>) attachments
>>on attachments.itm_id = items.itm_id
>>left outer join (
>> select attachment.att_path as imagePath1, attachment.att_path2 as
>> imagePath2, attachment.att_path3 as imagePath3, itm_id
>> from assignment
>> join attachment on attachment.att_id = assignment.att_id
>>) imagePaths
>>on imagePaths.itm_id = items.itm_id
>> """
>>
>>
>> On 3/21/16, Simon King <si...@simonking.org.uk> wrote:
>>> Can you extract your code into a single standalone script that
>>> demonstrates
>>> the problem? This should be possible even with automap; the script can
>>> start by creating just the tables that are involved in this problem
>>> (ideally in an in-memory sqlite db), then use automap to map classes to
>>> those tables.
>>>
>>> Simon
>>>
>>> On Mon, Mar 21, 2016 at 3:12 PM, Alex Hall <ah...@autodist.com> wrote:
>>>
>>>> Wow, thanks guys, especially for the sample code! I'm trying to use
>>>> the example (and fully understand it at the same time) but am running
>>>> into an error. This is the same error that made m

Re: [sqlalchemy] Modeling single FK to multiple tables

2016-03-25 Thread Alex Hall
Hi all,
Since SA was proving to be difficult to get working, and I was
spending way more time just trying to get it working than I was
actually running queries and outputting the results, I thought I'd
give Pyodbc a shot. Within a couple days, the query was working
perfectly. I'll post it below, as I'd be curious how this could be
made easier using SA. I don't know that I'll use SA for this project
since it's working so well in Pyodbc, but I'm curious all the same.
So, no hurry on this, it's only for my own interest. Anyway, the query
I finally came up with is below. It returns multiple rows for items
that have attributes, but other than that it works great. A for loop
with a check on the current ID takes care of making multiple rows for
the same item into a single row (recall this is all going to a
spreadsheet).


itemsQuery = """
 select items.itm_id as itemID, items.itm_proddesc as itemTitle,
items.itm_num as itemNumber, items.itm_listprice1 as msrp,
items.itm_dftuom as itemUnitOfMeasure, items.itm_manufitem as
itemManufacturer, items.itm_vendornum as itemVendorNumber,
  items.itm_weight as itemWeight, items.itm_width as itemWidth,
items.itm_length as itemLength, items.itm_height as itemHeight,
  attachments.description as description,
  imagePaths.imagePath1 as imagePath1, imagePaths.imagePath2 as
imagePath2, imagePaths.imagePath3 as imagePath3,
  attributes.attributeName as attributeName, attributes.attributeValue
as attributeValue,
  vendor.vendorName as vendorName
  from (
   select itm_id, itm_proddesc, itm_num, itm_vendornum,
itm_listprice1, itm_length, itm_width, itm_height, itm_weight,
itm_dftuom, itm_manufitem
   from item
   where itm_webflag <> 'Y' and itm_suspflag <> 'Y'
  ) items
  left outer join (
   select distinct attr_desc as attributeName, attr_value as
attributeValue, itm_id
   from attributevalueassign
   join attribute
   on attribute.attr_id = attributevalueassign.attr_id
   join attributevalue
   on attributevalue.attr_value_id = attributevalueassign.attr_value_id
   where attributevalueassign.itm_id = itm_id
  ) attributes
   on attributes.itm_id = items.itm_id
  left outer join (
   select PVUS15 as vendorName, PVVNNO as vendorNumber, itm_id
   from VENDR
   join item on item.itm_id = itm_id
  ) vendor
on vendor.vendorNumber = items.itm_vendornum and vendor.itm_id = items.itm_id
  left outer join (
 select attach_text.att_text as description, itm_id
  from assignment
   join attachment on attachment.att_id = assignment.att_id
   join attach_text on attach_text.att_id = assignment.att_id
   where assignment.itm_id = itm_id
  ) attachments
  on attachments.itm_id = items.itm_id
  left outer join (
   select attachment.att_path as imagePath1, attachment.att_path2 as
imagePath2, attachment.att_path3 as imagePath3, itm_id
   from assignment
   join attachment on attachment.att_id = assignment.att_id
  ) imagePaths
  on imagePaths.itm_id = items.itm_id
"""


On 3/21/16, Simon King <si...@simonking.org.uk> wrote:
> Can you extract your code into a single standalone script that demonstrates
> the problem? This should be possible even with automap; the script can
> start by creating just the tables that are involved in this problem
> (ideally in an in-memory sqlite db), then use automap to map classes to
> those tables.
>
> Simon
>
> On Mon, Mar 21, 2016 at 3:12 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Wow, thanks guys, especially for the sample code! I'm trying to use
>> the example (and fully understand it at the same time) but am running
>> into an error. This is the same error that made me look for a way
>> other than this last week.
>>
>> sqlalchemy.exc.InvalidRequestError: when initializing mapper
>> Mapper|assignmentTable|assignment, expression 'item' failed to to
>> locate an item (name 'item' is not defined). If this is a class name,
>> consider adding this relationship() to the
>>  class after both dependent classes
>> have been defined.
>>
>> This all starts from the line where my query begins:
>>
>> items = session.query(itemTable)\
>>
>> Again, I'm using automap. I put the class definitions in the same
>> place I put my vendor table definition last week, where it worked
>> perfectly. That's just after I set
>> base = automap_base()
>> but before I reflect anything. I can paste the full code if you want,
>> but it's pretty long.
>>
>> On 3/17/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>> >
>> >
>> > On 03/17/2016 03:11 PM, Alex Hall wrote:
>> >> Hello all,
>> >> It seems like I can't go a day without running into some kind of wall.
>> >> This one is a conceptual one regarding foreign keys. I have to somehow
>> >> get the same FK column in ta

Re: [sqlalchemy] Modeling single FK to multiple tables

2016-03-21 Thread Alex Hall
Wow, thanks guys, especially for the sample code! I'm trying to use
the example (and fully understand it at the same time) but am running
into an error. This is the same error that made me look for a way
other than this last week.

sqlalchemy.exc.InvalidRequestError: when initializing mapper
Mapper|assignmentTable|assignment, expression 'item' failed to to
locate an item (name 'item' is not defined). If this is a class name,
consider adding this relationship() to the
 class after both dependent classes
have been defined.

This all starts from the line where my query begins:

items = session.query(itemTable)\

Again, I'm using automap. I put the class definitions in the same
place I put my vendor table definition last week, where it worked
perfectly. That's just after I set
base = automap_base()
but before I reflect anything. I can paste the full code if you want,
but it's pretty long.

On 3/17/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
>
> On 03/17/2016 03:11 PM, Alex Hall wrote:
>> Hello all,
>> It seems like I can't go a day without running into some kind of wall.
>> This one is a conceptual one regarding foreign keys. I have to somehow
>> get the same FK column in table A pointing to IDs in tables B and C.
>
> So a real foreign key constraint is not capable of this.  Repurposing a
> single column to occasionally point to one table or another is a famous
> anti-pattern I've spoke of much (warning, this is *extremely* old, but
> the general idea still holds):
>
> http://techspot.zzzeek.org/2007/05/29/polymorphic-associations-with-sqlalchemy/
>
>
> I have an updated version of all the various "polymoprhic association"
> examples in SQLAlchemy itself at
> http://docs.sqlalchemy.org/en/rel_1_0/orm/examples.html#module-examples.generic_associations.
>
>   This includes the "single column pointing to multiple tables" hack, as
> well as three other versions of the same business object geometry which
> preserve relational integrity within the schema design.
>
>>
>> At one person's suggestion, I'm making classes for my tables, even
>> though I'm using automap. This is to let me stop doing a ton of joins,
>> making querying much easier... I hope! I'm defining all the foreign
>> keys between my tables manually. For instance:
>>
>> class item(base):
>>   __tablename__ = "item"
>>   itm_id = Column(Integer, primary_key=True)
>>   vendornum = Column(String, ForeignKey(VENDR.PVVNNO))
>>
>> class vendorTable(base):
>>   __tablename__ = "VENDR"
>>   PVVNNO = Column(String, primary_key=True)
>>
>> If I've understood correctly, I'll now be able to say
>> item.vendornum.vendor_full_name
>> to get the vendor's full name for any item.
>>
>> Here's the problem. Items have attachments, and attached text,
>> respectively held in attach and attach_text tables. Binding them to
>> items is a table called assignment. Assignment is pretty
>> straightforward, with an itm_id and an attachment id (att_id). The
>> trouble is that this att_id occurs in both attach and attach_text. I
>> can make att_id a foreign key to one table or the other, but I'm not
>> sure how to make it go to both tables.
>
> the "generic_fk" example illustrates a pattern for working with this.
>
> Getting this all to work with automap is another layer of complexity,
> you certainly want all of this part of it laid out before you reflect
> the rest of the database columns.
>
>
>>
>> class assignmentTable(base):
>>   __tablename__ = "assignment"
>>   itm_id = Column(Integer, ForeignKey(item.itm_id))
>>   #the following column has to point to attach_text.att_id AS WELL
>>att_id = Column(Integer, ForeignKey(attachment.att_id))
>>   seq_num = Column(Integer)
>>   asn_primary = Column(Integer, nullable=True)
>>
>> class attachmentTable(base):
>>   __tablename__ = "attachment"
>>   att_id = Column(Integer, primary_key=True)
>>
>> class attachmentTextTable(base):
>>   __tablename__ = "attach_text"
>>   att_id = Column(Integer, primary_key=True)
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Outer joins?

2016-03-19 Thread Alex Hall
That would be the simplest. Having something so inefficient just bugs me. :)

I'm using MSSQL, so limit() works. Would yield_per() help here, or is
that for something different? Even if it didn't help local memory, but
just kept the load on the DB server down, that would be good.

On 3/16/16, Christopher Lee  wrote:
> It sounds like you should just fire it up with the outer joins and watch
> memory on the box.  If it gets too high, or crashes entirely, then you can
> look into different approaches.  For example, you could keep the outer
> joins, but paginate your query so that it is only pulling a subset of the
> rows from your main table (but fully joining against the secondary
> tables).  Just one caveat... if you are using MySQL, then LIMIT and OFFSET
> are not your friends; you'll want to find a different pagination mechanism.
>
> On Wed, Mar 16, 2016 at 10:29 AM, Jonathan Vanasco 
> wrote:
>
>> We all inherit less-than-ideal situations.
>>
>> If this is running once a day and isn't impacting performance or other
>> work, I wouldn't really worry about the huge join matrix.  It sounds like
>> the current solution is "good enough".  In a few weeks or months you'll
>> be
>> better acquainted with SqlAlchemy and Sql in general and can revisit.
>>
>> In terms of your 15minute script: When you can use subqueries, filters
>> and
>> `load_only` for certain columns, your backend will generate a smaller
>> matrix and there will be a less data transferred "over the wire".
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] joins instead of filters remove attributes of results

2016-03-19 Thread Alex Hall
Hello all,
I'm running a different query than yesterday. Before, I had something like:

items = session.query(itemTable, attachmentTable, attachmentTextTable,
assignmentTable, attributeTable, attributeValueTable,
attributeValueAssignmentTable, vendorTable)\
.filter(attachmentTable.itm_id == itemTable.itm_id)\
#and so on, a bunch of .filter calls

Then, in the loop iterating over the results, I could do this:

for result in queryResults:
 itemID = result.item.itm_id

Now that I'm using a bunch of outer left joins, that code is suddenly
not working. I get an error when I say
result.item.itm_id
AttributeError: 'item' object has no attribute 'item'

The problem is that my query starts out with only one table passed to
session.query(), not all of them. Thus my result is of type 'item',
which is the table passed in. That would be okay, except that I need
to access values of other tables in the result, so even if I change
id = result.item.itm_id
to
id = result.itm_id
When I then say
description = result.attach_text.att_value
AttributeError: 'item' object has no attribute 'attach_text'

I know why it doesn't. What I don't know is how to get my query
results to hold all the information from all the tables, or how to
access it if they do already, but in a different way than before. My
new query is this:

items = session.query(itemTable)\
.outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)\
.outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.outerjoin(attachmentTextTable, assignmentTable.att_id ==
attachmentTextTable.att_id)\
.outerjoin(attributeValueAssignmentTable,
attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.outerjoin(attributeTable, attributeTable.attr_id ==
attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)\
.yield_per(1000)

I've also tried the same query, but with the first line changed to:
items = session.query(itemTable, attachmentTable, attachmentTextTable,
assignmentTable, attributeTable, attributeValueTable,
attributeValueAssignmentTable, vendorTable)\

The problem here is that, while result.item.* works as expected, other
tables don't. For instance, result.attach_text.att_value yields an
AttributeError, 'None' type object has no attribute att_value.
Clearly, the other tables are in the result, but they're all None. I
expected something like that, and only added them back in to see if it
might help, but since I call query().outerjoin() I didn't think it
would work.

I should note that I renamed most of the tables by assigning variables
to base.classes.tableName, which is why I'm using "itemTable" here,
but in getting attributes of results I use just "item". The 'item'
table is called 'item', but I assigned it to a variable called
'itemTable', just for clarity in the script.

Is there a way to access the values of a query like this? At the very
least, is there a way I can print out all the objects the result
object has, so I can work out what to do? Thanks for any help!

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Defining relationships (was: joins instead of filters remove attributes of results)

2016-03-18 Thread Alex Hall
If I define all the relationships as suggested, I could do
result.itm_id
or
result.attribute_value
and it would all work? Would I still need to specify, in my initial
query, things like
.filter(itemTable.itm_id = attachmentAssignmentTable.itm_id\
.filter(attachmentTable.att_id = attachmentAssignmentTable.att_id)

to get all attachments assigned to a given item? I'll read more about
this and play with it, but I wanted to ask here as well in case
someone sees that the design of this database will cause problems with
relationships.

On 3/16/16, Simon King <si...@simonking.org.uk> wrote:
> On Wed, Mar 16, 2016 at 1:07 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Hello all,
>> I'm running a different query than yesterday. Before, I had something
>> like:
>>
>> items = session.query(itemTable, attachmentTable, attachmentTextTable,
>> assignmentTable, attributeTable, attributeValueTable,
>> attributeValueAssignmentTable, vendorTable)\
>> .filter(attachmentTable.itm_id == itemTable.itm_id)\
>> #and so on, a bunch of .filter calls
>>
>> Then, in the loop iterating over the results, I could do this:
>>
>> for result in queryResults:
>>  itemID = result.item.itm_id
>>
>
> Because you wrote "session.query(itemTable, attachmentTable,
> attachmentTextTable)", the results that you get back from the query are
> like a tuple with 3 items corresponding to the 3 tables that you queries.
> result[0] would be the data from itemTable, result[1] is from
> attachmentTable, and result[2] is from attachmentTextTable. It also
> supports name-based access, which is why you were able to refer to
> "result.item" and "result.attach_text".
>
>
>>
>> Now that I'm using a bunch of outer left joins, that code is suddenly
>> not working. I get an error when I say
>> result.item.itm_id
>> AttributeError: 'item' object has no attribute 'item'
>>
>> The problem is that my query starts out with only one table passed to
>> session.query(), not all of them. Thus my result is of type 'item',
>> which is the table passed in. That would be okay, except that I need
>> to access values of other tables in the result, so even if I change
>> id = result.item.itm_id
>> to
>> id = result.itm_id
>> When I then say
>> description = result.attach_text.att_value
>> AttributeError: 'item' object has no attribute 'attach_text'
>>
>>
> First, it helps to be precise about your terminology. SQLAlchemy
> distinguishes between the object representing a table, and a class that you
> are mapping to that table. You've talked about using automapper in the
> past, so I assume you are passing a mapped class, not a table, to
> session.query().
>
> When you pass a single mapped class, the results you get back are instances
> of that class.
>
>
>
>> I know why it doesn't. What I don't know is how to get my query
>> results to hold all the information from all the tables, or how to
>> access it if they do already, but in a different way than before. My
>> new query is this:
>>
>> items = session.query(itemTable)\
>> .outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)\
>> .outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)\
>> .filter(assignmentTable.att_id == attachmentTable.att_id)\
>> .outerjoin(attachmentTextTable, assignmentTable.att_id ==
>> attachmentTextTable.att_id)\
>> .outerjoin(attributeValueAssignmentTable,
>> attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
>> .outerjoin(attributeTable, attributeTable.attr_id ==
>> attributeValueAssignmentTable.attr_id)\
>> .filter(attributeValueTable.attr_value_id ==
>> attributeValueAssignmentTable.attr_value_id)\
>> .yield_per(1000)
>>
>> I've also tried the same query, but with the first line changed to:
>> items = session.query(itemTable, attachmentTable, attachmentTextTable,
>> assignmentTable, attributeTable, attributeValueTable,
>> attributeValueAssignmentTable, vendorTable)\
>>
>> The problem here is that, while result.item.* works as expected, other
>> tables don't. For instance, result.attach_text.att_value yields an
>> AttributeError, 'None' type object has no attribute att_value.
>> Clearly, the other tables are in the result, but they're all None. I
>> expected something like that, and only added them back in to see if it
>> might help, but since I call query().outerjoin() I didn't think it
>> would work.
>>
>> I should note that I renamed most of the tables by assigning variables
>> to base.classes.tableName, which is why I'm using "itemTable&

Re: [sqlalchemy] Re: Outer joins?

2016-03-16 Thread Alex Hall

> On Mar 16, 2016, at 03:23, Jonathan Vanasco  wrote:
> 
> The database design you have is less than perfect.

I didn't make it, I came in long after it had been set up and now have to work 
with it. I can't re-do anything. They did it this way so that, for instance, a 
single attribute or attachment could apply to multiple items. If a thousand 
items are "large", and ten thousand have a "size" attribute, "size" and 'large" 
can both be written once instead of thousands of times. The problem is that it 
makes this hard to query, at least for someone not very experienced in DBA.

> The goal of having to reformat the relational DB into a CSV is less than 
> perfect.

The CSV is to give product details to our resellers in a format they can import 
automatically. As you say, flattening a relational database into what is 
essentially a single table isn't ideal, but it's what my work has always done 
so it's what I have to do. We have this running as a very convoluted SQL job I 
didn't write, with a ton of temporary tables, repeated code, and other fun 
things that make it hard to figure out what's going on. I know Python better 
than SQL, even if I have to learn SA and some DB concepts along the way, so 
this will be far easier to maintain once I get it working.

> 
> If I were you, I would think about 3 questions:
> 
> 1. How often do you have to run this?
Once a day or less. There's another script I'll eventually have to write that 
runs every fifteen minutes, but it has far less columns. Still, I'll meet to be 
able to grab items that lack related information, and the related information 
for items that have it.

> 2. Does it take too long?
No, I don't think it will. Even with the query that fails to get all items, it 
only takes 30 seconds total. I'm okay with it taking a few minutes.
> 3. Does it use up too much DB/Python memory?
That I don't know.
> 
> If this isn't a resource issue, and a 1x a day task... don't worry about it.  
> Let the non-optimal code run.
> 
> If you need to run this every 5 minutes, then I'd start to worry.
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com 
> .
> To post to this group, send email to sqlalchemy@googlegroups.com 
> .
> Visit this group at https://groups.google.com/group/sqlalchemy 
> .
> For more options, visit https://groups.google.com/d/optout 
> .

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Outer joins?

2016-03-15 Thread Alex Hall
Thanks guys. I'm using automap, but I'm not completely sure how much
that gives me for free. Yes, these tables are big, and the resulting
set would be worrying large (potentially 5*20, and that's without
the attributes and attachments, plus their assignment and values
tables). I've switched to left outerjoins, hoping that that will keep
things smaller.

My original query is below. This works fine, but only gets items that
have vendor, attributes, and attachments. Items may have some or none
of these, as I was just informed today. As you can probably see,
item.itm_id ties everything together. It's used as the key for
assignmentTable, which holds all the attachments associated with the
item and uses att_id to index into attachmentTable and
attachmentTextTable. A similar relationship exists for attributes.

old_items = session.query(itemTable, attachmentTable,
attachmentTextTable, assignmentTable, attributeTable,
attributeValueTable, attributeValueAssignmentTable, vendorTable)\
.filter(vendorTable.PVVNNO == itemTable.itm_vendornum)\
.filter(assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.filter(assignmentTable.att_id == attachmentTextTable.att_id)\
.filter(attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.filter(attributeTable.attr_id == attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)

My next thought was to break this down into multiple queries:

allItems = session.query(items)\
.filter(items.itm_webflag != 'N', items.itm_suspflag != 'Y')

itemVendors = allItems.query(vendorTable).filter(vendorTable.PVVNNO ==
itemTable.itm_vendornum)

attachments = allItems.query(assignmentTable, attachmentTable,
attachmentTextTable)\
.filter(assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.filter(assignmentTable.att_id == attachmentTextTable.att_id)\

attributes = allItems.query(attributeTable, attributeValueTable,
attributeValueAssignmentTable)\
.filter(attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.filter(attributeTable.attr_id == attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)

The problem was, I couldn't work out how to put them together. Given
some item ID, how would I access that item's attributes or attachments
without making tons of queries back to the database?

Currently, I'm using this query, which I haven't yet even tested:

items = session.query(itemTable)\
.outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)\
.outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.outerjoin(attachmentTextTable, assignmentTable.att_id ==
attachmentTextTable.att_id)\
.outerjoin(attributeValueAssignmentTable,
attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.outerjoin(attributeTable, attributeTable.attr_id ==
attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)


On 3/15/16, Christopher Lee  wrote:
> Note that if your items have a lot of attributes and attachments, an
> outer-join will return a multiplicatively-large result set.  That will get
> boiled down into a sane number of objects by the SqlAlchemy ORM, but your
> performance might be ugly in terms of I/O to your database, or the
> processing time it takes to allocate the entire result set.  If the related
> tables are small, then querying all the data in a single query can be a lot
> faster.
>
> Anyway, it would help to see some code and know if you are using just the
> Core, or if you are using the ORM and have relationships defined.  You can
> pretty easily force an outer join on a relationship by setting the eager
> loading argument to "joined".  If you are using queries directly, then
> Jonathan's suggestions above should get you where you need to go.
>
>
>
> On Tue, Mar 15, 2016 at 10:17 AM, Jonathan Vanasco 
> wrote:
>
>> The ORM has an `outerjoin` method on queries:
>>
>> http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.outerjoin
>>
>> You can also pass "isouter=True" to `join`
>>
>> http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.join
>>
>> The core supports an outerjoin in both variations as well:
>>
>> http://docs.sqlalchemy.org/en/latest/core/selectable.html?highlight=outer#sqlalchemy.sql.expression.Join.join
>>
>> http://docs.sqlalchemy.org/en/latest/core/selectable.html?highlight=outer#sqlalchemy.sql.expression.Join.outerjoin
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this 

[sqlalchemy] Outer joins?

2016-03-15 Thread Alex Hall
Hi all,
I need to pull data from a bunch of tables, and I *think* outer joins
are the way to do it. However, I can't find much on SA's support for
outer joins.

What I'm trying to do is pull all items from the Items table, as well
as associated attachments and attributes if an item is tied to either
one. An item may or may not have attributes (stored in another table),
and may or may not have attachments (stored in yet another table).
Getting items that have both is what I already have working, but no
one told me that items can lack attachments and attributes until
today. I'm trying to work out how to do this, especially given that an
item could have attributes but no attachments, or attachments but no
attributes.

This is why an outer join seems to make the most sense. That way, I
get all items, with their attributes and attachments in place. If the
item lacks either or both, though, those values will simply be null. I
can't use query.filter, because filtering will exclude items without
these extra bits of data. In fact, I use filtering right now, and I
get 24,000 results; I should get over 65,000. I think I want something
like:

select * from items
 left outer join attributes
 on attribute.item_id = items.item_id
 left outer join attachments
 on attachment.item_id = items.item_id
 where items.flag <> 'Y'

If I'm thinking about this right, that query will do the job. I'll get
items no matter what, so long as the item's flag is not 'Y', and if
the item has more data associated with it, that will come along as
well. Multiple rows per item (such as four rows for item 1 if item1
has 2 attachments and 2 attributes) aren't a problem. The loop I have
to save all this to a CSV file already handles repeated IDs and puts
the data where it needs to go. I hope I've explained this well enough.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Adding 'where' to query

2016-03-14 Thread Alex Hall
Thanks for the clarification. I'm suddenly getting no results at all
when I add this filter, but at least now I know I'm doing the syntax
right. Never a dull moment. :)

On 3/14/16, Jonathan Vanasco  wrote:
>
>>
>> .filter(t1.c1=='hello', and_(t3.c1=='world'))
>>
>
> The and_ Is wrong in this context.  Everything in `filter` is joined by
> "and" by default.  You just want:
>
> .filter(t1.c1=='hello', t3.c1=='world')
>
> `and_` is usually used in a nested condition, often under an `or_`.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Adding 'where' to query

2016-03-14 Thread Alex Hall
I think I got it. I've been using .filter() for only joins thus far,
so somehow had it in my head that it was only for joining. Of course,
.filter(t1.c1=='hello')
will work. I believe I'm using and_ correctly if I say:
.filter(t1.c1=='hello', and_(t3.c1=='world'))
I may have that and_ part wrong, but filter is the obvious solution to
most of my question.

On 3/14/16, Alex Hall <ah...@autodist.com> wrote:
> Hi all,
> I had a link that was a great intro to querying, but of course, I
> can't find it now. I need to add a couple conditions to my query. In
> SQL, it might look like this:
>
> select *
>  from t1 join t2 on t1.c1==t2.c1
> join t3 on t3.c1==t1.c1
> where t1.c1 = 'hello' and t3.c3 = 'world'
>
> The joins I have, through query.filter(). It's the 'where' at the end
> that I'm not certain about. I know I've read how to do this, but I
> can't find that page anywhere. I also don't want to make it more
> complex than it needs to be. For instance, using "select" and putting
> that back into "query" when I don't need to. I've tried adding this
> after the last call to filter():
> .where(item.itm_webflag != 'N', and_(item.itm_suspflag != 'Y'))\
> But of course, SA says that query has no attribute 'where'.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Adding 'where' to query

2016-03-14 Thread Alex Hall
Hi all,
I had a link that was a great intro to querying, but of course, I
can't find it now. I need to add a couple conditions to my query. In
SQL, it might look like this:

select *
 from t1 join t2 on t1.c1==t2.c1
join t3 on t3.c1==t1.c1
where t1.c1 = 'hello' and t3.c3 = 'world'

The joins I have, through query.filter(). It's the 'where' at the end
that I'm not certain about. I know I've read how to do this, but I
can't find that page anywhere. I also don't want to make it more
complex than it needs to be. For instance, using "select" and putting
that back into "query" when I don't need to. I've tried adding this
after the last call to filter():
.where(item.itm_webflag != 'N', and_(item.itm_suspflag != 'Y'))\
But of course, SA says that query has no attribute 'where'.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-14 Thread Alex Hall
That worked! Thank you so much for your patience. Part of it was the
code, and part of it turned out to be that I was still using
vendorTable = base.classes.VENDR

It didn't occur to me that my VENDR class had taken over that part, so
base.classes would no longer contain VENDR. When I saw your asserts,
it struck me that the last piece of the puzzle might be to set
ventorDable = VENDR, and that seems to be doing the job beautifully.
Thanks again!

On 3/14/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> metadata.reflect(..., extend_existing=True), here's a complete example
>
> from sqlalchemy import create_engine
>
> e = create_engine("mssql+pyodbc://scott:tiger@ms_2008", echo=True)
>
> with e.begin() as conn:
>
>  conn.execute("""
>  if not exists (select * from sysobjects where name='sometable'
> and xtype='U')
>  create table sometable (
>  id integer,
>  data varchar(20),
>  primary key (id)
>  )
>  """)
>
>  conn.execute("""
>  if not exists (select * from sysobjects where
> name='someothertable' and xtype='U')
>  create table someothertable (
>  id integer,
>  data varchar(20),
>  primary key (id)
>  )
>  """)
>
>  conn.execute("""
>  if not exists (select * from sysobjects where name='VENDR' and
> xtype='U')
>  create table [VENDR] (
>  [PVVNNO] integer,
>  [DATA] varchar(20)
>  )
>  """)
>
> from sqlalchemy.ext.automap import automap_base
> from sqlalchemy import MetaData, Column, String
> from sqlalchemy.orm import Session
>
> metadata = MetaData()
>
> desiredTables = ["sometable", "someothertable", "VENDR"]
> base = automap_base(metadata=metadata)
>
>
> class VENDR(base):
>  __tablename__ = "VENDR"
>  PVVNNO = Column(String, primary_key=True)
>
> metadata.reflect(e, only=desiredTables, extend_existing=True)
> assert 'VENDR' in metadata.tables
>
> base.prepare()
>
> assert VENDR.DATA
>
> sess = Session(e)
> print sess.query(VENDR).all()
>
>
>
>
> On 03/14/2016 10:21 AM, Alex Hall wrote:
>> I hate to say it, but... AttributeError: VENDR. I've moved different
>> lines all around, above and below the class definition, but nothing
>> I've tried works. The only change was when I put my declaration of
>> base below the class, and Python naturally said it didn't know what my
>> table class was inheriting from. I don't know why this is being such a
>> problem.
>>
>> On 3/14/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> oh.  try it like this:
>>>
>>> class VENDR(base):
>>>  __tablename__ = "VENDR"
>>>  PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>>>
>>>  __table_args__ = {"extend_existing": True}
>>>
>>> that tells reflection to add new data to this Table object even though
>>> it already exists.
>>>
>>>
>>> On 03/14/2016 09:24 AM, Alex Hall wrote:
>>>> Thanks for that. Somehow, I'm getting the same error as before--the
>>>> VENDR table isn't being reflected. Here's the entire snippet, from
>>>> engine to trying to get the table.
>>>>
>>>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>>>> %(username, password, dsn))
>>>> session = Session(engine)
>>>> metadata = sqlalchemy.MetaData()
>>>> desiredTables = ["item", "assignment", "attachment", "attach_text",
>>>> "attribute", "attributevalue", "VENDR", "attributevalueassign"]
>>>> base = automap_base(metadata=metadata)
>>>> #pause here to make a table, since VENDR lacks a PK
>>>> class VENDR(base):
>>>>__tablename__ = "VENDR"
>>>>PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>>>> #done. Anyway...
>>>> metadata.reflect(engine, only=desiredTables)
>>>> base.prepare()
>>>>
>>>> itemTable = base.classes.item
>>>> assignmentTable = base.classes.assignment
>>>> attachmentTable = base.classes.attachment
>>>> attachmentTextTable = base.classes.attach_text
>>>> attributeTable = base.classes.attribute
>>>> attributeValueTable = base.classes.attributevalue
>>>&g

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-14 Thread Alex Hall
I hate to say it, but... AttributeError: VENDR. I've moved different
lines all around, above and below the class definition, but nothing
I've tried works. The only change was when I put my declaration of
base below the class, and Python naturally said it didn't know what my
table class was inheriting from. I don't know why this is being such a
problem.

On 3/14/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> oh.  try it like this:
>
> class VENDR(base):
> __tablename__ = "VENDR"
> PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>
> __table_args__ = {"extend_existing": True}
>
> that tells reflection to add new data to this Table object even though
> it already exists.
>
>
> On 03/14/2016 09:24 AM, Alex Hall wrote:
>> Thanks for that. Somehow, I'm getting the same error as before--the
>> VENDR table isn't being reflected. Here's the entire snippet, from
>> engine to trying to get the table.
>>
>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>> %(username, password, dsn))
>> session = Session(engine)
>> metadata = sqlalchemy.MetaData()
>> desiredTables = ["item", "assignment", "attachment", "attach_text",
>> "attribute", "attributevalue", "VENDR", "attributevalueassign"]
>> base = automap_base(metadata=metadata)
>> #pause here to make a table, since VENDR lacks a PK
>> class VENDR(base):
>>   __tablename__ = "VENDR"
>>   PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>> #done. Anyway...
>> metadata.reflect(engine, only=desiredTables)
>> base.prepare()
>>
>> itemTable = base.classes.item
>> assignmentTable = base.classes.assignment
>> attachmentTable = base.classes.attachment
>> attachmentTextTable = base.classes.attach_text
>> attributeTable = base.classes.attribute
>> attributeValueTable = base.classes.attributevalue
>> attributeValueAssignmentTable = base.classes.attributevalueassign
>> vendorTable = base.classes.VENDR #AttributeError: VENDR
>>
>> I still don't quite see how base, metadata, and session all interact
>> to do what SA does, or I'd have a much easier time troubleshooting
>> this. I'm sure I just have something out of order, or some other
>> simple mistake.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> like this:
>>>
>>> class VENDR(MyAutomapBase):
>>>   __tablename__ = 'VENDR'
>>>
>>>   id = Column(Integer, primary_key=True)
>>>
>>> Above, the 'id' column name should match the column in the table that
>>> you'd like to consider as the primary key (and so should the type) - the
>>> "id" / "Integer" combination above is just an example.
>>>
>>> Then do the automap as you've done.   At the end, if it worked,
>>> Base.classes.VENDR should be the same class as the VENDR class above.
>>>
>>>
>>> On 03/11/2016 05:09 PM, Alex Hall wrote:
>>>> Sorry, do you mean the base subclass, or a new table class? In either
>>>> case, I'm not sure I see how this will fit into my automapping code. I
>>>> know this is all fairly basic, I just can't quite picture what goes
>>>> where and what inherits from/gets passed to what to make it automap
>>>> this VENDR table. If I could, I'd just add a PK column to the table
>>>> itself. Sadly, I can't change that kind of thing, only query it.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>> just make the class and include the PK column, then automap.  the rest
>>>>> of the columns should be filled in.
>>>>>
>>>>>
>>>>> On 03/11/2016 04:14 PM, Alex Hall wrote:
>>>>>> Ah, you're right. Every other table I've used in this database has
>>>>>> had
>>>>>> a key, and I didn't even notice that this VENDR table lacks one. That
>>>>>> explains the mystery! Thanks.
>>>>>>
>>>>>> Now to map this table. I've read the section of the docs on doing
>>>>>> this, and I get that I subclass base, set __table__ to be my VENDR
>>>>>> table, then set the key in my subclass. My question is how I access
>>>>>> the table, given that I can't automap it first. That is, if I can't
>>>>>> map the table because it has no PK, to what do I set __table__ in the
>>>>>> subclass that will let m

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-14 Thread Alex Hall
Thanks for that. Somehow, I'm getting the same error as before--the
VENDR table isn't being reflected. Here's the entire snippet, from
engine to trying to get the table.

engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
%(username, password, dsn))
session = Session(engine)
metadata = sqlalchemy.MetaData()
desiredTables = ["item", "assignment", "attachment", "attach_text",
"attribute", "attributevalue", "VENDR", "attributevalueassign"]
base = automap_base(metadata=metadata)
#pause here to make a table, since VENDR lacks a PK
class VENDR(base):
 __tablename__ = "VENDR"
 PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
#done. Anyway...
metadata.reflect(engine, only=desiredTables)
base.prepare()

itemTable = base.classes.item
assignmentTable = base.classes.assignment
attachmentTable = base.classes.attachment
attachmentTextTable = base.classes.attach_text
attributeTable = base.classes.attribute
attributeValueTable = base.classes.attributevalue
attributeValueAssignmentTable = base.classes.attributevalueassign
vendorTable = base.classes.VENDR #AttributeError: VENDR

I still don't quite see how base, metadata, and session all interact
to do what SA does, or I'd have a much easier time troubleshooting
this. I'm sure I just have something out of order, or some other
simple mistake.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> like this:
>
> class VENDR(MyAutomapBase):
>  __tablename__ = 'VENDR'
>
>  id = Column(Integer, primary_key=True)
>
> Above, the 'id' column name should match the column in the table that
> you'd like to consider as the primary key (and so should the type) - the
> "id" / "Integer" combination above is just an example.
>
> Then do the automap as you've done.   At the end, if it worked,
> Base.classes.VENDR should be the same class as the VENDR class above.
>
>
> On 03/11/2016 05:09 PM, Alex Hall wrote:
>> Sorry, do you mean the base subclass, or a new table class? In either
>> case, I'm not sure I see how this will fit into my automapping code. I
>> know this is all fairly basic, I just can't quite picture what goes
>> where and what inherits from/gets passed to what to make it automap
>> this VENDR table. If I could, I'd just add a PK column to the table
>> itself. Sadly, I can't change that kind of thing, only query it.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> just make the class and include the PK column, then automap.  the rest
>>> of the columns should be filled in.
>>>
>>>
>>> On 03/11/2016 04:14 PM, Alex Hall wrote:
>>>> Ah, you're right. Every other table I've used in this database has had
>>>> a key, and I didn't even notice that this VENDR table lacks one. That
>>>> explains the mystery! Thanks.
>>>>
>>>> Now to map this table. I've read the section of the docs on doing
>>>> this, and I get that I subclass base, set __table__ to be my VENDR
>>>> table, then set the key in my subclass. My question is how I access
>>>> the table, given that I can't automap it first. That is, if I can't
>>>> map the table because it has no PK, to what do I set __table__ in the
>>>> subclass that will let me map the table?
>>>>
>>>> One post I found suggested something like this:
>>>>
>>>> vendorTable = Table("VENDR", metadata, column("PVVNNO",
>>>> primary_key=True))
>>>>
>>>> I'm guessing I'd have to add the column definitions for the other
>>>> columns if I did that. I'm further guessing that this replaces the
>>>> docs' method of subclassing, since the PK is now set. However, I don't
>>>> know if this would still work with automapping.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>> ah.  does VENDR have a primary key?   it won't be mapped if not.
>>>>>
>>>>> what's in base.classes.keys() ?   base.classes['VENDR'] ?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 03/11/2016 12:47 PM, Alex Hall wrote:
>>>>>> VENDR is right there, in base.classes and metadata.tables. Yet,
>>>>>> vendorTable = base.classes.VENDR
>>>>>> raises an AttributeError. Odd! There's nothing cap-sensitive about
>>>>>> __hasattr__ that I'm forgetting, is there? Or, could I somehow alias
>>>>>> the name before I try to access it, if t

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
Sorry, do you mean the base subclass, or a new table class? In either
case, I'm not sure I see how this will fit into my automapping code. I
know this is all fairly basic, I just can't quite picture what goes
where and what inherits from/gets passed to what to make it automap
this VENDR table. If I could, I'd just add a PK column to the table
itself. Sadly, I can't change that kind of thing, only query it.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> just make the class and include the PK column, then automap.  the rest
> of the columns should be filled in.
>
>
> On 03/11/2016 04:14 PM, Alex Hall wrote:
>> Ah, you're right. Every other table I've used in this database has had
>> a key, and I didn't even notice that this VENDR table lacks one. That
>> explains the mystery! Thanks.
>>
>> Now to map this table. I've read the section of the docs on doing
>> this, and I get that I subclass base, set __table__ to be my VENDR
>> table, then set the key in my subclass. My question is how I access
>> the table, given that I can't automap it first. That is, if I can't
>> map the table because it has no PK, to what do I set __table__ in the
>> subclass that will let me map the table?
>>
>> One post I found suggested something like this:
>>
>> vendorTable = Table("VENDR", metadata, column("PVVNNO",
>> primary_key=True))
>>
>> I'm guessing I'd have to add the column definitions for the other
>> columns if I did that. I'm further guessing that this replaces the
>> docs' method of subclassing, since the PK is now set. However, I don't
>> know if this would still work with automapping.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> ah.  does VENDR have a primary key?   it won't be mapped if not.
>>>
>>> what's in base.classes.keys() ?   base.classes['VENDR'] ?
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 03/11/2016 12:47 PM, Alex Hall wrote:
>>>> VENDR is right there, in base.classes and metadata.tables. Yet,
>>>> vendorTable = base.classes.VENDR
>>>> raises an AttributeError. Odd! There's nothing cap-sensitive about
>>>> __hasattr__ that I'm forgetting, is there? Or, could I somehow alias
>>>> the name before I try to access it, if that would help at all? This is
>>>> the only table in the CMS to have a name in all caps, but I need to
>>>> access it to look up manufacturer details for items.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>>
>>>>> can you look in metadata.tables to see what it actually reflected ?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 03/11/2016 12:09 PM, Alex Hall wrote:
>>>>>> That's weird: the name I see is exactly what I've been using,
>>>>>> "VENDR".
>>>>>> All caps and everything. I tried using lowercase, just to see what it
>>>>>> would do, but it failed.
>>>>>>
>>>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 03/11/2016 09:39 AM, Alex Hall wrote:
>>>>>>>> Hello list,
>>>>>>>> Finally, a pure SA question from me. I'm using Automap and the
>>>>>>>> "only"
>>>>>>>> keyword to automap a subset of the tables in our CMS database. This
>>>>>>>> has worked perfectly thus far. Now, though, it's failing on a
>>>>>>>> specific
>>>>>>>> table, and the only difference I can see is that this table's name
>>>>>>>> is
>>>>>>>> in all caps, whereas the rest are all lowercase. Capitalization
>>>>>>>> shouldn't matter, right?
>>>>>>>
>>>>>>> it does, as ALLCAPS is case sensitive and indicates quoting will be
>>>>>>> used.   How to handle this depends on the exact name that's in the
>>>>>>> database and if it truly does not match case-insensitively.
>>>>>>>
>>>>>>> Examine the output of:
>>>>>>>
>>>>>>> inspect(engine).get_table_names()
>>>>>>>
>>>>>>> find your table, and that's the name you should use.
>>>>>>>
>>>>

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
Ah, you're right. Every other table I've used in this database has had
a key, and I didn't even notice that this VENDR table lacks one. That
explains the mystery! Thanks.

Now to map this table. I've read the section of the docs on doing
this, and I get that I subclass base, set __table__ to be my VENDR
table, then set the key in my subclass. My question is how I access
the table, given that I can't automap it first. That is, if I can't
map the table because it has no PK, to what do I set __table__ in the
subclass that will let me map the table?

One post I found suggested something like this:

vendorTable = Table("VENDR", metadata, column("PVVNNO", primary_key=True))

I'm guessing I'd have to add the column definitions for the other
columns if I did that. I'm further guessing that this replaces the
docs' method of subclassing, since the PK is now set. However, I don't
know if this would still work with automapping.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> ah.  does VENDR have a primary key?   it won't be mapped if not.
>
> what's in base.classes.keys() ?   base.classes['VENDR'] ?
>
>
>
>
>
>
> On 03/11/2016 12:47 PM, Alex Hall wrote:
>> VENDR is right there, in base.classes and metadata.tables. Yet,
>> vendorTable = base.classes.VENDR
>> raises an AttributeError. Odd! There's nothing cap-sensitive about
>> __hasattr__ that I'm forgetting, is there? Or, could I somehow alias
>> the name before I try to access it, if that would help at all? This is
>> the only table in the CMS to have a name in all caps, but I need to
>> access it to look up manufacturer details for items.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>
>>> can you look in metadata.tables to see what it actually reflected ?
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 03/11/2016 12:09 PM, Alex Hall wrote:
>>>> That's weird: the name I see is exactly what I've been using, "VENDR".
>>>> All caps and everything. I tried using lowercase, just to see what it
>>>> would do, but it failed.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>>
>>>>>
>>>>> On 03/11/2016 09:39 AM, Alex Hall wrote:
>>>>>> Hello list,
>>>>>> Finally, a pure SA question from me. I'm using Automap and the "only"
>>>>>> keyword to automap a subset of the tables in our CMS database. This
>>>>>> has worked perfectly thus far. Now, though, it's failing on a
>>>>>> specific
>>>>>> table, and the only difference I can see is that this table's name is
>>>>>> in all caps, whereas the rest are all lowercase. Capitalization
>>>>>> shouldn't matter, right?
>>>>>
>>>>> it does, as ALLCAPS is case sensitive and indicates quoting will be
>>>>> used.   How to handle this depends on the exact name that's in the
>>>>> database and if it truly does not match case-insensitively.
>>>>>
>>>>> Examine the output of:
>>>>>
>>>>> inspect(engine).get_table_names()
>>>>>
>>>>> find your table, and that's the name you should use.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Stranger still, the actual reflection doesn't
>>>>>> error out. Later, where I try to assign base.classes.MYTABLE to a
>>>>>> variable, is where I get an AttributeError. Here's my code:
>>>>>>
>>>>>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>>>>>> %(username, password, dsn))
>>>>>> base = automap_base()
>>>>>> session = Session(engine)
>>>>>> metadata = sqlalchemy.MetaData()
>>>>>> desiredTables = ["table", "othertable", "VENDR"]
>>>>>> metadata.reflect(engine, only=desiredTables) #works fine
>>>>>>
>>>>>> table = base.classes.table #fine
>>>>>> table2 = base.classes.othertable #fine
>>>>>> vendorTable = base.classes.VENDR #AttributeError
>>>>>>
>>>>>> I've added and removed tables as I adjust this script, and all of
>>>>>> them
>>>>>> work perfectly. This VENDR table is the first one in two days to
>>>>>> cause
>>>>>> problems. If I iterate over all the classes in base.classes and print
>>>&

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
VENDR is right there, in base.classes and metadata.tables. Yet,
vendorTable = base.classes.VENDR
raises an AttributeError. Odd! There's nothing cap-sensitive about
__hasattr__ that I'm forgetting, is there? Or, could I somehow alias
the name before I try to access it, if that would help at all? This is
the only table in the CMS to have a name in all caps, but I need to
access it to look up manufacturer details for items.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
> can you look in metadata.tables to see what it actually reflected ?
>
>
>
>
>
>
> On 03/11/2016 12:09 PM, Alex Hall wrote:
>> That's weird: the name I see is exactly what I've been using, "VENDR".
>> All caps and everything. I tried using lowercase, just to see what it
>> would do, but it failed.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>
>>>
>>> On 03/11/2016 09:39 AM, Alex Hall wrote:
>>>> Hello list,
>>>> Finally, a pure SA question from me. I'm using Automap and the "only"
>>>> keyword to automap a subset of the tables in our CMS database. This
>>>> has worked perfectly thus far. Now, though, it's failing on a specific
>>>> table, and the only difference I can see is that this table's name is
>>>> in all caps, whereas the rest are all lowercase. Capitalization
>>>> shouldn't matter, right?
>>>
>>> it does, as ALLCAPS is case sensitive and indicates quoting will be
>>> used.   How to handle this depends on the exact name that's in the
>>> database and if it truly does not match case-insensitively.
>>>
>>> Examine the output of:
>>>
>>> inspect(engine).get_table_names()
>>>
>>> find your table, and that's the name you should use.
>>>
>>>
>>>
>>>
>>> Stranger still, the actual reflection doesn't
>>>> error out. Later, where I try to assign base.classes.MYTABLE to a
>>>> variable, is where I get an AttributeError. Here's my code:
>>>>
>>>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>>>> %(username, password, dsn))
>>>> base = automap_base()
>>>> session = Session(engine)
>>>> metadata = sqlalchemy.MetaData()
>>>> desiredTables = ["table", "othertable", "VENDR"]
>>>> metadata.reflect(engine, only=desiredTables) #works fine
>>>>
>>>> table = base.classes.table #fine
>>>> table2 = base.classes.othertable #fine
>>>> vendorTable = base.classes.VENDR #AttributeError
>>>>
>>>> I've added and removed tables as I adjust this script, and all of them
>>>> work perfectly. This VENDR table is the first one in two days to cause
>>>> problems. If I iterate over all the classes in base.classes and print
>>>> each one, I don't even see it in that list, so SA isn't simply
>>>> transforming the name. This is probably a simple thing, but I don't
>>>> see the problem. Thanks for any suggestions.
>>>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups
>>> "sqlalchemy" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an
>>> email to sqlalchemy+unsubscr...@googlegroups.com.
>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/sqlalchemy.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
That's weird: the name I see is exactly what I've been using, "VENDR".
All caps and everything. I tried using lowercase, just to see what it
would do, but it failed.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
>
> On 03/11/2016 09:39 AM, Alex Hall wrote:
>> Hello list,
>> Finally, a pure SA question from me. I'm using Automap and the "only"
>> keyword to automap a subset of the tables in our CMS database. This
>> has worked perfectly thus far. Now, though, it's failing on a specific
>> table, and the only difference I can see is that this table's name is
>> in all caps, whereas the rest are all lowercase. Capitalization
>> shouldn't matter, right?
>
> it does, as ALLCAPS is case sensitive and indicates quoting will be
> used.   How to handle this depends on the exact name that's in the
> database and if it truly does not match case-insensitively.
>
> Examine the output of:
>
> inspect(engine).get_table_names()
>
> find your table, and that's the name you should use.
>
>
>
>
> Stranger still, the actual reflection doesn't
>> error out. Later, where I try to assign base.classes.MYTABLE to a
>> variable, is where I get an AttributeError. Here's my code:
>>
>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>> %(username, password, dsn))
>> base = automap_base()
>> session = Session(engine)
>> metadata = sqlalchemy.MetaData()
>> desiredTables = ["table", "othertable", "VENDR"]
>> metadata.reflect(engine, only=desiredTables) #works fine
>>
>> table = base.classes.table #fine
>> table2 = base.classes.othertable #fine
>> vendorTable = base.classes.VENDR #AttributeError
>>
>> I've added and removed tables as I adjust this script, and all of them
>> work perfectly. This VENDR table is the first one in two days to cause
>> problems. If I iterate over all the classes in base.classes and print
>> each one, I don't even see it in that list, so SA isn't simply
>> transforming the name. This is probably a simple thing, but I don't
>> see the problem. Thanks for any suggestions.
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
Hello list,
Finally, a pure SA question from me. I'm using Automap and the "only"
keyword to automap a subset of the tables in our CMS database. This
has worked perfectly thus far. Now, though, it's failing on a specific
table, and the only difference I can see is that this table's name is
in all caps, whereas the rest are all lowercase. Capitalization
shouldn't matter, right? Stranger still, the actual reflection doesn't
error out. Later, where I try to assign base.classes.MYTABLE to a
variable, is where I get an AttributeError. Here's my code:

engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
%(username, password, dsn))
base = automap_base()
session = Session(engine)
metadata = sqlalchemy.MetaData()
desiredTables = ["table", "othertable", "VENDR"]
metadata.reflect(engine, only=desiredTables) #works fine

table = base.classes.table #fine
table2 = base.classes.othertable #fine
vendorTable = base.classes.VENDR #AttributeError

I've added and removed tables as I adjust this script, and all of them
work perfectly. This VENDR table is the first one in two days to cause
problems. If I iterate over all the classes in base.classes and print
each one, I don't even see it in that list, so SA isn't simply
transforming the name. This is probably a simple thing, but I don't
see the problem. Thanks for any suggestions.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Ways of processing multiple rows with same ID?

2016-03-10 Thread Alex Hall
What I'm doing, and sorry for not explaining further, is making a CSV
file of data. Each row is a row in my results, or would be if I were
just selecting from products. Having to select from attributes as well
is where I'm having problems. Each product can have multiple
attributes, and each attribute value can be assigned to multiple
products. Joining everything (by using filter()) is giving me way too
many results to deal with effectively.

Say we had a product with an ID of 001, a name of "widget", and an
image of "images/001.jpg". This product has a weight and a color, but
those attributes are in the attributevalues table.

Attributes work something like this: attributeassignments has a
product ID and an attribute ID. Attributes has an attribute ID and an
attribute name ("color", "size", and so on). AttributeValues has an
attribute ID and a value ("blue", "55", etc).

For our widget, attributes might have 001, 001, "size"; 001, 002,
"color"; 001, 003, "weight". 001 is the product ID of the widget.
AttributeAssignment might have 001, 001; 001, 002; 001, 003. 001 is
the widget, and the second numbers are the IDs of the different
attributes.
AttributeValues might have 001, "1x2x3"; 002, "blue"; 003, "55".

In the CSV file, I want to put each of those three attributes under a column:
001, Widget, images/001.jpg, blue, 55, 1x2x3

Currently, I'm iterating over the results. The first line inside the
loop, I check the current result's ID and compare it to the previous
one. If they match, I assume I'm on the same result, so I get the
values of the attributes in the row. If the two IDs differ, I assume
I'm done. I write out the values for the last result, clear out the
array I use to store all the values, and grab the new values that
aren't attributes.

My current query does so much joining that my results are too large to
manage, though. The very first iteration works perfectly, but then I
get stuck with the same product ID number. Even when I raise the query
limit to 100,000, I never see any other product ID than that first
one. It feels like an infinite loop, but my loop is simply,
items = 
for result in items:
 ...

I hope this makes more sense. I've re-read the ORM tutorial as
Jonathan suggested, too. The last bit, about many-to-many
relationships, seems like it might be useful. I don't quite follow it
all, but hopefully some of it will make more sense the more I re-read
it.

On 3/10/16, Jonathan Vanasco  wrote:
> 2 comments:
>
> 1.  Go through the SqlAlchemy ORM tutorial.  What you're describing right
> now is dancing around some very basic relationship definition and
> loading/query techniques.
>
> 2. You haven't described what you actually want to accomplish, only some
> general ideas of how you think you could interact with data.  If you want
> to grab a matrix that has every product, there will be one set of ways to
> approach the problem.  If you want to return an iterable list of all
> variations of a certain product, there will be another set of ways.  If you
>
> want to search by specific criteria, there will be yet another set of ways.
>
>  depending on what you do with the data, the query will be different.
>
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Ways of processing multiple rows with same ID?

2016-03-10 Thread Alex Hall
Hi list,
I'm not sure how to explain this, so let me know if I lose you. I have
the same products database as yesterday, but I've just learned that
product attributes are stored in their own tables. A product can have
many attributes (size, color, weight, etc), and each attribute value
is in a table. That table is tied to the product through an attribute
assignment table, which lets us write "large" once and then assign
that to thousands of products at once, for instance. Essentially, the
item_id is a foreign key into attributeAssignment, which also has an
attr_value_id. That attr_value_id matches the PK in attr_values, which
is the table that actually holds the attribute text.

The problem is that, when I use filter() to join all this stuff
together, I get valueCount*productCount rows. That's not really a
problem, actually, as it's doing what I want. Putting things back
together is going to be a challenge, though. I essentially want, for
example, color and size under the same product ID, but my current
query will return two different rows with the same ID. One row will
have the color, and the next row will have the size. I don't think I
can flatten these out, so my next idea is doing post-query processing
as I iterate through the results.

I'm tempted to just hard-code a sub-loop, to iterate through each n
rows, knowing that n will be the number of rows that share an ID.
Using grouping should make that work. My fear is that I'll get a set
of data which, somehow, has a different size--maybe a missing
attribute--and thus my entire system will be off a row or two. My next
idea is to store the ID of the first row inside the for loop iterating
through all the rows. In that for loop is a while loop: while
IDOfNextRow==currentID: (check IDOfNewRow). That way, I can keep
related rows together and manually pull out the data I need for each
one. Using group-by, I shouldn't ever have a case where a used ID
surfaces again way down the line.

Is there an easier way I haven't thought of? I can't be the first
person to run into this, and SA has a lot of powerful features that
make doing DB work easy, so maybe there's something I just don't know
about. As I said, I'm not so new to SQL that I just started last week,
but neither am I any kind of experienced user at all; maybe SQL itself
can offer something here. Thanks for any information/ideas anyone has,
and again, let me know if I haven't explained this well enough.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: properties of query results if names overlap?

2016-03-09 Thread Alex Hall
I think I answered my own question: the result variable gets
properties named for the column names, as usual, but those properties
are each under their respective table names. Those table names come
from the actual table name (I'm using auto-map) or, presumably, the
__tablename__ variable for declarative bases. That is:

for result in results:
 print result.items.itm_id
 print result.assignments.itm_id

At least, this is working for now. Please let me know if I'm missing a
piece or need to know more before some future bit of code turns out to
hide a gotcha I don't yet know about. Thanks.

On 3/9/16, Alex Hall <ah...@autodist.com> wrote:
> Hi all,
> Just a quick question: what does SA do if names overlap? For example,
> in assignmentTable, there's a column called itm_id. In
> attachmentTable, there's also a column called itm_id, and there's one
> in itemTable as well. If I combine these in a kind of join, as in:
>
> results = session.query(assignmentTable, attachmentTable)\
>  .filter(assignmentTable.itm_id == attachmentTable.itm_id)\
>  .all()
>
> for result in results:
>  print result.itm_id
>
> What will that print? I love SA's named properties for query results,
> and would much rather use them than indexes if possible. Is this just
> not allowed for name overlaps, or does SA do some trick with the table
> names to allow it? Thanks!
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Select * but apply distinct to one column

2016-03-09 Thread Alex Hall
That makes sense. Part of my problem is that, as I've mentioned in the
past, I was recently hired. I didn't set anything up, and I still
don't know for sure what I can trust to be unique, or 6 versus 8
characters, or a lot of other small details. That said, SSMS shows the
item ID as a primary key, which means it is unique. I think I'm safe
to just apply distinct() to my entire query, since there's no way the
ID can ever be repeated. I've been looking at a bunch of tables today,
and I had it in my head that the id in this one was only *part of* the
PK and thus could be duplicated. At least I learned something from all
this. Thanks again for the help, guys.

On 3/9/16, Jonathan Vanasco <jonat...@findmeon.com> wrote:
>
> On Wednesday, March 9, 2016 at 3:02:05 PM UTC-5, Alex Hall wrote:
>>
>> Fair enough, thanks. I didn't realize it was such a complex task; I
>> figured it was just a matter of passing an argument to distinct() or
>> something equally easy.
>>
>
>
> Yeah PostgreSQL is the only db that supports "DISTINCT ON"... but it can be
>
> very awkward.
>
> Let me try to explain this better, because I was you a few years ago -- and
>
> thought / believed the same things.  (and this still annoys me!)
>
> Here's a table to represent an imaginary situation where  `id` is the
> primary key (it is unique) but the other columns aren't.
>
> id | product_id | name
> ===++=
> 1  | 1  | foo
> 2  | 1  | bar
> 3  | 2  | biz
> 4  | 2  | bang
> 5  | 3  | foo
> 6  | 1  | bar
>
> The distinct column values are:
>
> id - 1,2,3,4,5
> product_id - 1, 2, 3
> name - foo, bar, biz, bang
>
> If you want to get distinct data from the table though, you need to think
> in rows.  (unless you're querying for column data)
>
> If you want "distinct" rows based on the product id, how should these 3
> rows be handled?
>
> 1  | 1  | foo
> 2  | 1  | bar
> 6  | 1  | bar
>
> They all have 1 for the product_id.
>
> The rows are all distinct if we think of the primary id key being an
> attribute.
> If we limit the distinction to the product_id and the name, we can drop the
>
> 3 down to 2 combinations:
>
> 1  | foo
> 1  | bar
>
> But this probably won't work for your needs.
>
> The (1, foo) row corresponds to id 1;
> but the (1, bar) row could correspond to (2,1,bar) or (6,1,bar) rows in the
>
> table.
>
> So when you say only want rows "where the item number is distinct.", you
> should try asking "What should I do with rows where the item_number isn't
> distinct?"
>
> That should raise some red flags for you, and help you realize that you
> probably don't really want rows where the item number is distinct.You
> probably want to do some other query and approach some other goal.
>
> "DISTINCT" is (usually) a really complex situation because people often
> think it will do one thing, but it does something very different... and to
> accomplish the task they want, it's a totally different query.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] properties of query results if names overlap?

2016-03-09 Thread Alex Hall
Hi all,
Just a quick question: what does SA do if names overlap? For example,
in assignmentTable, there's a column called itm_id. In
attachmentTable, there's also a column called itm_id, and there's one
in itemTable as well. If I combine these in a kind of join, as in:

results = session.query(assignmentTable, attachmentTable)\
 .filter(assignmentTable.itm_id == attachmentTable.itm_id)\
 .all()

for result in results:
 print result.itm_id

What will that print? I love SA's named properties for query results,
and would much rather use them than indexes if possible. Is this just
not allowed for name overlaps, or does SA do some trick with the table
names to allow it? Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Select * but apply distinct to one column

2016-03-09 Thread Alex Hall
Fair enough, thanks. I didn't realize it was such a complex task; I
figured it was just a matter of passing an argument to distinct() or
something equally easy. Speed isn't a huge concern, so I suppose I
could get around this by storing the item numbers I find and then
checking that the row I'm about to use doesn't have a number in that
set. Still, there could be hundreds of thousands of items, so that
might not be the best plan. Anyway, I'll look into it more.

On 3/9/16, Jonathan Vanasco  wrote:
> It would probably be best for you to figure out the correct raw sql you
> want, then convert it to SqlAlchemy.
>
> Postgres is the only DB I know of that offers "DISTINCT ON (columns)" --
> and even that works a bit awkward.
>
> The query that you want to do isn't actually simple -- there are concerns
> with how to handle duplicate rows (based on the distinct field).Often
> people will use "GROUP BY" + "ORDER BY" along with distincts, subselects
> and misc database functions.
>
> If I were in your place, I would read through some DB tutorials and
> StackOverflow questions on how people are dealing with similar problems.
>  That should help you learn.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Select * but apply distinct to one column

2016-03-09 Thread Alex Hall
Hi all,
I want to select * from a table, getting all columns. However, the
only rows I want are where the item number is distinct. I've got:
items = session.query(itemTable)\
.distinct()\
.limit(10)
But that doesn't apply "distinct" to just item_number. I'm not the
best with SQL in general or I'd express the query I want so you could
see it. Hopefully my explanation is clear enough.

After my fighting with the iSeries here at work a few weeks ago, I set
up SA to access a Microsoft SQL database yesterday... In about two
hours. That includes setting up the DSN, getting the database name
wrong, getting the credentials wrong, and other non-SA problems. With
all that, SA itself was a breeze. It's amazing what happens when you
don't try to use IBM machines in the mix. :)

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Bulk Insert Broken for Polymorphism?

2016-02-29 Thread Alex Hewson
Hi Mike,

Thanks for the quick response.  If that's the intended behaviour I'll go 
back to non-bulk inserts for my inherited types.  Doubtless I could work 
around it by inserting N new Entities, fetching their autoincrement ID's 
then using them to make Child1 and Child2's but I don't trust myself with 
the added complexity.


Cheers,
Alex.



On Monday, February 29, 2016 at 10:38:22 PM UTC, Alex Hewson wrote:
>
> Hello All,
>
> I'm trying to use the new bulk_save_objects() to improve performance on 
> bulk inserts, and have run into a problem.  If bulk_save_objects() is used 
> to save objects of a polymorphic class..
>
>1. They are created correctly in the DB, with polymorphic type column 
>populated correctly
>2. BUT queries for the new objects will return one of incorrect type.  
>In my case I'm getting instances of Child1 back when I would expect to get 
>a Child2.
>
> The following code demonstrates the problem:
>
> #!/usr/bin/env python3
> # -*- coding: utf-8 -*-
>
> from sqlalchemy import create_engine
> from sqlalchemy import Column, Integer, SmallInteger, String, ForeignKey
> from sqlalchemy.orm import sessionmaker
> from sqlalchemy.ext.declarative import declarative_base
>
> Base = declarative_base()
>
> class Entity(Base):
>   __tablename__ = 'Entity'
>   Id  = Column(Integer, primary_key=True, nullable=False)
>   Content = Column(String)
>   _polytype   = Column(SmallInteger, nullable=False)
>
>   __mapper_args__ = {
> 'polymorphic_identity':1,
> 'polymorphic_on':_polytype
>   }
>
> class Child1(Entity):
>   __tablename__   = 'Child1'
>   MyId= Column(ForeignKey("Entity.Id"), primary_key=True)
>   __mapper_args__ = {'polymorphic_identity':11}
>
> class Child2(Entity):
>   __tablename__   = 'Child2'
>   MyId= Column(ForeignKey("Entity.Id"), primary_key=True)
>   __mapper_args__ = {'polymorphic_identity':12}
>
>
> if __name__ == '__main__':
>   # engine = create_engine('sqlite:///:memory:', echo=False)
>   engine = create_engine('sqlite:///test.db', echo=False)
>   Session = sessionmaker(bind=engine)
>   sess = Session()
>   Base.metadata.create_all(engine)
>   c1_many = [Child1(Content="c1inst_%d"%i) for i in range(0,1000)]
>   c2_many = [Child2(Content="c2inst_%d"%i) for i in range(0,1000)]
>   sess.bulk_save_objects(c1_many)
>   sess.bulk_save_objects(c2_many)
>   # sess.add_all(c1_many)
>   # sess.add_all(c2_many)
>   sess.flush()
>   sess.commit()
>   for c in sess.query(Child1):
> assert isinstance(c, Child1)
>   for c in sess.query(Child2):
> assert isinstance(c, Child2)
>
>
> All the calls to assert isinstance(c, Child1) complete successfully.  But 
> once we start checking for Child2 - boom, we are still getting back Child1 
> instances.
>
> At first I wondered if I was misunderstanding SA's implementation of 
> polymorphism, so tried inserting rows the traditional way with 
> sess.add_all().  But that works fine so I think I've exposed a bug in the 
> new bulk_save_objects() code.
>
> My environment is Python 3.5.1, SQLAlchemy==1.0.12, SQLite 3.8.10.2 on OSX.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bulk Insert Broken for Polymorphism?

2016-02-29 Thread Alex Hewson
Hello All,

I'm trying to use the new bulk_save_objects() to improve performance on 
bulk inserts, and have run into a problem.  If bulk_save_objects() is used 
to save objects of a polymorphic class..

   1. They are created correctly in the DB, with polymorphic type column 
   populated correctly
   2. BUT queries for the new objects will return one of incorrect type.  
   In my case I'm getting instances of Child1 back when I would expect to get 
   a Child2.
   
The following code demonstrates the problem:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

from sqlalchemy import create_engine
from sqlalchemy import Column, Integer, SmallInteger, String, ForeignKey
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Entity(Base):
  __tablename__ = 'Entity'
  Id  = Column(Integer, primary_key=True, nullable=False)
  Content = Column(String)
  _polytype   = Column(SmallInteger, nullable=False)

  __mapper_args__ = {
'polymorphic_identity':1,
'polymorphic_on':_polytype
  }

class Child1(Entity):
  __tablename__   = 'Child1'
  MyId= Column(ForeignKey("Entity.Id"), primary_key=True)
  __mapper_args__ = {'polymorphic_identity':11}

class Child2(Entity):
  __tablename__   = 'Child2'
  MyId= Column(ForeignKey("Entity.Id"), primary_key=True)
  __mapper_args__ = {'polymorphic_identity':12}


if __name__ == '__main__':
  # engine = create_engine('sqlite:///:memory:', echo=False)
  engine = create_engine('sqlite:///test.db', echo=False)
  Session = sessionmaker(bind=engine)
  sess = Session()
  Base.metadata.create_all(engine)
  c1_many = [Child1(Content="c1inst_%d"%i) for i in range(0,1000)]
  c2_many = [Child2(Content="c2inst_%d"%i) for i in range(0,1000)]
  sess.bulk_save_objects(c1_many)
  sess.bulk_save_objects(c2_many)
  # sess.add_all(c1_many)
  # sess.add_all(c2_many)
  sess.flush()
  sess.commit()
  for c in sess.query(Child1):
assert isinstance(c, Child1)
  for c in sess.query(Child2):
assert isinstance(c, Child2)


All the calls to assert isinstance(c, Child1) complete successfully.  But 
once we start checking for Child2 - boom, we are still getting back Child1 
instances.

At first I wondered if I was misunderstanding SA's implementation of 
polymorphism, so tried inserting rows the traditional way with 
sess.add_all().  But that works fine so I think I've exposed a bug in the 
new bulk_save_objects() code.

My environment is Python 3.5.1, SQLAlchemy==1.0.12, SQLite 3.8.10.2 on OSX.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] MSSQL ProgrammingError with aggregate functions

2016-02-24 Thread Alex Lowe
Thank you! That worked, with one minor modification.

It looks like MSSQL requires an alias for an anonymous table (even if 
you're not joining it with anything). As such, I needed to add .alias() to 
the query.

In the end, it looked like this:

q3 = sqlalchemy.select([partial.label('left_string'), 
different_column]).alias()
q4 = sqlalchemy.select([q3.c.left_string, 
func.sum(q3.c.different_column)]).group_by(q3.c.left_string)
sql_engine.execute(q4).fetchall()

On Wednesday, 24 February 2016 09:26:59 UTC-6, Mike Bayer wrote:
>
>
>
> On 02/24/2016 10:13 AM, Alex Lowe wrote: 
> > Hi there, 
> > 
> > I'm receiving a ProgrammingError with certain types of query to MSSQL 
> > (they seem to work fine when querying SQLite though). Either my 
> > Google-fu is weak or there hasn't been a solution posted publicly, since 
> > the two most useful-looking pages were these two StackOverflow threads 
> > with no useful responses: 
> > 
> http://stackoverflow.com/questions/18307466/group-by-case-in-sql-server-sqlalchemy
>  
> > 
> http://stackoverflow.com/questions/21742713/need-a-query-in-sqlalchemy-with-group-by-case
>  
> > 
> > I'm also very new to SQLAlchemy and have mostly picked it up through a 
> > combination of web searches and following the examples of my coworkers 
> > (who picked it up by doing web searches for what they needed), so advice 
> > on how to make my example code better are welcome. 
>
> I've certainly had to work around this problem but the news that the raw 
> string works is new to me, but I would assume it has something to do 
> with the removal of bound parameters.  ODBC actually has two different 
> execution APIs internally that interpret the given statement 
> differently, one is much more picky about being able to infer the type 
> of bound parameters, so that might be part of what's going on. 
>
> If i recall correctly the workaround is to make a subquery like this: 
>
> SELECT left_1, sum(different_column) FROM 
> ( 
>SELECT left(some_string, ?) AS left_1, different_column 
>FROM [DEV].dbo.[AML_Test] 
> ) GROUP BY left_1 
>
>
> so, paraphrasing 
>
> stmt = select([func.left(table.c.some_string, 5).label('left'), 
> table.c.different_column]) 
>
> stmt = select([stmt.c.left, 
> func.sum(stmt.c.different_column]).group_by(stmt.c.left) 
>
>
>
> > 
> > I've got a table that contains a string column and an integer column, 
> > and I'm trying to group by substrings. In so doing, it brings up an 
> > error message about aggregate functions in the group by clause. 
> > Specifically, if I write this code: 
> > 
> > test_table = sqlalchemy.Table('AML_Test', dev_schema) 
> > some_string = sqlalchemy.Column('some_string', 
> > sqlalchemy.VARCHAR(length=50)) 
> > different_column = sqlalchemy.Column('different_column', 
> sqlalchemy.INT()) 
> > partial = func.left(some_string, 3) 
> > aggregate = func.sum(different_column) 
> > qq = 
> test_table.select().group_by(partial).column(partial).column(aggregate) 
> > 
> > and then run qq.execute(), pyodbc gives me the follow error message: 
> > 
> > ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] 
> > [Microsoft][ODBC SQL Server Driver][SQL Server]Column 
> > 'DEV.dbo.AML_Test.some_string' is invalid in the select list because it 
> > is not contained in either an aggregate function or the GROUP BY clause. 
> > (8120) (SQLExecDirectW)") [SQL: 'SELECT left(some_string, ?) AS left_1, 
> > sum(different_column) AS sum_1 \nFROM [DEV].dbo.[AML_Test] GROUP BY 
> > left(some_string, ?)'] [parameters: (3, 3)] 
> > 
> > 
> > My workaround for the moment is to cast the compiled statement to a 
> > string and execute that string, but it's unclear to me why that would do 
> > anything different (despite the fact that it does). 
> > 
> > 
> > c.session.execute( 
> > str(qq.selectable.compile(compile_kwargs={'literal_binds': True})) 
> > ).fetchall() 
> > 
> > 
> > If anyone can explain to me what I'm doing wrong and how to fix it, I'd 
> > be extremely grateful. 
> > 
> > 
> > Thanks, 
> > 
> > 
> > Alex 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> > Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> > an email to sqlalchemy+...@googlegroups.com  
> > <mailto:sqlalchemy+unsubscr...@googlegroups.com >. 
> > To post to this group, send email to sqlal...@googlegroups.com 
>  
> > <mailto:sqlal...@googlegroups.com >. 

[sqlalchemy] MSSQL ProgrammingError with aggregate functions

2016-02-24 Thread Alex Lowe
Hi there,

I'm receiving a ProgrammingError with certain types of query to MSSQL (they 
seem to work fine when querying SQLite though). Either my Google-fu is weak 
or there hasn't been a solution posted publicly, since the two most 
useful-looking pages were these two StackOverflow threads with no useful 
responses:
http://stackoverflow.com/questions/18307466/group-by-case-in-sql-server-sqlalchemy
http://stackoverflow.com/questions/21742713/need-a-query-in-sqlalchemy-with-group-by-case

I'm also very new to SQLAlchemy and have mostly picked it up through a 
combination of web searches and following the examples of my coworkers (who 
picked it up by doing web searches for what they needed), so advice on how 
to make my example code better are welcome.

I've got a table that contains a string column and an integer column, and 
I'm trying to group by substrings. In so doing, it brings up an error 
message about aggregate functions in the group by clause. Specifically, if 
I write this code:

test_table = sqlalchemy.Table('AML_Test', dev_schema)
some_string = sqlalchemy.Column('some_string', 
sqlalchemy.VARCHAR(length=50))
different_column = sqlalchemy.Column('different_column', sqlalchemy.INT())
partial = func.left(some_string, 3)
aggregate = func.sum(different_column)
qq = test_table.select().group_by(partial).column(partial).column(aggregate)

and then run qq.execute(), pyodbc gives me the follow error message:

ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC 
SQL Server Driver][SQL Server]Column 'DEV.dbo.AML_Test.some_string' is invalid 
in the select list because it is not contained in either an aggregate function 
or the GROUP BY clause. (8120) (SQLExecDirectW)") [SQL: 'SELECT 
left(some_string, ?) AS left_1, sum(different_column) AS sum_1 \nFROM 
[DEV].dbo.[AML_Test] GROUP BY left(some_string, ?)'] [parameters: (3, 3)]


My workaround for the moment is to cast the compiled statement to a string and 
execute that string, but it's unclear to me why that would do anything 
different (despite the fact that it does).


c.session.execute(
str(qq.selectable.compile(compile_kwargs={'literal_binds': True}))
).fetchall()


If anyone can explain to me what I'm doing wrong and how to fix it, I'd be 
extremely grateful.


Thanks,


Alex

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
Indeed, the ibm_db list told me that testing with pyodbc was limited.
I'd skip pyodbc, but so far, it's the *only* package that is able to
connect me to the server, so I have to use it. Hopefully I can get
more information from ibm_db.

Adding the properties dbms_ver and dbms_name was a good idea. When I
do it in my iSeriesConnect() function (where I connect through
pyodbc), Python says that my connection object has no attribute
dbms_ver. I suppose I could subclass it, but it seems like this can't
be the right way to go--I don't know if the version is used for some
important check way off in the code somewhere, and if guessing values
will thus cause unforseen problems. Still, it's worth a try.

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> I guess that's a symptom of the ibm_db_sa package not being very well
> tested with pyodbc
>
> I'm very confused by pyodbc's version numbers -
> https://pypi.python.org/pypi/pyodbc/3.0.10 suggests that version 3.0.10
> exists and was uploaded on 2015-04-29, but also says that the latest
> version is 2.1.9, which was uploaded on 2015-09-24. I've never used pyodbc
> so don't know what to make of that.
>
> I assume your error is coming from this line:
>
> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/ibm_db_sa/base.py#L696
>
> ...which is in the base dialect, rather than the pyodbc-specific bit. Maybe
> you could hack around the problem by setting a "dbms_ver" attribute on the
> pyodbc connection that you are creating in your custom creator function.
> This is where it gets used:
>
> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/ibm_db_sa/base.py#L481
>
> Simon
>
> On Fri, Feb 19, 2016 at 5:00 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> That makes more sense, but as soon as I put "+pyodbc" in, I'm back to
>> last week's "pyodbc.Connection object has no attribute dbms_ver"
>> error. Pyodbc seems to be the problem, which is ironic--on its own,
>> pyodbc is the only way I've been able to talk to the server at all.
>> Add it to SA, though, and that attribute error appears.
>>
>> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> > According to
>> >
>> http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html#registering-new-dialects
>> ,
>> > a dialect registered as "db2.pyodbc" should be specified in the URL as
>> > "db2+pyodbc://". Does that make any difference?
>> >
>> > On Fri, Feb 19, 2016 at 4:20 PM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> >> Thanks. I tried both, and triedother variations including or excluding
>> >> the module name as a prefix (ibm_db_sa.db2.pyodbc://). In most cases,
>> >> I get:
>> >> sqlalchemy.exc.ArgumentError: could not parse RFC1738 URL from string
>> >> [my connection string]".
>> >>
>> >> If I don't get that, it's because I used a name that complains about
>> >> there being no attribute dbms_ver or server.version, depending on the
>> >> string.
>> >>
>> >> They don't make it easy, do they?
>> >>
>> >> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> >> > URI prefixes are defined in the setup.py for the ibm_db_sa package:
>> >> >
>> >> >
>> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/setup.py
>> >> >
>> >> > I would guess that you want to end up with the DB2Dialect_pyodbc
>> class,
>> >> > which means you should use db2.pyodbc:// or ibm_db_sa.pyodbc://
>> >> >
>> >> > Simon
>> >> >
>> >> >
>> >> > On Fri, Feb 19, 2016 at 3:33 PM, Alex Hall <ah...@autodist.com>
>> wrote:
>> >> >
>> >> >> Thanks, that looks like what I'm looking for. I assume specifying
>> >> >> "ibm_db_sa://" for the string will let SA use the proper dialect?
>> >> >>
>> >> >> I'm now getting "pyodbc.Connection object has no attribute
>> >> >> server_info", in case anyone happens to know what that's about. I'm
>> >> >> getting nightmarish flashbacks to my "has no attribute" error last
>> >> >> week for the same object. But at least this is a different one;
>> >> >> I'll
>> >> >> count it as a good thing!
>> >> >>
>> >> >> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> >> >> > On Fri, Feb 19, 2016 at 2:38 PM,

Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
That makes more sense, but as soon as I put "+pyodbc" in, I'm back to
last week's "pyodbc.Connection object has no attribute dbms_ver"
error. Pyodbc seems to be the problem, which is ironic--on its own,
pyodbc is the only way I've been able to talk to the server at all.
Add it to SA, though, and that attribute error appears.

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> According to
> http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html#registering-new-dialects,
> a dialect registered as "db2.pyodbc" should be specified in the URL as
> "db2+pyodbc://". Does that make any difference?
>
> On Fri, Feb 19, 2016 at 4:20 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Thanks. I tried both, and triedother variations including or excluding
>> the module name as a prefix (ibm_db_sa.db2.pyodbc://). In most cases,
>> I get:
>> sqlalchemy.exc.ArgumentError: could not parse RFC1738 URL from string
>> [my connection string]".
>>
>> If I don't get that, it's because I used a name that complains about
>> there being no attribute dbms_ver or server.version, depending on the
>> string.
>>
>> They don't make it easy, do they?
>>
>> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> > URI prefixes are defined in the setup.py for the ibm_db_sa package:
>> >
>> > https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/setup.py
>> >
>> > I would guess that you want to end up with the DB2Dialect_pyodbc class,
>> > which means you should use db2.pyodbc:// or ibm_db_sa.pyodbc://
>> >
>> > Simon
>> >
>> >
>> > On Fri, Feb 19, 2016 at 3:33 PM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> >> Thanks, that looks like what I'm looking for. I assume specifying
>> >> "ibm_db_sa://" for the string will let SA use the proper dialect?
>> >>
>> >> I'm now getting "pyodbc.Connection object has no attribute
>> >> server_info", in case anyone happens to know what that's about. I'm
>> >> getting nightmarish flashbacks to my "has no attribute" error last
>> >> week for the same object. But at least this is a different one; I'll
>> >> count it as a good thing!
>> >>
>> >> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> >> > On Fri, Feb 19, 2016 at 2:38 PM, Alex Hall <ah...@autodist.com>
>> wrote:
>> >> >
>> >> >> As the subject says, I am connected to our iSeries through straight
>> >> >> pyodbc. That seems to run perfectly. Now, is there a way to use SA
>> >> >> with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
>> >> >> exact same error I was getting when using ibm_db directly. Using
>> >> >> pyodbc, I can specify the driver to be used, and I'm pretty sure
>> >> >> that's the key.
>> >> >>
>> >> >> Can I either use my pyodbc connection with SA and ibm_db_sa for the
>> >> >> dialect, or specify the driver to SA directly? Thanks!
>> >> >>
>> >> >>
>> >> > You can pass a "creator" argument to create_engine if you want to
>> >> > create
>> >> > the connection yourself:
>> >> >
>> >> >
>> >>
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#custom-dbapi-connect-arguments
>> >> >
>> >> >
>> >>
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#sqlalchemy.create_engine.params.creator
>> >> >
>> >> > Hope that helps,
>> >> >
>> >> > Simon
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "sqlalchemy" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to sqlalchemy+unsubscr...@googlegroups.com.
>> >> > To post to this group, send email to sqlalchemy@googlegroups.com.
>> >> > Visit this group at https://groups.google.com/group/sqlalchemy.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >> >
>> >>
>> >> --
>> >> You received this message because you are subscribed to the Google
>> Groups
>> >>

Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
Thanks. I tried both, and triedother variations including or excluding
the module name as a prefix (ibm_db_sa.db2.pyodbc://). In most cases,
I get:
sqlalchemy.exc.ArgumentError: could not parse RFC1738 URL from string
[my connection string]".

If I don't get that, it's because I used a name that complains about
there being no attribute dbms_ver or server.version, depending on the
string.

They don't make it easy, do they?

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> URI prefixes are defined in the setup.py for the ibm_db_sa package:
>
> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/setup.py
>
> I would guess that you want to end up with the DB2Dialect_pyodbc class,
> which means you should use db2.pyodbc:// or ibm_db_sa.pyodbc://
>
> Simon
>
>
> On Fri, Feb 19, 2016 at 3:33 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Thanks, that looks like what I'm looking for. I assume specifying
>> "ibm_db_sa://" for the string will let SA use the proper dialect?
>>
>> I'm now getting "pyodbc.Connection object has no attribute
>> server_info", in case anyone happens to know what that's about. I'm
>> getting nightmarish flashbacks to my "has no attribute" error last
>> week for the same object. But at least this is a different one; I'll
>> count it as a good thing!
>>
>> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> > On Fri, Feb 19, 2016 at 2:38 PM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> >> As the subject says, I am connected to our iSeries through straight
>> >> pyodbc. That seems to run perfectly. Now, is there a way to use SA
>> >> with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
>> >> exact same error I was getting when using ibm_db directly. Using
>> >> pyodbc, I can specify the driver to be used, and I'm pretty sure
>> >> that's the key.
>> >>
>> >> Can I either use my pyodbc connection with SA and ibm_db_sa for the
>> >> dialect, or specify the driver to SA directly? Thanks!
>> >>
>> >>
>> > You can pass a "creator" argument to create_engine if you want to
>> > create
>> > the connection yourself:
>> >
>> >
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#custom-dbapi-connect-arguments
>> >
>> >
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#sqlalchemy.create_engine.params.creator
>> >
>> > Hope that helps,
>> >
>> > Simon
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "sqlalchemy" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to sqlalchemy+unsubscr...@googlegroups.com.
>> > To post to this group, send email to sqlalchemy@googlegroups.com.
>> > Visit this group at https://groups.google.com/group/sqlalchemy.
>> > For more options, visit https://groups.google.com/d/optout.
>> >
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
Thanks, that looks like what I'm looking for. I assume specifying
"ibm_db_sa://" for the string will let SA use the proper dialect?

I'm now getting "pyodbc.Connection object has no attribute
server_info", in case anyone happens to know what that's about. I'm
getting nightmarish flashbacks to my "has no attribute" error last
week for the same object. But at least this is a different one; I'll
count it as a good thing!

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> On Fri, Feb 19, 2016 at 2:38 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> As the subject says, I am connected to our iSeries through straight
>> pyodbc. That seems to run perfectly. Now, is there a way to use SA
>> with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
>> exact same error I was getting when using ibm_db directly. Using
>> pyodbc, I can specify the driver to be used, and I'm pretty sure
>> that's the key.
>>
>> Can I either use my pyodbc connection with SA and ibm_db_sa for the
>> dialect, or specify the driver to SA directly? Thanks!
>>
>>
> You can pass a "creator" argument to create_engine if you want to create
> the connection yourself:
>
> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#custom-dbapi-connect-arguments
>
> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#sqlalchemy.create_engine.params.creator
>
> Hope that helps,
>
> Simon
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
As the subject says, I am connected to our iSeries through straight
pyodbc. That seems to run perfectly. Now, is there a way to use SA
with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
exact same error I was getting when using ibm_db directly. Using
pyodbc, I can specify the driver to be used, and I'm pretty sure
that's the key.

Can I either use my pyodbc connection with SA and ibm_db_sa for the
dialect, or specify the driver to SA directly? Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] OT: basic ibm_db script hangs while connecting (wasreflection taking a very long time?)

2016-02-17 Thread Alex Hall
I did as suggested and cut my script back. It still hangs while trying
to connect, or errors out if I try different ports (no surprise
there). Below I'll paste the message I just sent to the ibm_db email
list. At least, if this is identical to what ibm_db_sa is doing, it
means that fixing it here should let SA suddenly start working
properly.

I've cut back to a very, very basic script, using only ibm_db and
nothing else. I'm running into exactly what I did when trying to use
sqlalchemy: one of two errors, or an endless waiting period as I wait
in vain for an answer from the server or a timeout. My script:

import ibm_db
dbConnection = 
ibm_db.pconnect("DATABASE=myLibraryName;HOSTNAME=1.2.3.4;PORT="+port+";PROTOCOL=TCPIP;UID=username;PWD=password",
"", "")
print ibm_db.conn_errormsg()

I got the connection string from
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.swg.im.dbclient.python.doc/doc/t0054368.html

I made the port number a variable because it is the thing that keeps
giving me different results. The docs say to use 8471 or 9471 for
database access (the latter for SSL), but those are when I get no
response whatsoever. I've also tried 5, 6, and 446, all of
which return errors immediately. The high numbers give me
SQLCode-30081, and 446 gives me -30020. I even tried a DSN, but I got
an error claiming it couldn't locate the specified DSN even though
said DSN is right in the list when I open up ODBC Manager.

The thing is, we have at least five computers that talk to this 400
for hours every day, so I know it can accept incoming connections. The
computer on which I'm running this stuff can even do it, using the
same software the other stations use, so I know my machine has the
right drivers. Is there anything else I could try? I don't know much
about the 400 itself, and it definitely works with all our current
stations with no problems at all. That said, is there something on it
that I should check? Anything anyone can think of will help. Thanks.

On 2/17/16, Michal Petrucha <michal.petru...@konk.org> wrote:
> On Tue, Feb 16, 2016 at 04:02:08PM -0500, Alex Hall wrote:
>> Great; I was hoping you wouldn't say that. :) I've been through them
>> many, many times, trying to get the connection working. I've gone from
>> error to error, and thought I had it all working when I finally got
>> the create_engine line to run with no problem. Apparently I'm not as
>> far along as I thought I was. Back to the drawing board.
>
> Hi Alex,
>
> I just want to reiterate my earlier suggestion – before you try to use
> any SQLAlchemy machinery at all, first try to create a connection from
> your Python runtime directly, using whichever DBAPI driver you want to
> use (most likely you want to create a ibm_db connection object -- do
> not import anything related to SQLAlchemy at this point, neither
> sqlalchemy, nor ibm_db_sa), make sure you are able to execute SQL
> statements using that, and only once you get this to work correctly,
> try to figure out how to make it work with SQLAlchemy.
>
> And, of course, you shouldn't try to get SQLAlchemy to work all at
> once either. First, create an Engine with a connection string, but do
> not try to run any fancy introspection or anything before you make
> sure that you can execute raw SQL queries using that engine. After you
> get *that* out of the way, you can start trying out more advanced
> features of SQLAlchemy.
>
> Baby steps, you know. Divide and conquer. Do not try to solve this
> entire huge problem all at once. (And yes, as you are probably aware
> by now, successfully connecting to an enterprise database server *is*
> a huge problem.) That way you'll avoid false leads like this one.
>
> Good luck!
>
> Michal
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: reflection taking a very long time?

2016-02-16 Thread Alex Hall
Great; I was hoping you wouldn't say that. :) I've been through them
many, many times, trying to get the connection working. I've gone from
error to error, and thought I had it all working when I finally got
the create_engine line to run with no problem. Apparently I'm not as
far along as I thought I was. Back to the drawing board.

To keep things on topic for this thread, let me pose a general
question. This database contains hundreds of tables, maybe thousands.
Some are small, a few have thousands or millions of rows. Would
automap choke on all that, or could it handle it? Will mapping all
that fill up my ram, or have any other impact I should consider?

On 2/16/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> well then you're just not making any database connection.   you'd need
> to check your database connectivity and your connection parameters.
>
>
>
> On 02/16/2016 03:37 PM, Alex Hall wrote:
>> I tried that, hoping for a bit more insight into the problem. However,
>> unless I'm doing something wrong, I don't even get any queries. I get
>> my own print statements, then the script tries to connect and hangs.
>> I've added
>> dbEngine.connect()
>> just to be sure the problem is that first connection, and sure enough,
>> it hangs on that line.
>>
>> On 2/16/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> turning on echo=True inside create_engine() will show you what queries
>>> are emitted as they occur so you can see which ones are taking long
>>> and/or hanging.
>>>
>>>
>>> On 02/16/2016 02:59 PM, Alex Hall wrote:
>>>> Upon re-reading some of the docs, I realized that my problem may still
>>>> be that initial connection. The create-engine doesn't actually
>>>> *connect* to the database, it just sets things up. That means that my
>>>> actual connection happens later, when I try to reflect or use automap.
>>>> When that happens, the connection starts up and the script hangs. I'm
>>>> no closer to solving this, and would love to hear anyone's thoughts,
>>>> but at least I know that my thought of blaming reflect/automap is
>>>> likely incorrect.
>>>>
>>>> On 2/16/16, Alex Hall <ah...@autodist.com> wrote:
>>>>> Hi list,
>>>>> Sorry for all the emails. I've determined that my script is actually
>>>>> connecting to the 400's test database. At least, a print statement
>>>>> placed just after the create_engine call is printing, so I guess we're
>>>>> good there.
>>>>>
>>>>> What I'm running into now is unresponsiveness when I try to reflect or
>>>>> automap the database so I can do some basic queries. As soon as I call
>>>>> either
>>>>> automap.prepare(dbEngine, reflect=True)
>>>>> or
>>>>> metadata = MetaData()
>>>>> metadata.reflect(dbEngine, only=['tableName'])
>>>>>
>>>>> the script stops, hanging there with no response at all. The same
>>>>> thing happened when I was trying to use an inspector on the engine.
>>>>> It's an AS400, so taking a few seconds is a very long time for it.
>>>>> This is being left to run for minutes and isn't doing anything. What,
>>>>> if anything did I do wrong syntactically? Is there a better way to
>>>>> check that my engine is actually ready to go, or some other check I
>>>>> should be making? The full script, minus anything sensitive, is below.
>>>>>
>>>>> import globals
>>>>> import logging
>>>>> from sqlalchemy import *
>>>>> from sqlalchemy.engine import reflection
>>>>> from sqlalchemy.ext.automap import automap_base
>>>>> from sqlalchemy.ext.declarative import declarative_base
>>>>> from sqlalchemy.orm import sessionmaker
>>>>>
>>>>> logger = logging.getLogger(globals.appName+"."+__name__)
>>>>>
>>>>> #set up the sqlalchemy objects
>>>>> logger.debug("Creating database engine, base, and session.")
>>>>> dbEngine =
>>>>> create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
>>>>> print "connected"
>>>>> Session = sessionmaker(bind = dbEngine) #note that's a capital s on
>>>>> Session
>>>>> session = Session() #lowercase s
>>>>> metadata = MetaData()
>>>>> logger.debug("Creating

Re: [sqlalchemy] Re: reflection taking a very long time?

2016-02-16 Thread Alex Hall
I tried that, hoping for a bit more insight into the problem. However,
unless I'm doing something wrong, I don't even get any queries. I get
my own print statements, then the script tries to connect and hangs.
I've added
dbEngine.connect()
just to be sure the problem is that first connection, and sure enough,
it hangs on that line.

On 2/16/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> turning on echo=True inside create_engine() will show you what queries
> are emitted as they occur so you can see which ones are taking long
> and/or hanging.
>
>
> On 02/16/2016 02:59 PM, Alex Hall wrote:
>> Upon re-reading some of the docs, I realized that my problem may still
>> be that initial connection. The create-engine doesn't actually
>> *connect* to the database, it just sets things up. That means that my
>> actual connection happens later, when I try to reflect or use automap.
>> When that happens, the connection starts up and the script hangs. I'm
>> no closer to solving this, and would love to hear anyone's thoughts,
>> but at least I know that my thought of blaming reflect/automap is
>> likely incorrect.
>>
>> On 2/16/16, Alex Hall <ah...@autodist.com> wrote:
>>> Hi list,
>>> Sorry for all the emails. I've determined that my script is actually
>>> connecting to the 400's test database. At least, a print statement
>>> placed just after the create_engine call is printing, so I guess we're
>>> good there.
>>>
>>> What I'm running into now is unresponsiveness when I try to reflect or
>>> automap the database so I can do some basic queries. As soon as I call
>>> either
>>> automap.prepare(dbEngine, reflect=True)
>>> or
>>> metadata = MetaData()
>>> metadata.reflect(dbEngine, only=['tableName'])
>>>
>>> the script stops, hanging there with no response at all. The same
>>> thing happened when I was trying to use an inspector on the engine.
>>> It's an AS400, so taking a few seconds is a very long time for it.
>>> This is being left to run for minutes and isn't doing anything. What,
>>> if anything did I do wrong syntactically? Is there a better way to
>>> check that my engine is actually ready to go, or some other check I
>>> should be making? The full script, minus anything sensitive, is below.
>>>
>>> import globals
>>> import logging
>>> from sqlalchemy import *
>>> from sqlalchemy.engine import reflection
>>> from sqlalchemy.ext.automap import automap_base
>>> from sqlalchemy.ext.declarative import declarative_base
>>> from sqlalchemy.orm import sessionmaker
>>>
>>> logger = logging.getLogger(globals.appName+"."+__name__)
>>>
>>> #set up the sqlalchemy objects
>>> logger.debug("Creating database engine, base, and session.")
>>> dbEngine =
>>> create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
>>> print "connected"
>>> Session = sessionmaker(bind = dbEngine) #note that's a capital s on
>>> Session
>>> session = Session() #lowercase s
>>> metadata = MetaData()
>>> logger.debug("Creating session.")
>>> print "Creating automap base"
>>> base = automap_base()
>>> print "setting up automapping"
>>> #base.prepare(dbEngine, reflect=True)
>>> metadata.reflect(dbEngine, only=['tableName'])
>>>
>>> def getOrderByNumber(orderID):
>>>   orders = base.classes.ORHED
>>>   order = session.query(orders).filter(orders.OAORNO==orderID).first()
>>>   print order.OAORNO
>>> #end def getOrderByNumber
>>>
>>> getOrderByNumber("AA111")
>>>
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: reflection taking a very long time?

2016-02-16 Thread Alex Hall
Upon re-reading some of the docs, I realized that my problem may still
be that initial connection. The create-engine doesn't actually
*connect* to the database, it just sets things up. That means that my
actual connection happens later, when I try to reflect or use automap.
When that happens, the connection starts up and the script hangs. I'm
no closer to solving this, and would love to hear anyone's thoughts,
but at least I know that my thought of blaming reflect/automap is
likely incorrect.

On 2/16/16, Alex Hall <ah...@autodist.com> wrote:
> Hi list,
> Sorry for all the emails. I've determined that my script is actually
> connecting to the 400's test database. At least, a print statement
> placed just after the create_engine call is printing, so I guess we're
> good there.
>
> What I'm running into now is unresponsiveness when I try to reflect or
> automap the database so I can do some basic queries. As soon as I call
> either
> automap.prepare(dbEngine, reflect=True)
> or
> metadata = MetaData()
> metadata.reflect(dbEngine, only=['tableName'])
>
> the script stops, hanging there with no response at all. The same
> thing happened when I was trying to use an inspector on the engine.
> It's an AS400, so taking a few seconds is a very long time for it.
> This is being left to run for minutes and isn't doing anything. What,
> if anything did I do wrong syntactically? Is there a better way to
> check that my engine is actually ready to go, or some other check I
> should be making? The full script, minus anything sensitive, is below.
>
> import globals
> import logging
> from sqlalchemy import *
> from sqlalchemy.engine import reflection
> from sqlalchemy.ext.automap import automap_base
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import sessionmaker
>
> logger = logging.getLogger(globals.appName+"."+__name__)
>
> #set up the sqlalchemy objects
> logger.debug("Creating database engine, base, and session.")
> dbEngine =
> create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
> print "connected"
> Session = sessionmaker(bind = dbEngine) #note that's a capital s on Session
> session = Session() #lowercase s
> metadata = MetaData()
> logger.debug("Creating session.")
> print "Creating automap base"
> base = automap_base()
> print "setting up automapping"
> #base.prepare(dbEngine, reflect=True)
> metadata.reflect(dbEngine, only=['tableName'])
>
> def getOrderByNumber(orderID):
>  orders = base.classes.ORHED
>  order = session.query(orders).filter(orders.OAORNO==orderID).first()
>  print order.OAORNO
> #end def getOrderByNumber
>
> getOrderByNumber("AA111")
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] reflection taking a very long time?

2016-02-16 Thread Alex Hall
Hi list,
Sorry for all the emails. I've determined that my script is actually
connecting to the 400's test database. At least, a print statement
placed just after the create_engine call is printing, so I guess we're
good there.

What I'm running into now is unresponsiveness when I try to reflect or
automap the database so I can do some basic queries. As soon as I call
either
automap.prepare(dbEngine, reflect=True)
or
metadata = MetaData()
metadata.reflect(dbEngine, only=['tableName'])

the script stops, hanging there with no response at all. The same
thing happened when I was trying to use an inspector on the engine.
It's an AS400, so taking a few seconds is a very long time for it.
This is being left to run for minutes and isn't doing anything. What,
if anything did I do wrong syntactically? Is there a better way to
check that my engine is actually ready to go, or some other check I
should be making? The full script, minus anything sensitive, is below.

import globals
import logging
from sqlalchemy import *
from sqlalchemy.engine import reflection
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

logger = logging.getLogger(globals.appName+"."+__name__)

#set up the sqlalchemy objects
logger.debug("Creating database engine, base, and session.")
dbEngine = 
create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
print "connected"
Session = sessionmaker(bind = dbEngine) #note that's a capital s on Session
session = Session() #lowercase s
metadata = MetaData()
logger.debug("Creating session.")
print "Creating automap base"
base = automap_base()
print "setting up automapping"
#base.prepare(dbEngine, reflect=True)
metadata.reflect(dbEngine, only=['tableName'])

def getOrderByNumber(orderID):
 orders = base.classes.ORHED
 order = session.query(orders).filter(orders.OAORNO==orderID).first()
 print order.OAORNO
#end def getOrderByNumber

getOrderByNumber("AA111")

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-16 Thread Alex Hall
You're onto something, I think. When I use a connection string of
"ibm_db_sa://user:pwd@AS400IP:DBAccessPort/DBName"
I get no errors. I don't actually get anything, though; my command
prompt is unresponsive, as though waiting for a script to finish, but
that's all. This seems to be the 400, though, because if I try to
telnet into it with the same address and port, I get the same (lack
of) response. This is progress, at least. Thanks for the help, and
hopefully I'll have it from here! Here's to my next question being
about doing things once I'm IN the database, rather than still
knocking on its door.

On 2/16/16, Michal Petrucha <michal.petru...@konk.org> wrote:
> On Tue, Feb 16, 2016 at 10:27:40AM -0500, Alex Hall wrote:
>> I have pyodbc 3.0.10, ibm_db_sa 0.3.2, and ibm_db 2.0.6. I'm also
>> talking to people on the ibm_db list, and they suggested I re-install
>> ibm_db and ibm_db_sa according to the official tutorial, which uses
>> easy_install. I did so, but there was no change.
>>
>> As to pyodbc, I'm fine with not using it. Thus far, from the two lists
>> I'm on and more research, I thought I *had to* use it to get things
>> working right. Indeed, when I remove "+pyodbc" from my SA connection
>> string, the dbms_ver error goes away. However, it's replaced by an
>> error that the driver can't find the DSN name I give it, even though I
>> can see that DSN right in the IBM ODBC manager on this computer.
>> Someone mentioned 64-bit versus 32-bit; I'm using the 64-bit version
>> of the ODBC manager, and 64-bit Python. I'm not sure how else to tell
>> if the name of the DSN itself is in the correct format.
>>
>> The traceback is very long, but here it is in full:
>>
>> c:\python27\python.exe DBInterface2.py
>> Traceback (most recent call last):
> [...]
>>   File
>> "c:\python27\lib\site-packages\ibm_db_sa-0.3.2-py2.7.egg\ibm_db_sa\base.p
>> y", line 666, in initialize
>> self.dbms_ver = connection.connection.dbms_ver
>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>
> This traceback is still the dbms_ver thing, did you mean to post the
> other one?
>
> In any case, when you're using a URI in the form of
> "ibm_db_sa://user:pass@host/db_name", at least based on the example in
> the IBM docs [1], I'm guessing that you shouldn't use the ODBC DSN you
> have defined, but rather the server hostname or IP address directly.
> In this case it should be using the IBM DBAPI driver directly, without
> going through ODBC.
>
> Cheers,
>
> Michal
>
>
> [1]:
> https://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.swg.im.dbclient.python.doc/doc/t0060891.html
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-16 Thread Alex Hall
I have pyodbc 3.0.10, ibm_db_sa 0.3.2, and ibm_db 2.0.6. I'm also
talking to people on the ibm_db list, and they suggested I re-install
ibm_db and ibm_db_sa according to the official tutorial, which uses
easy_install. I did so, but there was no change.

As to pyodbc, I'm fine with not using it. Thus far, from the two lists
I'm on and more research, I thought I *had to* use it to get things
working right. Indeed, when I remove "+pyodbc" from my SA connection
string, the dbms_ver error goes away. However, it's replaced by an
error that the driver can't find the DSN name I give it, even though I
can see that DSN right in the IBM ODBC manager on this computer.
Someone mentioned 64-bit versus 32-bit; I'm using the 64-bit version
of the ODBC manager, and 64-bit Python. I'm not sure how else to tell
if the name of the DSN itself is in the correct format.

The traceback is very long, but here it is in full:

c:\python27\python.exe DBInterface2.py
Traceback (most recent call last):
  File "DBInterface2.py", line 28, in 
getAllTables()
  File "DBInterface2.py", line 22, in getAllTables
dbInspector = reflection.Inspector.from_engine(dbEngine)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py", line 135
, in from_engine
return Inspector(bind)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py", line 109
, in __init__
bind.connect().close()
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2018, in
connect
return self._connection_cls(self, **kwargs)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 72, in __
init__
if connection is not None else engine.raw_connection()
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2104, in
raw_connection
self.pool.unique_connection, _connection)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2074, in
_wrap_pool_connect
return fn()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 318, in unique_c
onnection
return _ConnectionFairy._checkout(self)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 713, in _checkou
t
fairy = _ConnectionRecord.checkout(pool)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 480, in checkout

rec = pool._do_get()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 1060, in _do_get

self._dec_overflow()
  File "c:\python27\lib\site-packages\sqlalchemy\util\langhelpers.py", line 60,
in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 1057, in _do_get

return self._create_connection()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 323, in _create_
connection
return _ConnectionRecord(self)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 454, in __init__

exec_once(self.connection, self)
  File "c:\python27\lib\site-packages\sqlalchemy\event\attr.py", line 246, in ex
ec_once
self(*args, **kw)
  File "c:\python27\lib\site-packages\sqlalchemy\event\attr.py", line 256, in __
call__
fn(*args, **kw)
  File "c:\python27\lib\site-packages\sqlalchemy\util\langhelpers.py", line 1312
, in go
return once_fn(*arg, **kw)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\strategies.py", line 165
, in first_connect
dialect.initialize(c)
  File "c:\python27\lib\site-packages\sqlalchemy\connectors\pyodbc.py", line 154
, in initialize
super(PyODBCConnector, self).initialize(connection)
  File "c:\python27\lib\site-packages\ibm_db_sa-0.3.2-py2.7.egg\ibm_db_sa\base.p
y", line 666, in initialize
self.dbms_ver = connection.connection.dbms_ver
AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'


On 2/15/16, Simon King <si...@simonking.org.uk> wrote:
> What does the traceback say? That exact line would trigger an error much
> like the one you are seeing, if the object in “connection.connection” is a
> pyodbc.Connection and doesn’t have a “dbms_ver” attribute.
>
> Note that there are at least 3 packages that could be involved here:
>
> pyodbc (https://pypi.python.org/pypi/pyodbc)
>
> ibm_db (https://pypi.python.org/pypi/ibm_db/)
>
> ibm_db_sa (https://pypi.python.org/pypi/ibm_db_sa)
>
> What versions do you have of each of them? Note that
> https://github.com/ibmdb/python-ibmdbsa/tree/master/ibm_db_sa says that
> pyodbc support is experimental.
>
> Simon
>
>> On 15 Feb 2016, at 21:07, Alex Hall <ah...@autodist.com> wrote:
>>
>> I just downloaded a fresh copy of 0.3.2, just in case I had somehow
>> gotten an old version from Pip. I looked in base.py, and found:
>>
&

Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-15 Thread Alex Hall
I just downloaded a fresh copy of 0.3.2, just in case I had somehow
gotten an old version from Pip. I looked in base.py, and found:

def initialize(self, connection):
   super(DB2Dialect, self).initialize(connection)
self.dbms_ver = connection.connection.dbms_ver

While I'm not sure what I can do about it, it looks like this dbms_ver
property is definitely in the latest ibm_db_sa version. Am I getting
this from the wrong place, or confusing this with a different package
somehow? I *must* be missing something obvious.

On 2/15/16, Alex Hall <ah...@autodist.com> wrote:
> An interesting development. I noticed that in site-packages\ibm_db_sa
> was pyodbc.py. Thinking that might be an older version, I renamed it,
> trying to force the import to use my installed version instead. It now
> says "cannot import name pyodbc". I thought Python searched the
> current directory, then the site-packages one, for modules? If so, and
> if I can import pyodbc with no errors in the shell, why would
> ibm_db_sa fail to import? This may be the problem--it was using an
> older version of pyodbc and can't find the newer one for some reason.
> Any ideas, or am I completely off track with this?
>
> On 2/15/16, Alex Hall <ah...@autodist.com> wrote:
>> Thanks guys. I've checked the version I'm using, and it reports that
>> ibm_db_sa.__version__ is '0.3.2'. I have both ibm_db_sa and ibm_db
>> installed. Should I remove ibm_db and rely only on ibm_db_sa instead?
>> Is the former package causing a conflict somehow?
>>
>> On 2/15/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
>>> Try to use ibm_db_sa 0.3.2 instead, apparently you are using the
>>> previous
>>> version. dbms_ver is a feature specific of native ibm_db version of
>>> which
>>> not available in pyodbc.
>>>
>>> https://pypi.python.org/pypi/ibm_db_sa/0.3.2
>>>
>>>
>>> Salam,
>>>
>>> -Jaimy.
>>>
>>>
>>> On Feb 12, 2016 22:05, "Alex Hall" <ah...@autodist.com> wrote:
>>>
>>>> Hello list,
>>>> I've configured a DSN to a test version of my work's AS400 and I seem
>>>> to be able to connect just fine (Yes!) I'm now running into a problem
>>>> when I try to ask for a list of all tables. The line is:
>>>>
>>>>  dbInspector = inspect(dbEngine)
>>>>
>>>> The traceback is very long, and I can paste it if you want, but it
>>>> ends with this:
>>>>
>>>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>>>>
>>>> I'm unable to find anything about this online, so thought I'd check
>>>> with this list. Here's my connection:
>>>>
>>>> dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")
>>>>
>>>> If anyone knows what is causing this, I'd appreciate your thoughts.
>>>> I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
>>>> have all the latest versions of everything. I'm on Windows 7x64,
>>>> Python 2.7 (latest).
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups
>>>> "sqlalchemy" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an
>>>> email to sqlalchemy+unsubscr...@googlegroups.com.
>>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>>>> Visit this group at https://groups.google.com/group/sqlalchemy.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups
>>> "sqlalchemy" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an
>>> email to sqlalchemy+unsubscr...@googlegroups.com.
>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/sqlalchemy.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-15 Thread Alex Hall
An interesting development. I noticed that in site-packages\ibm_db_sa
was pyodbc.py. Thinking that might be an older version, I renamed it,
trying to force the import to use my installed version instead. It now
says "cannot import name pyodbc". I thought Python searched the
current directory, then the site-packages one, for modules? If so, and
if I can import pyodbc with no errors in the shell, why would
ibm_db_sa fail to import? This may be the problem--it was using an
older version of pyodbc and can't find the newer one for some reason.
Any ideas, or am I completely off track with this?

On 2/15/16, Alex Hall <ah...@autodist.com> wrote:
> Thanks guys. I've checked the version I'm using, and it reports that
> ibm_db_sa.__version__ is '0.3.2'. I have both ibm_db_sa and ibm_db
> installed. Should I remove ibm_db and rely only on ibm_db_sa instead?
> Is the former package causing a conflict somehow?
>
> On 2/15/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
>> Try to use ibm_db_sa 0.3.2 instead, apparently you are using the previous
>> version. dbms_ver is a feature specific of native ibm_db version of which
>> not available in pyodbc.
>>
>> https://pypi.python.org/pypi/ibm_db_sa/0.3.2
>>
>>
>> Salam,
>>
>> -Jaimy.
>>
>>
>> On Feb 12, 2016 22:05, "Alex Hall" <ah...@autodist.com> wrote:
>>
>>> Hello list,
>>> I've configured a DSN to a test version of my work's AS400 and I seem
>>> to be able to connect just fine (Yes!) I'm now running into a problem
>>> when I try to ask for a list of all tables. The line is:
>>>
>>>  dbInspector = inspect(dbEngine)
>>>
>>> The traceback is very long, and I can paste it if you want, but it
>>> ends with this:
>>>
>>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>>>
>>> I'm unable to find anything about this online, so thought I'd check
>>> with this list. Here's my connection:
>>>
>>> dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")
>>>
>>> If anyone knows what is causing this, I'd appreciate your thoughts.
>>> I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
>>> have all the latest versions of everything. I'm on Windows 7x64,
>>> Python 2.7 (latest).
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups
>>> "sqlalchemy" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an
>>> email to sqlalchemy+unsubscr...@googlegroups.com.
>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/sqlalchemy.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-15 Thread Alex Hall
Thanks guys. I've checked the version I'm using, and it reports that
ibm_db_sa.__version__ is '0.3.2'. I have both ibm_db_sa and ibm_db
installed. Should I remove ibm_db and rely only on ibm_db_sa instead?
Is the former package causing a conflict somehow?

On 2/15/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
> Try to use ibm_db_sa 0.3.2 instead, apparently you are using the previous
> version. dbms_ver is a feature specific of native ibm_db version of which
> not available in pyodbc.
>
> https://pypi.python.org/pypi/ibm_db_sa/0.3.2
>
>
> Salam,
>
> -Jaimy.
>
>
> On Feb 12, 2016 22:05, "Alex Hall" <ah...@autodist.com> wrote:
>
>> Hello list,
>> I've configured a DSN to a test version of my work's AS400 and I seem
>> to be able to connect just fine (Yes!) I'm now running into a problem
>> when I try to ask for a list of all tables. The line is:
>>
>>  dbInspector = inspect(dbEngine)
>>
>> The traceback is very long, and I can paste it if you want, but it
>> ends with this:
>>
>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>>
>> I'm unable to find anything about this online, so thought I'd check
>> with this list. Here's my connection:
>>
>> dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")
>>
>> If anyone knows what is causing this, I'd appreciate your thoughts.
>> I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
>> have all the latest versions of everything. I'm on Windows 7x64,
>> Python 2.7 (latest).
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-12 Thread Alex Hall
Hello list,
I've configured a DSN to a test version of my work's AS400 and I seem
to be able to connect just fine (Yes!) I'm now running into a problem
when I try to ask for a list of all tables. The line is:

 dbInspector = inspect(dbEngine)

The traceback is very long, and I can paste it if you want, but it
ends with this:

AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'

I'm unable to find anything about this online, so thought I'd check
with this list. Here's my connection:

dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")

If anyone knows what is causing this, I'd appreciate your thoughts.
I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
have all the latest versions of everything. I'm on Windows 7x64,
Python 2.7 (latest).

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Connecting to AS400 with SQLAlchemy fails

2016-02-12 Thread Alex Hall
Thanks so much for your reply--this really helps! I asked the people
at work, and was told that my machine does, in fact, have some sort of
IBM manager installed. (Can you tell I'm new to this technology and
this job?) Using it, I was able to create a DSN to the test database
and, it seems, connect. I'm getting an error when I call

 dbInspector = inspect(dbEngine)

but at least I'm getting that far. I'll ask about the error in a
separate thread, since more people are likely to have run across that
than seem to have experience with the 400 and IBM's wrapper.

On 2/12/16, Michal Petrucha <michal.petru...@konk.org> wrote:
> On Thu, Feb 11, 2016 at 01:16:03PM -0500, Alex Hall wrote:
>> I've done more research on this topic. There's a lot out there about
>> using MSSQL with SA, but next to nothing about using ibm_db_sa or
>> specifying drivers.
>>
>> I have pyodbc installed. I downloaded IBM's ODBC zip file, and I've
>> put db2odbc64.dll in my project folder, but don't know how to point SA
>> or pyodbc to it. I've tried several versions of
>> "?driver="db2odbc64.dll"" appended to my connection string, but I keep
>> getting an error: "data source not found and no default driver
>> specified". It doesn't even time out anymore, it just errors out
>> immediately. I've also tried "ibm_db_sa+pyodbc://" to start the
>> string, but that fails too.
>>
>> This *must* be a simple thing, but I can't work out what to do, and
>> Google is failing me. If anyone has any ideas, I'd greatly appreciate
>> hearing them. Thanks, and sorry to keep bugging the list about this. I
>> just have no other options at the moment and I need to get this
>> working soon.
>
> Hi Alex,
>
> Unfortunately, I can't offer you any specific help with IBM DB, but
> judging by the number of replies, it seems nobody on this list can, so
> I only have some stab-in-the-dark suggestions.
>
> In my experience with enterprise software, *nothing* is ever a simple
> thing, not even seemingly trivial operations, such as connecting to a
> database.
>
> You can try using either pyodbc, or the ibm_db driver – in both cases,
> those are just the Python DBAPI drivers which take in textual SQL
> statements, send them to the database in the low-level network
> protocol, and present the results as dumb Python objects. SQLAlchemy
> is a layer on top of them. That means, the first step would be to get
> your Python runtime to open a raw pyodbc, or ibm_db connection to the
> server, and be able to execute raw SQL statements there. Only after
> you confirm this works you can move on to getting SQLAlchemy to work
> with the DBAPI driver.
>
>
> In my understanding, pyodbc is a wrapper around the library unixodbc.
> I'm not sure how it's implemented on Windows – whether it's a port of
> unixodbc, or it uses a different ODBC implementation there. Whatever
> the case, though, on Linux with unixodbc, when I wanted to connect to
> MS SQL, I had to register a low-level driver with the unixodbc
> library. I had to edit a system-wide configuration file
> (/etc/unixODBC/odbcinst.ini), and create a new driver definition in
> there to make unixodbc recognize the FreeTDS driver I'm using as the
> low-level protocol implementation.
>
> I have no idea what low-level ODBC driver is required to connect to
> IBM DB, I'm afraid you'll have to figure that out on your own. The
> official IBM docs at
> https://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/c0007944.html?cp=SSEPGG_9.7.0%2F4-0-4
> seem to imply that IBM provides their own low-level ODBC driver which
> you'll need to have in place in order to be able to connect to the
> server using ODBC.
>
> In any case, I would expect that the ODBC machinery would expect to
> have the db2odbc64.dll registered somehow with a symbolic name in some
> configuration file, registry, or whatever, and that would be the
> string you're expected to pass as the driver name in the ODBC
> connection string.
>
> Actually, I think with ODBC, you're expected to define all database
> servers in a system-wide configuration file or some such, give each
> one of them a nickname (“DSN”), and just use that to connect to the
> database.
>
>
> The other option is to use the ibm_db Python DBAPI driver. I expect
> you have already seen the official docs:
> https://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.swg.im.dbclient.python.doc/doc/c0054366.html
> Have you tried following the set-up steps in that section there? Try
> to first get it into a state where you can connect to the database
> with ``ibm_db.connect()``, and successfully execute SQL statements
> from the Python shell

Re: [sqlalchemy] Connecting to AS400 with SQLAlchemy fails

2016-02-11 Thread Alex Hall
I think I'm confused. Isn't Pyodbc an alternative to SQLAlchemy? If
not, how would the two work together? I just looked through the
'Getting Started' and 'API' docs for Pyodbc, and I don't see any
examples. I found some samples online of people using the two
together, but I don't quite follow how the process works. Thanks.

On 2/10/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
> Connecting to AS400 from native ibm_db_dbi driver would need db2 connect
> which is a separated product from IBM. Use the ibm_db_sa pyodbc driver
> instead, or jdbc (jython) if you don't have db2 connect installed on your
> machine.
>
> Salam,
>
> -Jaimy
> On Feb 11, 2016 01:50, "Alex Hall" <ah...@autodist.com> wrote:
>
>> Hello list,
>> I sent this to the ibm_db list yesterday, but no one has responded
>> yet. Since it's as much ibm_db as SA, I thought I'd try here as well
>> in case any of you have used an AS400 before. I have ibm_db,
>> ibm_db_sa, the latest sqlalchemy, and Python 2.7 (latest) installed. I
>> can talk to SQLite with no trouble, it's talking to this 400 that
>> won't work. Anyway...
>>
>> I'm finally ready to hook my app to the 400 instead of the local
>> SQLite database I've been using for testing. Here's my simple script:
>>
>> import globals
>> import logging
>> from sqlalchemy import *
>> from sqlalchemy.ext.declarative import declarative_base
>> from sqlalchemy.orm import sessionmaker
>>
>> #set up the sqlalchemy objects
>> dbEngine = create_engine('ibm_db_sa://
>> username:passw...@mysite.com:8471/database')
>> Session = sessionmaker(bind = dbEngine) #note that's a capital s on
>> Session
>> session = Session() #lowercase s
>> base = declarative_base()
>>
>> def getAllTables():
>>  dbInspector = inspect(dbEngine)
>>  for table in dbInspector.get_table_names():
>>   print table
>>
>> getAllTables()
>>
>> When I run that, it waits thirty seconds or so, then tells me there
>> was an error. I'll paste the entire traceback below. Sorry in
>> advance--it's pretty long.
>>
>> Microsoft Windows [Version 6.1.7601]
>> Copyright (c) 2009 Microsoft Corporation.  All rights reserved.
>>
>> C:\Users\admin\Dropbox\Autodist\jobs>c:\python27\python.exe
>> DBInterface2.py
>> Traceback (most recent call last):
>>   File "DBInterface2.py", line 24, in 
>> getAllTables()
>>   File "DBInterface2.py", line 18, in getAllTables
>> dbInspector = inspect(dbEngine)
>>   File "c:\python27\lib\site-packages\sqlalchemy\inspection.py", line 63,
>> in ins
>> pect
>> ret = reg(subject)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>> line 139
>> , in _insp
>> return Inspector.from_engine(bind)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>> line 135
>> , in from_engine
>> return Inspector(bind)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>> line 109
>> , in __init__
>> bind.connect().close()
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2018, in
>> connect
>> return self._connection_cls(self, **kwargs)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 72,
>> in __
>> init__
>> if connection is not None else engine.raw_connection()
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2104, in
>> raw_connection
>> self.pool.unique_connection, _connection)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2078, in
>> _wrap_pool_connect
>> e, dialect, self)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 1405, in
>> _handle_dbapi_exception_noconnection
>> exc_info
>>   File "c:\python27\lib\site-packages\sqlalchemy\util\compat.py", line
>> 200, in r
>> aise_from_cause
>> reraise(type(exception), exception, tb=exc_tb)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2074, in
>> _wrap_pool_connect
>> return fn()
>>   File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 318, in
>> unique_c
>> onnection
>> return _ConnectionFairy._checkout(self)
>>   File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 713, in
>> _checkou
>> 

  1   2   3   4   >