[sqlalchemy] some gotchas in SQLAlchemy 2.0 and how to fix them

2023-02-23 Thread Victor Olex
Hi guys,

I came across a bunch of issues in relatively simple code when I upgraded 
to SQLA 2.0. Below are the issues I saw and my solutions, but I very much 
welcome any corrections. I thought this might come in handy for some. As a 
side note, not since the move from roughly 0.4 to 0.5 or 0.6 have I 
experienced SQLA working so differently.

TypeError: MetaData.__init__() got an unexpected keyword argument 'bind'
meta = MetaData(bind=e); meta.reflect(...) -> meta = MetaData(); 
meta.reflect(bind=e, ...)
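A minimal sketch of the 2.0-style reflection (an in-memory SQLite engine stands in for the real database here):

```python
from sqlalchemy import create_engine, MetaData, text

e = create_engine("sqlite://")
with e.begin() as conn:
    conn.execute(text("CREATE TABLE t (id INTEGER PRIMARY KEY)"))

meta = MetaData()     # 2.0: no bind= argument here
meta.reflect(bind=e)  # the engine/connection goes to reflect() instead
print(sorted(meta.tables))  # ['t']
```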

TypeError: Connection.execute() got an unexpected keyword argument
connection.execute(q, par1=v1, par2=v2) -> connection.execute(q, 
dict(par1=v1, par2=v2))
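The same fix as a runnable sketch (SQLite stands in for the real backend):

```python
from sqlalchemy import create_engine, text

e = create_engine("sqlite://")
with e.connect() as conn:
    # 2.0 style: parameters are passed as a single dict (or a list of
    # dicts for executemany), never as keyword arguments to execute()
    total = conn.execute(
        text("SELECT :a + :b AS total"),
        {"a": 1, "b": 2},
    ).scalar_one()

print(total)  # 3
```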

sqlalchemy.exc.InvalidRequestError: This connection has already initialized 
a SQLAlchemy Transaction() object via begin() or autobegin; can't call 
begin() here unless rollback() or commit() is called first.
conn.execute() automatically starts a transaction (autobegin). If you then 
use the context manager with conn.begin() without first calling commit() or 
rollback(), you will get this error. Probably best to always use the 
context manager? I would have preferred an option not to throw when the 
connection is already in a transaction, i.e. conn.begin(existing_ok=True).
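A minimal sketch of the safe pattern, assuming SQLite as a stand-in backend: begin() is entered before any execute(), so no autobegun transaction can conflict with it.

```python
from sqlalchemy import create_engine, text

e = create_engine("sqlite://")

# begin() before any execute() means no autobegun transaction exists
# yet, so the InvalidRequestError described above cannot occur
with e.connect() as conn:
    with conn.begin():
        conn.execute(text("CREATE TABLE t (id INTEGER)"))
        conn.execute(text("INSERT INTO t VALUES (1)"))

with e.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM t")).scalar_one()

print(count)  # 1
```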

https://stackoverflow.com/questions/70067023/pandas-and-sqlalchemy-df-to-sql-with-sqlalchemy-2-0-future-true-throws-an-err
TypeError: tuple indices must be integers or slices, not str
cur.one()['DocumentID'] -> cur.one().DocumentID
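In 2.0 rows behave like named tuples; string-key access still exists, but on the row's `._mapping` attribute. A small sketch:

```python
from sqlalchemy import create_engine, text

e = create_engine("sqlite://")
with e.connect() as conn:
    row = conn.execute(text("SELECT 42 AS DocumentID")).one()

print(row.DocumentID)              # 42  (attribute access)
print(row._mapping["DocumentID"])  # 42  (string keys via ._mapping)
```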

An INSERT statement into a varchar column with a bound parameter of type 
float resulted in different formatting: previously '0.1', currently 
'0.11'. PyODBC, MSSQL, fast_executemany=False.
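One way to sidestep driver-dependent float formatting is to format the value yourself and bind it as a string. A sketch (SQLite in place of MSSQL/PyODBC; the `.1f` precision is an illustrative assumption):

```python
from sqlalchemy import String, bindparam, create_engine, text

e = create_engine("sqlite://")
stmt = text("SELECT :v AS v").bindparams(bindparam("v", type_=String))
with e.connect() as conn:
    # format the float explicitly so the stored text does not depend
    # on how the driver renders a float into a VARCHAR
    stored = conn.execute(stmt, {"v": format(0.1, ".1f")}).scalar_one()

print(stored)  # 0.1
```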

AttributeError: 'Select' object has no attribute 'execute'
query.execute() does not work, must use conn.execute(query)
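A short sketch of the replacement:

```python
from sqlalchemy import create_engine, literal, select

e = create_engine("sqlite://")
query = select(literal(1).label("x"))
with e.connect() as conn:
    # statements no longer execute themselves in 2.0; hand them to a
    # Connection (or Session) instead
    x = conn.execute(query).scalar_one()

print(x)  # 1
```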

Kind regards,

V.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/430d3237-e718-444c-beaf-8385bd5d0aean%40googlegroups.com.


Re: [sqlalchemy] bound parameter to NCHAR column in Oracle

2019-10-12 Thread Victor Olex
That's for CHAR not NCHAR. I will open a bug report.



Re: [sqlalchemy] bound parameter to NCHAR column in Oracle

2019-10-11 Thread Victor Olex
A bit of advocacy from my side on cx_Oracle: 
https://github.com/oracle/python-cx_Oracle/issues/365

Mike, there's something you might want to look at. We have this monkey 
patch on SQLAlchemy in our recent code, which was necessary to get the 
correct behavior for NCHAR columns.

from sqlalchemy.dialects.oracle import cx_oracle
from sqlalchemy.sql import sqltypes

class _OracleNChar(sqltypes.NCHAR):
    def get_dbapi_type(self, dbapi):
        return dbapi.FIXED_NCHAR

cx_oracle._OracleNChar = _OracleNChar
cx_oracle.dialect.colspecs[sqltypes.NCHAR] = _OracleNChar


On Thursday, October 3, 2019 at 12:26:40 PM UTC-4, Mike Bayer wrote:
>
> With cx_Oracle, you really should likely be using typing for everything as 
> cx_Oracle is pretty sensitive as well to the cursor.setinputsizes() 
> settings, which SQLAlchemy will do for you if you send in typed bound 
> parameters; additionally, you can set typing information for result columns 
> also (use text().columns())  which SQLAlchemy uses in order to set up 
> cursor.outputtypehandler.   Both of these are cx_Oracle things that are 
> unfortunately unique to this DBAPI and they are hugely important; even the 
> django ORM has to use outputtypehandler.   
>
> On Thu, Oct 3, 2019, at 12:20 PM, Mike Bayer wrote:
>
> hey there -
>
> you should apply typing behavior which should be safe to apply to any CHAR 
> like this:
>
> class PaddedChar(TypeDecorator):
>     impl = NCHAR
>
>     def process_bind_param(self, value, dialect):
>         if value is not None:
>             value = value + (" " * (self.impl.length - len(value)))
>         return value
>
> result = conn.execute(
>     text("select * from nchartable where id = :id").bindparams(
>         bindparam("id", type_=PaddedChar(4))
>     ),
>     {"id": "1"},
> ).fetchall()
>
>
>
>
> On Thu, Oct 3, 2019, at 4:47 AM, mdob wrote:
>
> Hi everyone, 
>
> There's IMO unusual behavior in Oracle when using bound parameters on 
> NCHAR column. 
>
> from sqlalchemy import create_engine
> from sqlalchemy.sql import text
>
>
> e = create_engine('oracle://chinook:p4ssw0rd@localhost/xe')
>
>
> result = e.execute(
>     text("select * from nchartable where id = '1'"),
> ).fetchall()
>
> print 'hardcoded:', list(result)
>
> result = e.execute(
>     text('select * from nchartable where id = :id'),
>     {'id': '1'}
> ).fetchall()
>
> print 'trimmed:', list(result)
>
> result = e.execute(
>     text('select * from nchartable where id = :id'),
>     {'id': '1   '}  # padded with spaces
> ).fetchall()
>
> print 'padded:', list(result)
>
>
> output
> hardcoded: [(u'1   ',)]
> trimmed: []
> padded: [(u'1   ',)]
>
> When the value is hardcoded or the bound parameter is padded, the statement 
> works as expected, but not when the value is trimmed, although the value in 
> the WHERE clause is the same as in the hardcoded case.
>
> As explained on cxOracle mailing list 
> https://sourceforge.net/p/cx-oracle/mailman/message/36775002/ it's a 
> deliberate decision to bind str values to VARCHAR2 and unicode values to 
> NVARCHAR2. Unfortunately that doesn't work correctly for filtering on CHAR 
> and NCHAR. 
>
> I know it's more of a cx_Oracle matter, but how do you deal with this in 
> SQLAlchemy? Especially when working with textual SQL statements, which may 
> be complex, e.g. use stored procedures. 
>
> Thanks for any clues, 
> Michal
>
>


[sqlalchemy] Re: bound parameter to NCHAR column in Oracle

2019-10-03 Thread Victor Olex
I'd add that other dialects, such as mssql+pyodbc, work as expected, which 
is to ignore trailing whitespace in fixed-width character fields for 
comparison purposes.

On Thursday, October 3, 2019 at 4:47:41 AM UTC-4, mdob wrote:
>
> Hi everyone, 
>
> There's IMO unusual behavior in Oracle when using bound parameters on 
> NCHAR column. 
>
> from sqlalchemy import create_engine
> from sqlalchemy.sql import text
>
>
> e = create_engine('oracle://chinook:p4ssw0rd@localhost/xe')
>
>
> result = e.execute(
>     text("select * from nchartable where id = '1'"),
> ).fetchall()
>
> print 'hardcoded:', list(result)
>
> result = e.execute(
>     text('select * from nchartable where id = :id'),
>     {'id': '1'}
> ).fetchall()
>
> print 'trimmed:', list(result)
>
> result = e.execute(
>     text('select * from nchartable where id = :id'),
>     {'id': '1   '}  # padded with spaces
> ).fetchall()
>
> print 'padded:', list(result)
>
>
> output
> hardcoded: [(u'1   ',)]
> trimmed: []
> padded: [(u'1   ',)]
>
> When the value is hardcoded or the bound parameter is padded, the statement 
> works as expected, but not when the value is trimmed, although the value in 
> the WHERE clause is the same as in the hardcoded case.
>
> As explained on cxOracle mailing list 
> https://sourceforge.net/p/cx-oracle/mailman/message/36775002/ it's a 
> deliberate decision to bind str values to VARCHAR2 and unicode values to 
> NVARCHAR2. Unfortunately that doesn't work correctly for filtering on CHAR 
> and NCHAR. 
>
> I know it's more of a cx_Oracle matter, but how do you deal with this in 
> SQLAlchemy? Especially when working with textual SQL statements, which may 
> be complex, e.g. use stored procedures. 
>
> Thanks for any clues, 
> Michal
>



[sqlalchemy] limit in context of entities not rows

2019-07-04 Thread Victor Olex
Using ORM querying what is the best practice for limiting the output to a 
given number of resulting *entities*?

Consider this model:

from sqlalchemy import Column, DateTime, String, Integer, ForeignKey, func
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
  
Base = declarative_base()
 
class Department(Base):
    __tablename__ = 'department'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __repr__(self):
        return f'Department({self.id}, {self.name})'


class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    department_id = Column(Integer, ForeignKey('department.id'))
    # Use cascade='delete,all' to propagate the deletion of a Department
    # onto its Employees
    department = relationship(
        Department,
        backref=backref('employees', uselist=True, cascade='delete,all'))

engine = create_engine('sqlite://', echo=True)
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()
session.add(Department(id=1, name='IT'))
session.add(Department(id=2, name='Finance'))
session.add(Department(id=3, name='Sales'))
session.add(Employee(id=1, name='Victor', department_id=1))
session.add(Employee(id=2, name='Michal', department_id=1))
session.add(Employee(id=3, name='Kinga', department_id=2))
session.add(Employee(id=4, name='Andy', department_id=3))
session.commit()

Now, let's query for the list of Departments given some criteria on the 
Employee: 
session.query(Department).join(Employee).filter(
    Employee.name.in_(['Andy', 'Kinga', 'Victor', 'Michal'])).all()

As expected we get:

[Department(1, IT), Department(2, Finance), Department(3, Sales)]

Now suppose our intent is to limit the number of results, and we would 
prefer to use a LIMIT clause to do the filtering on the database side:

session.query(Department).join(Employee).filter(
    Employee.name.in_(['Andy', 'Kinga', 'Victor', 'Michal'])).limit(2).all()

This (in my case) resulted in:

[Department(1, IT)]

The reason for this is that the LIMIT gets applied to the resulting *rows* 
from the joined tables, which happen to begin with the two employees from 
IT (this is non-deterministic unless ORDER BY is also used). Since both 
rows represent the same entity, only one instance is returned.

Other approaches (and their shortcomings) are:

   1. Using a DISTINCT clause prior to LIMIT - won't work if any selected 
   field is of a non-comparable type such as IMAGE or BLOB
   2. Using a subquery on the Department with LIMIT - this may filter out 
   departments which would otherwise match

What works is wrapping the entire query in a subquery that selects only the 
DISTINCT Department.id, and using that in an IN clause. It seems quite 
convoluted, though, and some databases might not optimize it well.

sq = session.query(Department.id).join(Employee).filter(
    Employee.name.in_(['Andy', 'Kinga', 'Victor', 'Michal'])
).distinct().limit(2).subquery()
session.query(Department).filter(Department.id.in_(sq)).all()
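For reference, here is the workaround condensed into a self-contained script. This is an adaptation using SQLite in-memory and 2.0-style imports, not the exact code from the post:

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, create_engine,
                        select)
from sqlalchemy.orm import backref, declarative_base, relationship, sessionmaker

Base = declarative_base()

class Department(Base):
    __tablename__ = 'department'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    department_id = Column(Integer, ForeignKey('department.id'))
    department = relationship(Department, backref=backref('employees'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    Department(id=1, name='IT'), Department(id=2, name='Finance'),
    Department(id=3, name='Sales'),
    Employee(id=1, name='Victor', department_id=1),
    Employee(id=2, name='Michal', department_id=1),
    Employee(id=3, name='Kinga', department_id=2),
    Employee(id=4, name='Andy', department_id=3),
])
session.commit()

names = ['Andy', 'Kinga', 'Victor', 'Michal']
# DISTINCT department ids first, then LIMIT, so the limit counts
# entities rather than joined rows
sq = (select(Department.id).join(Employee)
      .where(Employee.name.in_(names))
      .distinct().limit(2).subquery())
depts = session.query(Department).filter(
    Department.id.in_(select(sq.c.id))).all()
print(len(depts))  # 2
```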




Re: [sqlalchemy] binding parameters in quotes

2019-03-22 Thread Victor Olex
It will, however, produce a parameter for certain string literals in a 
valid query. It may be a minute detail, but since you have that regex for 
detecting parameters, it might as well avoid that (unless there is a use 
case).

>>> from sqlalchemy.sql import text
>>> q = text("SELECT * from T WHERE C1 = :param1 and C2 = 'Group :A'")
>>> print(q.compile())
SELECT * from T WHERE C1 = :param1 and C2 = 'Group :A'
>>> print(q.compile().params)
{'param1': None, 'A': None}
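For completeness, text() supports backslash-escaping a colon that should stay literal, which avoids the spurious parameter in the example above:

```python
from sqlalchemy.sql import text

# backslash-escaping the colon keeps it literal, so 'Group :A' no
# longer yields a spurious 'A' parameter
q = text(r"SELECT * from T WHERE C1 = :param1 and C2 = 'Group \:A'")
print(q.compile().params)  # {'param1': None}
```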


On Thursday, March 21, 2019 at 4:11:45 PM UTC-4, Mike Bayer wrote:
>
> On Thu, Mar 21, 2019 at 3:33 PM Victor Olex 
> > wrote: 
> > 
> > Thanks Mike, though the question is valid - why does regex in SQLAlchemy 
> allow for discovering parameter token inside quotes? Have you seen a 
> legitimate case for that? 
>
> the regex in SQLAlchemy is not a SQL parser, it's just putting bound 
> parameters where the user typed them. It does not detect invalid 
> SQL, that's the database's job. 
>
> > 
> > On Wednesday, March 20, 2019 at 9:58:58 AM UTC-4, Mike Bayer wrote: 
> >> 
> >> On Wed, Mar 20, 2019 at 7:59 AM mdob  wrote: 
> >> > 
> >> > Hi, 
> >> > 
> >> > 
> >> > Is it correct behavior that parameter placeholders in quotes e.g. 
> SELECT * FROM Artist WHERE Name LIKE "%:pattern%" are recognized as valid 
> parameters? 
> >> > 
> >> > 
> >> > from sqlalchemy.sql import text 
> >> > from sqlalchemy.dialects import sqlite 
> >> > from sqlalchemy import create_engine 
> >> > engine = create_engine('sqlite:home/mike/Chinook.sqlite', 
> echo=True) 
> >> > 
> >> > 
> >> > s = 'SELECT * FROM Artist WHERE Name LIKE "%:pattern%"' 
> >> > q = text(s) 
> >> > 
> >> > c = q.compile() 
> >> > print c.params 
> >> > 
> >> > {'pattern': None} 
> >> > 
> >> > 
> >> > If parameter is provided 
> >> > engine.execute(q, {'pattern': 'foo'}) 
> >> > 
> >> > engine echo: 
> >> > 
> >> > 2019-03-20 12:44:14,668 INFO sqlalchemy.engine.base.Engine SELECT * 
> FROM Artist WHERE Name LIKE '%?' 
> >> > 2019-03-20 12:44:14,669 INFO sqlalchemy.engine.base.Engine ('foo',) 
> >> > 
> >> > and error is raised 
> >> > 
> >> > 
> >> > ProgrammingError: (sqlite3.ProgrammingError) Incorrect number of 
> bindings supplied. The current statement uses 0, and there are 1 supplied. 
> [SQL: u'SELECT * FROM Artist WHERE Name LIKE "%?%"'] [parameters: ('foo',)] 
> (Background on this error at: http://sqlalche.me/e/f405) 
> >> > 
> >> > 
> >> > When executed without parameters 
> >> > 
> >> > engine.execute(q, {'pattern': 'foo'}) 
> >> > 
> >> > different error is raised 
> >> > 
> >> > 
> >> > StatementError: (sqlalchemy.exc.InvalidRequestError) A value is 
> required for bind parameter 'pattern' [SQL: u'SELECT * FROM Artist WHERE 
> Name LIKE "%?%"'] (Background on this error at: http://sqlalche.me/e/cd3x) 
>
> >> > 
> >> > It feels to me like finding parameters in 
> https://github.com/zzzeek/sqlalchemy/blob/master/lib/sqlalchemy/sql/compiler.py#L895
>  
> should exclude placeholders inside quotes. 
> >> 
> >> This is not how bound parameters work in relational databases.  Bound 
> >> parameters are not Python substitution strings, they are more often 
> >> than not processed by the server, or at least have to behave as though 
> >> they were.  This means you cannot assume a parameter is substituted 
> >> inside of a value as you are doing here.  The correct syntax for what 
> >> you are trying to do is: 
> >> 
> >> s = 'SELECT * FROM Artist WHERE Name LIKE '%' || :pattern || '%' 
> >> 
> >> that is, the concatenation you are doing is explicit in SQL. 
> >> 
> >> 
> >> > 
> >> > Michal 
> >> > 

Re: [sqlalchemy] binding parameters in quotes

2019-03-21 Thread Victor Olex
Thanks Mike, though the question is valid - why does regex in SQLAlchemy 
allow for discovering parameter token inside quotes? Have you seen a 
legitimate case for that?

On Wednesday, March 20, 2019 at 9:58:58 AM UTC-4, Mike Bayer wrote:
>
> On Wed, Mar 20, 2019 at 7:59 AM mdob > 
> wrote: 
> > 
> > Hi, 
> > 
> > 
> > Is it correct behavior that parameter placeholders in quotes e.g. SELECT 
> * FROM Artist WHERE Name LIKE "%:pattern%" are recognized as valid 
> parameters? 
> > 
> > 
> > from sqlalchemy.sql import text 
> > from sqlalchemy.dialects import sqlite 
> > from sqlalchemy import create_engine 
> > engine = create_engine('sqlite:home/mike/Chinook.sqlite', echo=True) 
> > 
> > 
> > s = 'SELECT * FROM Artist WHERE Name LIKE "%:pattern%"' 
> > q = text(s) 
> > 
> > c = q.compile() 
> > print c.params 
> > 
> > {'pattern': None} 
> > 
> > 
> > If parameter is provided 
> > engine.execute(q, {'pattern': 'foo'}) 
> > 
> > engine echo: 
> > 
> > 2019-03-20 12:44:14,668 INFO sqlalchemy.engine.base.Engine SELECT * FROM 
> Artist WHERE Name LIKE '%?' 
> > 2019-03-20 12:44:14,669 INFO sqlalchemy.engine.base.Engine ('foo',) 
> > 
> > and error is raised 
> > 
> > 
> > ProgrammingError: (sqlite3.ProgrammingError) Incorrect number of 
> bindings supplied. The current statement uses 0, and there are 1 supplied. 
> [SQL: u'SELECT * FROM Artist WHERE Name LIKE "%?%"'] [parameters: ('foo',)] 
> (Background on this error at: http://sqlalche.me/e/f405) 
> > 
> > 
> > When executed without parameters 
> > 
> > engine.execute(q, {'pattern': 'foo'}) 
> > 
> > different error is raised 
> > 
> > 
> > StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required 
> for bind parameter 'pattern' [SQL: u'SELECT * FROM Artist WHERE Name LIKE 
> "%?%"'] (Background on this error at: http://sqlalche.me/e/cd3x) 
> > 
> > It feels to me like finding parameters in 
> https://github.com/zzzeek/sqlalchemy/blob/master/lib/sqlalchemy/sql/compiler.py#L895
>  
> should exclude placeholders inside quotes. 
>
> This is not how bound parameters work in relational databases.  Bound 
> parameters are not Python substitution strings, they are more often 
> than not processed by the server, or at least have to behave as though 
> they were.  This means you cannot assume a parameter is substituted 
> inside of a value as you are doing here.  The correct syntax for what 
> you are trying to do is: 
>
> s = 'SELECT * FROM Artist WHERE Name LIKE '%' || :pattern || '%' 
>
> that is, the concatenation you are doing is explicit in SQL. 
>
>
> > 
> > Michal 
> > 
>



Re: [sqlalchemy] reflecting stored procedure names, enums, and the like

2018-03-09 Thread Victor Olex
Reflecting stored procedures as first-class model objects could be very 
useful. I would imagine those objects being Python callables bound to an 
engine or session.
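A hypothetical sketch of that idea; this is not a SQLAlchemy API, and the CALL syntax, class name, and parameter handling are illustrative assumptions:

```python
from sqlalchemy import text

class StoredProcedure:
    """Hypothetical wrapper: a procedure name bound to an engine and
    exposed as a Python callable.  Not a SQLAlchemy API."""

    def __init__(self, engine, name):
        self.engine = engine
        self.name = name

    def _statement(self, params):
        # build CALL proc(:p1, :p2, ...) from the parameter names
        placeholders = ", ".join(f":{k}" for k in params)
        return text(f"CALL {self.name}({placeholders})")

    def __call__(self, **params):
        with self.engine.begin() as conn:
            return conn.execute(self._statement(params), params)

# building the statement does not require a live database
sp = StoredProcedure(engine=None, name="refresh_totals")
print(str(sp._statement({"year": 2023})))  # CALL refresh_totals(:year)
```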

On Tuesday, August 16, 2016 at 10:28:25 AM UTC-4, Mike Bayer wrote:
>
>
>
> On 08/16/2016 03:33 AM, Chris Withers wrote: 
> > Gotcha, would there be any interest in adding StoredProcedure objects? 
>
> I wouldn't rush to doing that.   it's not clear what these objects would 
> do that DDL() doesn't and also I'd want to address triggers at the same 
> time.  See 
> http://alembic.zzzcomputing.com/en/latest/cookbook.html#replaceable-objects 
> for some of what would be needed. 
> > 
> > For the immediate need of "drop everything", your suggestion sounds like 
> > the best way forward, but I wonder if solving the more generic 
> > "reflecting stored procedures" problem might help with other things... 
>
> just the names is much simpler.  inspector.get_stored_procedure_names(), 
> much easier add. 
>
>
> > 
> > Have any tools emerged for comparing two databases' schemas? I guess the 
> > stuff used to auto-generate migrations must largely solve this problem? 
>
> alembic's API can be used for that to a partial extent.  there are also 
> some other rough approaches on pypi some of which have SQLAlchemy in 
> there and some do not. 
>
>
>
> > 
> > Sorry, I'm a bit rambly this morning... 
> > 
> > 
> > On 15/08/2016 14:38, Mike Bayer wrote: 
> >> 
> >> 
> >> We should have reflection for PG ENUM in there already.   SP names and 
> >> such, there's no "stored procedure" object, so if you wanted it to 
> >> look like the thing you see here you'd make a StoredProcedure() class 
> >> that's a subclass of DDLElement, etc.  I don't know that there's a 
> >> hook to add new methods to the Inspector() as of yet though.  If 
> >> you're just looking for a "drop everything" method I'd probably forego 
> >> all that boilerplate and just have "drop_pg_sps", "drop_mysql_sps", 
> >> etc. functions and key them off of engine.name in a dictionary or 
> >> something. 
> >> 
> >> 
> >> 
> >> On 08/15/2016 06:16 AM, Chris Withers wrote: 
> >>> Hi All, 
> >>> 
> >>> What's the best way (preferably database agnostic) to reflect stored 
> >>> procedure names, enums names and other non-table-specific items? 
> >>> 
> >>> I'm trying to improve this function, which is based on Mike's original 
> >>> recipe: 
> >>> 
> >>> 
> https://github.com/Mortar/mortar_rdb/blob/eb99d549be02643d4d670db2ee52b93b0c386fb4/mortar_rdb/__init__.py#L134
>  
> >>> 
> >>> 
> >>> 
> >>> I'd like to expand to to basically delete as much as possible in a 
> >>> database to give a clean starting ground for tests... 
> >>> 
> >>> Chris 
> >>> 
> >> 
> > 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] inheriting from mapped classes

2014-06-26 Thread Victor Olex
I read it, but could you illustrate it with sample code based on the 
classic User/Addresses example?

On Wednesday, June 25, 2014 11:45:12 AM UTC-4, Jonathan Vanasco wrote:


 On Tuesday, June 24, 2014 9:40:02 PM UTC-4, Victor Olex wrote:

 What I am aiming for is to provide users a library of base classes, which 
 are mapped using SQLAlchemy. The classes are then meant to be extended by 
 users with business logic. I am not hell-bent on using inheritance for 
 this, but for now I went with your __abstract__ = True solution, only in a 
 somewhat inverted way. 


 There's another thread from within the past 2 weeks from someone else 
 trying to tackle this problem. 

 I mentioned in that post, and I'll mention again here -- the best method I 
 found was to use a form of the registry pattern -- where child classes 
 inherit from a base class and declare any overrides plus the types of 
 relationships they require or provide. As the classes are initialized, 
 this data is recorded in a registry. After initialization, relationships 
 are mapped onto the classes using the data in the registry.
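A minimal, self-contained sketch of the registry idea described in the quoted reply; all class and attribute names here are my own illustration, not code from the thread:

```python
registry = {}

class RegisteredBase:
    def __init_subclass__(cls, **kw):
        super().__init_subclass__(**kw)
        registry[cls.__name__] = cls  # record each subclass as it is defined

class User(RegisteredBase):
    relates_to = ['Address']  # relationships declared as string names

class Address(RegisteredBase):
    relates_to = []

# after all classes are initialized, resolve string names to classes,
# much as SQLAlchemy resolves relationship('Address') lazily
resolved = {
    name: [registry[target] for target in cls.relates_to]
    for name, cls in registry.items()
}
print(resolved['User'][0].__name__)  # Address
```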






Re: [sqlalchemy] inheriting from mapped classes

2014-06-24 Thread Victor Olex
So, what is the right idiom for building SQLAlchemy persistence into 
classes that need to do more than just that, i.e. have run-time state? I 
was hoping that deriving from SQLAlchemy model classes would be it, but 
that does not seem to be the case. Another option would be to encapsulate a 
model class within the runtime class, but then we would need to wrap 
SQLAlchemy session and query functionality in helper functions.

On Friday, May 30, 2014 2:11:53 PM UTC-4, Michael Bayer wrote:

 yep… here’s the error:

 sqlalchemy.orm.exc.FlushError: Attempting to flush an item of type class 
 '__main__.Thinker' as a member of collection Address.user. Expected an 
 object of type class '__main__.User' or a polymorphic subclass of this 
 type. If class '__main__.Thinker' is a subclass of class 
 '__main__.User', configure mapper Mapper|User|users to load this subtype 
 polymorphically, or set enable_typechecks=False to allow any subtype to be 
 accepted for flush. 


 enable_typechecks=False disables this check:

 user = relationship(User, enable_typechecks=False,
 backref=backref('addresses', order_by=id))

 it just means that later on, when you hit some_address.user, you may get a 
 User back, not a Thinker (or you will, if it hasn't been expired; you 
 can't rely on it being consistent).   If that's OK, then set the flag - it 
 just wants to check that this is what you intend.




 On May 30, 2014, at 1:51 PM, Victor Olex victo...@vtenterprise.com wrote:

 Hello all, long time no see...

 Is it OK to create classes, which inherit from mapped classes, but are not 
 meant to be persited and how to do it as to avoid FlushError on related 
 classes?

 from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import Session, relationship, backref

 Base = declarative_base()

 class User(Base):
     __tablename__ = 'users'
     id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
     name = Column(String(50))
     fullname = Column(String(50))
     password = Column(String(12))

 class Address(Base):
     __tablename__ = 'addresses'
     id = Column(Integer, primary_key=True)
     email_address = Column(String, nullable=False)
     user_id = Column(Integer, ForeignKey('users.id'))
     user = relationship(User, backref=backref('addresses', order_by=id))

 class Thinker(User):
     thought = 'Thoughts are not to be persisted'

 e = create_engine('sqlite:///', echo=True)
 Base.metadata.bind = e
 Base.metadata.create_all()

 t = Thinker(name='Descartes')
 s = Session(bind=e)
 s.add(t)
 s.commit() # no problem
 a = Address(user=t, email_address='...@gmail.com')
 s.commit() # FlushError

 Thanks,

 V.








Re: [sqlalchemy] inheriting from mapped classes

2014-06-24 Thread Victor Olex
What I am aiming for is to provide users a library of base classes, which 
are mapped using SQLAlchemy. The classes are then meant to be extended by 
users with business logic. I am not hell-bent on using inheritance for 
this, but for now I went with your __abstract__ = True solution, only in a 
somewhat inverted way. I had to use @declared_attr decorators to wrap the 
relationships in that class. I also had to move backref'ed relationship 
definitions from the referencing classes to here, because the name of the 
concrete class is not known. 

There are some problems with this approach. First, it implies that only 
some of the classes can be abstract, or we may not be able to construct all 
relationships. Secondly, within the scope of one application there can be 
only one derived class per abstract class, or we will face issues with 
create_all wanting to create the users table again. Any ideas how to 
overcome that? Below is the modified example. 

Thank you very much for helping.

from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.orm import Session, relationship, backref

Base = declarative_base()

class User(Base):
    __abstract__ = True
    __tablename__ = 'users'
    id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
    name = Column(String(50))
    fullname = Column(String(50))
    password = Column(String(12))

    @declared_attr
    def addresses(cls):
        return relationship('Address',
                            backref=backref('user', uselist=False))

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))

class Thinker(User):
    thought = 'Thoughts are not to be persisted'


e = create_engine('sqlite:///', echo=True)
Base.metadata.bind = e
Base.metadata.create_all()

t = Thinker(name='Descartes')
s = Session(bind=e)
s.add(t)
s.commit()  # no problem
a = Address(user=t, email_address='decar...@gmail.com')
s.commit()  # No more FlushError

On Tuesday, June 24, 2014 6:39:03 PM UTC-4, Michael Bayer wrote:

  
 On 6/24/14, 5:44 PM, Victor Olex wrote:
  
 So, what is the right idiom for building SQLAlchemy persistence into 
 classes that need to do more than just that, i.e. have run-time state? I was 
 hoping that deriving from SQLAlchemy model classes would be it, but that does 
 not seem to work. Another option would be to encapsulate a model class within 
 the runtime class, but that way we need to wrap SQLAlchemy session and 
 query functionality in some helper functions.
  
 Typically the class that has whatever methods and state that you want is 
 also mapped directly.  There's no need to have User and Thinker as 
 separate classes.

 if you truly don't want anything to do with persistence visibly present on 
 classes, that's what classical mapping using mapper() and Table was 
 designed for, or alternatively you can make a business-level class as 
 abstract, or as a mixin:

 class MyClass(Base):
     __abstract__ = True

     def my_business_method(self):
         # ...

 class MyMappedClass(MyClass):
     # ...

 depends really on how you need the two roles of business logic and 
 persistence to be separate.  Putting them all together is obviously the 
 most simplistic but that's what most people do, unless you're trying to do 
 something more J2EE-ish.




  
 On Friday, May 30, 2014 2:11:53 PM UTC-4, Michael Bayer wrote: 

 yep… here’s the error: 

  sqlalchemy.orm.exc.FlushError: Attempting to flush an item of type 
 <class '__main__.Thinker'> as a member of collection Address.user. 
 Expected an object of type <class '__main__.User'> or a polymorphic 
 subclass of this type. If <class '__main__.Thinker'> is a subclass of 
 <class '__main__.User'>, configure mapper Mapper|User|users to load this 
 subtype polymorphically, or set enable_typechecks=False to allow any 
 subtype to be accepted for flush. 
  
  
  enable_typechecks=False disables this check:

  user = relationship(User, enable_typechecks=False,
 backref=backref('addresses', order_by=id))
  
  it just means that later on, when you hit some_address.user, you may 
 get a User back, not a Thinker (or you will, if it hasn't been expired; 
 you can't rely on it being consistent).   If that's OK, then set the flag - 
 it just wants to check that this is what you intend.

  
  
  
  On May 30, 2014, at 1:51 PM, Victor Olex victo...@vtenterprise.com 
 wrote:

  Hello all, long time no see... 

  Is it OK to create classes which inherit from mapped classes but are 
 not meant to be persisted, and how to do it so as to avoid FlushError on related 
 classes?

  from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import Session, relationship, backref

  Base = declarative_base()

  class User(Base):
 __tablename__ = 'users'
 id = Column(Integer, Sequence('user_id_seq'), primary_key

[sqlalchemy] inheriting from mapped classes

2014-05-30 Thread Victor Olex
Hello all, long time no see...

Is it OK to create classes which inherit from mapped classes but are not 
meant to be persisted, and how to do it so as to avoid FlushError on related 
classes?

from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship, backref

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
    name = Column(String(50))
    fullname = Column(String(50))
    password = Column(String(12))

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship(User, backref=backref('addresses', order_by=id))

class Thinker(User):
    thought = 'Thoughts are not to be persisted'

e = create_engine('sqlite:///', echo=True)
Base.metadata.bind = e
Base.metadata.create_all()

t = Thinker(name='Descartes')
s = Session(bind=e)
s.add(t)
s.commit()  # no problem
a = Address(user=t, email_address='decar...@gmail.com')
s.commit()  # FlushError

Thanks,

V.



Re: [sqlalchemy] oracle reflect with duplicated tables (schema casing)

2013-07-24 Thread Victor Olex
Hey Mike, thanks for chiming in. 

The popular tool Oracle SQL Developer generally produces DDL with quoted 
identifiers, so in real life you will encounter a lot of situations where 
some tables were created using quoted and some unquoted identifiers as 
people work on maintaining the database. 

Using lowercase in SQLA will generally work unless somebody names a schema 
in mixed case, i.e. Mike, or all uppercase but containing 
non-alphanumeric characters, i.e. MIKE & IKE (a valid name). 

Putting the weird aside, I am still puzzled why the foreign key in 
MIKE.customer in the above example would be seen as coming from mike.address 
and not MIKE.address. Shouldn't the schema name be consistent and thus, in 
this example, the case-sensitive uppercase MIKE?

Thanks.

On Wednesday, July 24, 2013 8:02:51 PM UTC-4, Michael Bayer wrote:


 On Jul 24, 2013, at 7:34 PM, mdob mike.do...@gmail.com javascript: 
 wrote: 

  Hi, 
  
  I got into an interesting issue where I receive duplicated tables if I 
 use capital letters schema in reflect method. 

 you wouldn't want to do that unless the table were actually created using 
 a case-sensitive name, which is pretty unusual in Oracle.  If Oracle's own 
 system views show the names as ALL_UPPERCASE, that's a case-insensitive 
 name.  On the SQLAlchemy side, you should use all lower case names which 
 will be similarly treated as case insensitive.  Otherwise it will see a 
 case-insensitive and a locally case-sensitive name as different, leading to 
 the kinds of issues you're seeing. 

  
  
  Tables were created like: CREATE TABLE "MIKE"."CUSTOMER" ... so they 
 should be case insensitive. 

 I see there are quotes here, but Oracle will still log these as case 
 insensitive (I just tried).   So use all lower case on the SQLAlchemy side. 


  
  What I found in the sqlalchemy code is that the table mike.address is mapped 
 and added to the Base.metadata.tables dictionary when the table MIKE.customer 
 is being mapped. I guess that's because mike.address is a parent table to 
 MIKE.customer. The thing is, it's added lowercase. Next, MIKE.address is added 
 in the normal way. 

 it sees mike.address in two different ways.  One, as the table 
 MIKE.address, because you asked for the schema MIKE, and the other, as 
 the table mike.address, which is what MIKE.customer says it foreign 
 keys out to.   MIKE is not the same as mike, the former is 
 case-sensitive on SQLA's side. 
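
The lowercase convention described above can be observed without an Oracle connection by asking the Oracle dialect's identifier preparer how it would render each name. This is a sketch, assuming a reasonably recent SQLAlchemy; the names are made up:

```python
from sqlalchemy.dialects.oracle.base import OracleDialect

# The identifier preparer decides when a name must be rendered quoted.
prep = OracleDialect().identifier_preparer

# An all-lowercase name is treated as case-insensitive and rendered bare;
# a name containing uppercase is treated as case-sensitive and quoted.
lower_name = prep.quote('mike')
upper_name = prep.quote('MIKE')
print(lower_name, upper_name)
```

So reflecting with schema='MIKE' asks for a case-sensitive schema name, which does not match the case-insensitive mike that the reflected foreign key resolves to.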








[sqlalchemy] Re: DBAPIError

2013-01-18 Thread Victor Olex
Have you ever found the root cause for this? I am experiencing the same 
issue in one of our setups.

On Monday, February 15, 2010 11:47:46 AM UTC-5, fdelia wrote:

 hello,

 I developed an application that uses SQLAlchemy to write the records on 
 the server. I have freetds 0.82.6 on the server. Sometimes I get an error 
 like this:

 DBAPIError: (Error) ('08S01', '[08S01] [FreeTDS][SQL Server]Write to the 
 server failed (20006) (SQLExecDirectW)')

 The error is not reproducible even with many requests, and a moment later 
 the application runs without problems again. The queries I'm executing when I 
 have this problem are very simple.

 Can someone give me some hints about this problem?

 Thanks,
 Federica D'Elia
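
No root cause surfaced in this thread. Intermittent "Write to the server failed" errors are often connections the server has silently dropped; on later SQLAlchemy versions (1.2+) one hedge is pool pre-ping, sketched here against SQLite purely for illustration:

```python
from sqlalchemy import create_engine, text

# pool_pre_ping issues a lightweight "ping" on checkout and transparently
# replaces connections the server has dropped; pool_recycle additionally
# caps a connection's lifetime as a belt-and-braces measure.
engine = create_engine('sqlite://', pool_pre_ping=True, pool_recycle=3600)

with engine.connect() as conn:
    result = conn.execute(text('select 1')).scalar()
print(result)
```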





[sqlalchemy] mapping without key

2012-06-06 Thread Victor Olex
With the understanding that we would lose the ability to properly
track the state of a mapped object, the ability to update or insert in the
ORM, and likely the ability to correctly use relationships as well -
how can one accomplish a mapper which would work on tables (views)
without any key that uniquely identifies records in them?

The rationale for this question is to be able to operate on
these tables in object (ORM) context and to join them in queries with
other normally mapped classes, for reading purposes only. The ideal
solution would perhaps be a different Declarative Base class using a
modified mapper.




[sqlalchemy] Re: mapping without key

2012-06-06 Thread Victor Olex
Thanks. The model that we work with has tables which have no unique
constraints. Keys can be inferred from the data and specified in the ORM
mapping, but there is no guarantee that this will always work because the
data may change. Still, one could argue a case where mapping such a table
to a class has merit, even if far removed from all the benefits of the
SQLAlchemy ORM.
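
Michael's suggestion in the reply quoted below (hand the mapper the columns that act as the candidate key) can be sketched like this; the table, columns, and data are invented, and the style assumes SQLAlchemy 1.4+:

```python
from sqlalchemy import Column, Float, Integer, String, Table, create_engine, insert
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

# A table with no PRIMARY KEY constraint, standing in for a keyless view.
readings = Table(
    'readings', Base.metadata,
    Column('sensor', String(10)),
    Column('ts', Integer),
    Column('value', Float),
)

class Reading(Base):
    __table__ = readings
    # No key exists in the database, so tell the mapper which columns to
    # treat as the candidate key; they need not be declared PRIMARY KEY.
    __mapper_args__ = {'primary_key': [readings.c.sensor, readings.c.ts]}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.execute(insert(readings), [
        {'sensor': 'a', 'ts': 1, 'value': 0.5},
        {'sensor': 'a', 'ts': 2, 'value': 0.7},
    ])
    rows = session.query(Reading).order_by(Reading.ts).all()

pairs = [(r.sensor, r.ts) for r in rows]
print(pairs)
```

If the chosen columns are not actually unique in the data, the identity map will silently fold duplicate rows together, which is the caveat discussed in this thread.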

On Jun 6, 4:55 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 There's two variants to this question, and I can't tell which one you're 
 asking for.

 If the views in question do in fact have candidate keys, that is, columns 
 which uniquely identify a row, you just specify those either to the Table or 
 mapper() as the columns that uniquely identify the row.   They don't have to 
 be considered primary by the database in any formal way.

 If OTOH you have views which truly have duplicate rows and no candidate key 
 of any kind, the ORM won't do that.   As you probably know, the primary key 
 thing is more or less the spine of mapper() and Query, and there's really no 
 way the ORM could be refactored, without a great loss of stability and 
 performance, to make this requirement optional.

 If you're looking for duplicate rows to come back as individual objects, 
 Query() can be handed Table objects to load rows from, so a custom Query 
 subclass that wraps named tuples into objects could possibly approximate this 
 effect.

 On Jun 6, 2012, at 3:01 PM, Victor Olex wrote:







   With the understanding that we would lose the ability to properly
   track the state of a mapped object, the ability to update or insert in the
   ORM, and likely the ability to correctly use relationships as well -
   how can one accomplish a mapper which would work on tables (views)
   without any key that uniquely identifies records in them?

   The rationale for this question is to be able to operate on
   these tables in object (ORM) context and to join them in queries with
   other normally mapped classes, for reading purposes only. The ideal
   solution would perhaps be a different Declarative Base class using a
   modified mapper.





[sqlalchemy] Re: mapping without key

2012-06-06 Thread Victor Olex
To be clear, this is not a feature request. I could use a hint on how to
build a fake mapper like this, even if it is not compatible in certain cases.

On Jun 6, 5:52 pm, Victor Olex victor.o...@vtenterprise.com wrote:
 Thanks. The model that we work with has tables which have no unique
 constraints. Keys can be inferred from the data and specified in the ORM
 mapping, but there is no guarantee that this will always work because the
 data may change. Still, one could argue a case where mapping such a table
 to a class has merit, even if far removed from all the benefits of the
 SQLAlchemy ORM.

 On Jun 6, 4:55 pm, Michael Bayer mike...@zzzcomputing.com wrote:







  There's two variants to this question, and I can't tell which one you're 
  asking for.

  If the views in question do in fact have candidate keys, that is, columns 
  which uniquely identify a row, you just specify those either to the Table 
  or mapper() as the columns that uniquely identify the row.   They don't 
  have to be considered primary by the database in any formal way.

  If OTOH you have views which truly have duplicate rows and no candidate key 
  of any kind, the ORM won't do that.   As you probably know, the primary key 
  thing is more or less the spine of mapper() and Query, and there's really 
  no way the ORM could be refactored, without a great loss of stability and 
  performance, to make this requirement optional.

  If you're looking for duplicate rows to come back as individual objects, 
  Query() can be handed Table objects to load rows from, so a custom Query 
  subclass that wraps named tuples into objects could possibly approximate 
  this effect.

  On Jun 6, 2012, at 3:01 PM, Victor Olex wrote:

    With the understanding that we would lose the ability to properly
    track the state of a mapped object, the ability to update or insert in the
    ORM, and likely the ability to correctly use relationships as well -
    how can one accomplish a mapper which would work on tables (views)
    without any key that uniquely identifies records in them?

    The rationale for this question is to be able to operate on
    these tables in object (ORM) context and to join them in queries with
    other normally mapped classes, for reading purposes only. The ideal
    solution would perhaps be a different Declarative Base class using a
    modified mapper.





[sqlalchemy] Re: override char() handling

2012-02-08 Thread Victor Olex
Yeah, this will keep coming back as long as people work with schemas
which use fixed-length character types (CHAR, NCHAR), which is
probably for another 30 years. May I propose an enhancement to String
(and as a result to CHAR, NCHAR and Unicode) to take a trim=False
keyword and effect a TRIM() function on the server side if set to
True?

class sqlalchemy.types.String(length=None, convert_unicode=False,
assert_unicode=None, unicode_error=None, _warn_on_bytestring=False,
trim=False)

Maybe the type constructor is not the cleanest place from the SQLAlchemy
API perspective, but it would be convenient for users. I am also wondering
if this can be accomplished with a @compiles trick but don't know
how :-p

On Dec 25 2011, 11:58 am, Michael Bayer mike...@zzzcomputing.com
wrote:
 On Dec 24, 2011, at 10:52 PM, alex bodnaru wrote:



  hello friends,

  i'm happily using  sqlalchemy in a tg 2.1 project. with a legacy database, 
  thus
  my queries are in the form of dbsession.execute(sql).

 sql is.. a string ?    select() construct ?

  where should i hook a strip of the data from a char(len) field?

 a strip on the input side ?  output side ?

 if we're talking about a textual statement and removing trailing chars 
 from CHAR in the result:

 class StripChar(TypeDecorator):
     impl = CHAR

     def process_result_value(self, value, dialect):
         if value is not None:
             value = value.rstrip()
         return value

 sql = text(sql, typemap={'char_col_one': StripChar, 'char_col_two': StripChar})
 execute(sql)
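
The same TypeDecorator idea can be exercised end to end against an in-memory database. This is a sketch in SQLAlchemy 1.4+ Core style; the table and values are made up, and SQLite is used only for illustration (it does not pad CHAR itself, so a padded value is inserted explicitly):

```python
from sqlalchemy import (CHAR, Column, Integer, MetaData, Table,
                        create_engine, insert, select)
from sqlalchemy.types import TypeDecorator

class StripChar(TypeDecorator):
    """CHAR that right-strips trailing padding whenever a row is fetched."""
    impl = CHAR
    cache_ok = True

    def process_result_value(self, value, dialect):
        return value.rstrip() if value is not None else None

metadata = MetaData()
codes = Table(
    'codes', metadata,
    Column('id', Integer, primary_key=True),
    Column('code', StripChar(10)),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

with engine.begin() as conn:
    # Explicitly padded to the CHAR(10) width.
    conn.execute(insert(codes).values(code='AB        '))

with engine.connect() as conn:
    value = conn.execute(select(codes.c.code)).scalar()
print(repr(value))
```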




[sqlalchemy] using Events to reduce memory footprint?

2011-11-07 Thread Victor Olex
I am curious whether Events (a new feature as of 0.7) could be used to reduce
the memory footprint in certain situations, or is that better achieved with
other features such as lazy loading.

For example, consider a complex mapped object (containing at least one
related table). Using joinedload_all option, SQLAlchemy conveniently
produces complex object from the query but the entire object resides
in session for the duration of the processing, which could take a lot
of memory.

In situations where complex mapped objects can be output as soon as
they are retrieved from the database one could begin streaming the
output as soon as the containing object gets loaded. The full list of
contained objects also need not be kept in memory -- once a contained
object gets output it could be removed from the list.

I would appreciate your thoughts on this.




[sqlalchemy] Re: using Events to reduce memory footprint?

2011-11-07 Thread Victor Olex
Thanks. I didn't know about yield_per. If I understand correctly,
angles 1 and 3 share the same weakness in that object identities may
be different between chunks. For situations where object state will
not be changed that is not terribly important. Also it seems that in
the first scenario as long as the containing object (User) remains in
session it will retain its correct identity while the collection of
contained objects (addresses) will be inconsistent between chunks.
Again, fair enough for chunked output. Now if I can generalize this to
n-level of relationship traversal we could be onto something good.

On Nov 7, 1:36 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 7, 2011, at 7:14 AM, Victor Olex wrote:



  I am curious if Events (new feature as of 0.7) could be used to reduce
  memory footprint in certain situations or is it better achieved with
  other features such as lazy loading.

  For example, consider a complex mapped object (containing at least one
  related table). Using joinedload_all option, SQLAlchemy conveniently
  produces complex object from the query but the entire object resides
  in session for the duration of the processing, which could take a lot
  of memory.

  In situations where complex mapped objects can be output as soon as
  they are retrieved from the database one could begin streaming the
  output as soon as the containing object gets loaded. The full list of
  contained objects also need not be kept in memory -- once a contained
  object gets output it could be removed from the list.

  I would appreciate your thoughts on this.

 Well there's a few angles to this so far.   The Query can be instructed to 
 yield rows in chunks as they come in using yield_per().   Though when using 
 joinedload() for collections, it's pretty much guaranteed to produce 
 incorrect results:

 Rows 1-3, user / address:

 userid=1,  name='user',  address_id=1, email='a1'
 userid=1,  name='user',  address_id=2, email='a2'
 userid=1,  name='user',  address_id=3, email='a3'

 now suppose it yields.  You get back a User() object, with an addresses 
 collection of three elements.

 Next iteration, there's two more addresses.   So the collection you just got, 
 and potentially operated upon, was wrong.

 userid=1,  name='user',  address_id=4, email='a4'
 userid=1,  name='user',  address_id=5, email='a5'

 So two more User rows - this means you'd get back either the same User object 
 or another one, depending on if the previous one is still hanging around, and 
 a different addresses collection - also blowing away any changes you might 
 have made to .addresses.   So the data consistency here is totally broken.  
 This is why yield_per() says it can't be used safely with joinedload().

 In the subqueryload use case, yield_per() is mostly useless, as all related 
 collections are loaded at once.   If Users 1, 2, and 3 are yielded, the 
 addresses collection will proceed to load all the addresses for all users 
 in the result, so you still spend time loading a record set at least as large 
 as the whole thing before getting the first batch.

 Next angle, use events.    The use case here would be, I don't really care 
 about getting the result, I'd like to just intercept objects as they come in. 
  You can do that using a handful of different events, including 
 append_result, populate_instance, and probably most usefully 
 load(): http://www.sqlalchemy.org/docs/orm/events.html#sqlalchemy.orm.events. 
 If you then turn on yield_per() the results will be chunked and not 
 stored in memory, but you'd still need to pretty much not access any 
 collections (or at least, not rely on them remaining consistent and not 
 mutating them) since they will be inconsistently populated.
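
A minimal illustration of the yield_per() angle, safe here because nothing is eagerly loaded; the model and row count are invented, and the style assumes SQLAlchemy 1.4+:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Item() for _ in range(250)])
    session.commit()

    # Rows are fetched and turned into objects 100 at a time rather than
    # all 250 up front, keeping peak memory bounded by the chunk size.
    seen = 0
    for item in session.query(Item).yield_per(100):
        seen += 1
print(seen)
```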







[sqlalchemy] Re: UnicodeEncodeError on saving to MySQL

2011-09-29 Thread Victor Olex
I am of the opinion that unless you are absolutely certain that the
data will be pure ASCII, you should declare string fields in the model as
the Unicode type rather than String. Also have a look at 
http://farmdev.com/talks/unicode/
because it will help you understand what the dreaded error really
means.
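
A minimal sketch of the advice above, assuming SQLAlchemy 1.4+ and an invented model; with SQLite the round trip just works, while on MySQL the Unicode type needs to be paired with a utf8 connection charset as discussed in the thread:

```python
from sqlalchemy import Column, Integer, Unicode, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Note(Base):
    __tablename__ = 'notes'
    id = Column(Integer, primary_key=True)
    # Unicode rather than String documents the intent and lets the dialect
    # apply its unicode handling on drivers that need it.
    body = Column(Unicode(200))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Note(body='\u4f60\u597d'))  # Chinese for "hello"
    session.commit()
    fetched = session.query(Note).one().body
print(fetched)
```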


On Sep 29, 9:06 am, Benjamin Sims benjamins...@gmail.com wrote:
 File scraper.py, line 77, in run
     session.commit()
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 617, in commit
     self.transaction.commit()
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 293, in commit
     self._prepare_impl()
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 277, in _prepare_impl
     self.session.flush()
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 1493, in flush
     self._flush(objects)
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 1562, in _flush
     flush_context.execute()
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py,
 line 327, in execute
     rec.execute(self)
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py,
 line 471, in execute
     uow
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/mapper.py,
 line 2107, in _save_obj
     execute(statement, params)
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1399, in execute
     params)
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1532, in _execute_clauseelement
     compiled_sql, distilled_params
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1633, in _execute_context
     context)
   File
 /home/benjamin/test/test1/lib/python2.6/site-packages/SQLAlchemy-0.7.2-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py,
 line 325, in do_execute
     cursor.execute(statement, parameters)
   File /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py, line 158, in
 execute
     query = query % db.literal(args)
   File /usr/lib64/python2.6/site-packages/MySQLdb/connections.py, line
 265, in literal
     return self.escape(o, self.encoders)
   File /usr/lib64/python2.6/site-packages/MySQLdb/connections.py, line
 198, in string_literal
     return db.string_literal(obj)
 UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3:
 ordinal not in range(128)

 On 29 September 2011 13:59, Michael Bayer mike...@zzzcomputing.com wrote:

  Can you attach a traceback for that?

  Sent from my iPhone

  On Sep 29, 2011, at 7:45 AM, Benjamin Sims benjamins...@gmail.com wrote:

  I've recently moved from SQLite to MySQL as the underlying database for my
  application. The move is causing me various problems, principally around
  Unicode. I *think* they were not occurring when SQLite was there, so I
  thought I would ask if anybody on the list has had similar problems.

  My current difficulty is that when saving content (Chinese, generally) to
  the database, I get the dreaded:

  *** UnicodeEncodeError: 'ascii' codec can't encode characters in position
  0-4: ordinal not in range(128)

  error and a rollback.

  I have checked that:

  - the MySQL table has a suitable encoding (CHARACTER SET utf8 COLLATE
  utf8_general_ci;)
  - the MySQL connection string has the charset on the end (?charset=utf8)
  - The type for the relevant Column is Unicode (this worked previously with
  just string, but I have changed it anyway just in case)
  - The content is unicode at the point it gets sent to the the database
  (again, this was a plain string previously but seemed to work)

  However, on save I get the above error from SQLAlchemy. What do I need to
  be doing to get the content in the right format to send through?

  Thanks,
  Ben



[sqlalchemy] SQLAchemy architecture in slides

2011-09-22 Thread Victor Olex
It is worth mentioning that Mike has published slides from his
PyGotham talk on SQLAlchemy architecture, which also covered certain
internal algorithms.

http://techspot.zzzeek.org/2011/09/16/sqlalchemy-at-pygotham/

I am looking forward to watching the video, which should help me get a
grasp of the unit of work dependency resolution algorithm.

Thanks,

Victor
http://linkedin.com/in/victorolex
http://twitter.com/in/agilevic




[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-20 Thread Victor Olex
The ODBC on Linux has always been flaky but somehow I always managed
to get it to work well enough. I don't have OSX to try but I could do
some testing on more recent versions of Linux. What distro are you
struggling with? I am not sure how else I can help. Time is short for
most.

On Sep 18, 3:42 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 I've committed the patch that allows unicode binds in 0.91.   I've also added 
 a CAST() to the tablename column when we query the information schema, to 
 work around the nvarchar/ntext issue that for utterly mysterious reasons your 
 setup does not get (I get it on both linux and OSX).

 At this point I can barely run FreeTDS on OSX at all as my 0.82 build is 
 hosed and it no longer builds - I can use the binaries though.   0.91 
 continues to be plagued with MemoryError and completely broken encoding on 
 OSX, and I also get many core dumps on Linux - more tickets have been posted 
 to Pyodbc with much more detail including ODBC and FreeTDS logfiles.   I can 
 get basic unicode and ORM tests running with 0.91 and Linux, though.

 I've wasted many days trying to get all of these libraries to work 
 predictably and I'm out of time.    Overall the status of ODBC on unix is 
 extremely concerning to me as there doesn't seem to be anyone around to help 
 with anything.

 On Sep 12, 2011, at 3:27 PM, Victor Olex wrote:

  I confirm successful build and run time on both 64-bit and 32-bit
  Linux (RHEL5) against SQL Server 2008.

  It has turned out that Pyodbc needed the CPLUS_INCLUDE_PATH set to
  where my version of UnixODBC was prefixed to prior to building. Best
  bet is to remove all prior versions, including distro-packaged ones. If you
  don't, Pyodbc may pick up header sources from there. By the way,
  their setup.py seems to have issues and can leave behind certain files
  despite the clean command. Also mind any leftovers in site-packages.

  Playing with compiler settings for the unixODBC, namely various
  combinations of the CPPFLAGS=-DBUILD_LEGACY_64_BIT_MODE -
  DSIZEOF_LONG_INT=8 allowed me to align the libs to the Pyodbc such
  that the segfault got eliminated but in the end the flags proved
  unnecessary once the above-mentioned include path was properly set.

  On Sep 9, 10:54 pm, Victor Olex victor.o...@vtenterprise.com wrote:
  Pyodbc has changes as late as this month. Misery likes company. When I
  tried to install this same stack on 64 bit architecture I am getting
  Segmentation fault errors for some queries. Easysoft has some info on
  ODBC in 64bit 
  at http://www.easysoft.com/developer/interfaces/odbc/64-bit.html
  There are some variables that can be set wrt to word length when
  building unixODBC. Hopefully I can find the right setup.

  On Sep 9, 1:47 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  2.1.9 from the github link you showed me.   Just to make sure I also just 
  cloned the tip from code.google.com and tried that one too.

  Has pyodbc made adjustments to changes in FreeTDS 0.91 ?

  On Sep 9, 2011, at 1:15 PM, Victor Olex wrote:

  What version of pyodbc?

  On Sep 9, 11:08 am, Michael Bayer mike...@zzzcomputing.com wrote:
  On Sep 9, 2011, at 10:48 AM, Michael Bayer wrote:

  It also makes me less than comfortable unconditionally emitting a u'' 
  for a bound parameter as it appears to cause problems.

  I've also checked my SQL server, all databases including master are 
  configured at SQL Server 2008 compatibility...





[sqlalchemy] different behavior with schema qualified tables in sqlite since 0.7.x

2011-09-16 Thread Victor Olex
Something's changed with sqlite since 0.6. Once a sqlite engine is
created against a physical file, the usual ATTACH workaround for schema
support does not work. It works correctly with sqlite in memory and even
when reattaching from memory to a file.

from sqlalchemy import Column, Sequence, create_engine, Integer
from sqlalchemy.ext.declarative import declarative_base
import os

Base = declarative_base()
metadata = Base.metadata

class A(Base):
    __tablename__ = 'A'
    __table_args__ = {'schema': 'S'}
    id = Column(u'ID', Integer, Sequence('A_PK'), primary_key=True)

e = create_engine('sqlite://', echo=True)
e.execute("attach database 'test.db' as S;")
metadata.bind = e
metadata.create_all()  # all good

e = create_engine('sqlite:///test.db', echo=True)
e.execute("attach database 'test.db' as S;")
metadata.bind = e
metadata.create_all()  # OperationalError: (OperationalError) unknown
                       # database S 'PRAGMA S.table_info(A)' ()
os.remove('test.db')

# the work-around I came up with
e = create_engine('sqlite://', echo=True)
e.execute("attach database 'test.db' as S;")
metadata.bind = e
metadata.create_all()  # works fine again, tables were created in the file

Previously (ver. 0.6), the OperationalError would not occur. Neither
the SQLite nor the pysqlite version has changed.
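For what it's worth, plain sqlite3 shows that an ATTACH belongs to a single
connection rather than to the database file. My guess (not verified against
SQLAlchemy internals) is that this is why a pooled engine can raise "unknown
database S": the PRAGMA may run on a pooled connection that never saw the
ATTACH. The file name below is invented for the demonstration:

```python
import os
import sqlite3

# ATTACH is per-connection state, not a property of the database file.
conn1 = sqlite3.connect(":memory:")
conn1.execute("ATTACH DATABASE 'test_attach_demo.db' AS S")
conn1.execute("CREATE TABLE S.A (ID INTEGER PRIMARY KEY)")
conn1.commit()

# A second, independent connection knows nothing about schema S,
# even though it points at the very file that was attached above.
conn2 = sqlite3.connect("test_attach_demo.db")
err = None
try:
    conn2.execute("PRAGMA S.table_info(A)")
except sqlite3.OperationalError as exc:
    err = exc
print(err)  # an OperationalError like the one in the post

# The table itself is there - it is simply the "main" schema on conn2.
tables = [r[0] for r in conn2.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)

conn1.close()
conn2.close()
os.remove("test_attach_demo.db")
```

If that guess is right, issuing the ATTACH on every new connection (e.g. via
a connect-time hook) rather than once on the engine would be the robust fix.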

--
Victor Olex
http://linkedin.com/in/victorolex
http://twitter.com/agilevic




[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-12 Thread Victor Olex
I confirm successful build and run time on both 64-bit and 32-bit
Linux (RHEL5) against SQL Server 2008.

It has turned out that Pyodbc needed the CPLUS_INCLUDE_PATH set to
where my version of UnixODBC was prefixed to prior to building. Best
bet is to remove all prior versions, including distro-packaged ones. If you
don't, Pyodbc may pick up header sources from there. By the way,
their setup.py seems to have issues and can leave behind certain files
despite the clean command. Also mind any leftovers in site-packages.

Playing with compiler settings for the unixODBC, namely various
combinations of the CPPFLAGS=-DBUILD_LEGACY_64_BIT_MODE -
DSIZEOF_LONG_INT=8 allowed me to align the libs to the Pyodbc such
that the segfault got eliminated but in the end the flags proved
unnecessary once the above-mentioned include path was properly set.

On Sep 9, 10:54 pm, Victor Olex victor.o...@vtenterprise.com wrote:
 Pyodbc has changes as late as this month. Misery likes company. When I
 tried to install this same stack on 64 bit architecture I am getting
 Segmentation fault errors for some queries. Easysoft has some info on
 ODBC in 64-bit at http://www.easysoft.com/developer/interfaces/odbc/64-bit.html
 There are some variables that can be set wrt to word length when
 building unixODBC. Hopefully I can find the right setup.

 On Sep 9, 1:47 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  2.1.9 from the github link you showed me.   Just to make sure I also just 
  cloned the tip from code.google.com and tried that one too.

  Has pyodbc made adjustments to changes in FreeTDS 0.91 ?

  On Sep 9, 2011, at 1:15 PM, Victor Olex wrote:

   What version of pyodbc?

   On Sep 9, 11:08 am, Michael Bayer mike...@zzzcomputing.com wrote:
   On Sep 9, 2011, at 10:48 AM, Michael Bayer wrote:

   It also makes me less than comfortable unconditionally emitting a u'' 
   for a bound parameter as it appears to cause problems.

   I've also checked my SQL server, all databases including master are 
   configured at SQL Server 2008 compatibility...





[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-09 Thread Victor Olex
What version of pyodbc?

On Sep 9, 11:08 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 9, 2011, at 10:48 AM, Michael Bayer wrote:



  It also makes me less than comfortable unconditionally emitting a u'' for a 
  bound parameter as it appears to cause problems.

 I've also checked my SQL server, all databases including master are 
 configured at SQL Server 2008 compatibility...




[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-09 Thread Victor Olex
Pyodbc has changes as late as this month. Misery loves company. When I
tried to install this same stack on a 64-bit architecture I got
Segmentation fault errors for some queries. Easysoft has some info on
ODBC in 64bit at http://www.easysoft.com/developer/interfaces/odbc/64-bit.html
There are some variables that can be set wrt to word length when
building unixODBC. Hopefully I can find the right setup.

On Sep 9, 1:47 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 2.1.9 from the github link you showed me.   Just to make sure I also just 
 cloned the tip from code.google.com and tried that one too.

 Has pyodbc made adjustments to changes in FreeTDS 0.91 ?

 On Sep 9, 2011, at 1:15 PM, Victor Olex wrote:

  What version of pyodbc?

  On Sep 9, 11:08 am, Michael Bayer mike...@zzzcomputing.com wrote:
  On Sep 9, 2011, at 10:48 AM, Michael Bayer wrote:

  It also makes me less than comfortable unconditionally emitting a u'' for 
  a bound parameter as it appears to cause problems.

  I've also checked my SQL server, all databases including master are 
  configured at SQL Server 2008 compatibility...





[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
I never for a moment thought that your change was thoughtless. To the
contrary, I have huge respect for SQLAlchemy. I will try to test the
drop_all and your pyodbc issue with my setup and to report here later
today.

Meanwhile, I can tell you how to build the stack, because it is a bit
tricky given certain package issues. I chose to build unixODBC, FreeTDS
and pyodbc from source into /usr/local.

1. unixODBC

First I removed system packages unixODBC and unixODBC-dev (names may
vary by Linux distro). Then

wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.0.tar.gz
tar zxvf unixODBC-2.3.0.tar.gz
cd unixODBC-2.3.0
./configure
make
sudo make install

As root ensure that /usr/local/etc/odbcinst.ini has the following
entries:

[FreeTDS]
Driver  = /usr/local/lib/libtdsodbc.so

[SQLServer]
Driver  = /usr/local/lib/libtdsodbc.so

The second entry is for aesthetics only - so you can use SQLServer
in your connection strings. The odbc.ini file is not important if you use
fully qualified host names in your connection string, but you may
optionally configure DSNs there.

2. FreeTDS

Also uninstall any system packages that you may have for this, then:

wget http://www.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
tar xvf freetds-stable.tgz
cd freetds-0.91
./configure --with-unixodbc=/usr/local --enable-msdblib --with-tdsver=8.0
make
sudo make install

Remove or otherwise disable /etc/freetds.conf if any. Modify /usr/
local/etc/freetds.conf to read as follows:

[global]
# TDS protocol version
tds version = 8.0
client charset = UTF-8
# This is probably not needed...
text size = 64512

# Typical SQLServer
[server_name]
host = hostname.somedomain
port = 2431
tds version = 8.0

At this point you should be able to login successfully using:

tsql -S server_name -U username -P password

3. pyodbc

The pyodbc package on PyPI is currently broken in that it is missing
the utils and web folders from its sources and does not build. There is
a bug for this (with my comments too) at
http://code.google.com/p/pyodbc/issues/detail?id=192.
There are two ways around it but before you start (and at runtime)
make sure that /usr/local/lib is in LD_LIBRARY_PATH (or add it
permanently system-wide using ldconfig).

export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH

3.1 Quick way

pip install https://github.com/mkleehammer/pyodbc/zipball/2.1.9

This will result in pyodbc-2.1.0-unsupported being installed out of the
2.1.9 code base, which looks ugly and may confuse other packages that
check its version. But it works fine. The reason for this is that the
github code drop does not have a PKG-INFO file.

3.2 Longer way, which I followed

3.2.1 Get official package and git sources

wget http://pyodbc.googlecode.com/files/pyodbc-2.1.9.zip
unzip pyodbc-2.1.9.zip
wget --no-check-certificate -O git-pyodbc-2.1.9.zip \
  https://github.com/mkleehammer/pyodbc/zipball/2.1.9
unzip git-pyodbc-2.1.9.zip

3.2.2 Copy missing utils and web folders to the packaged version

cp -R mkleehammer-pyodbc-e3c95dc/utils/ pyodbc-2.1.9/
cp -R mkleehammer-pyodbc-e3c95dc/web/ pyodbc-2.1.9/

3.2.3 Build and install

python setup.py bdist_egg
cd dist
easy_install pyodbc-2.1.9-py2.6-linux-i686.egg # filename may vary
based on architecture

At run time be sure to have the LD_LIBRARY_PATH and TDSVER=8.0
variables set. The latter is not needed if you put the same into
connection string as I have in the example above.

I hope this helps.

Victor Olex
http://linkedin.com/in/victorolex
http://twitter.com/agilevic




On Sep 7, 11:53 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 The ticket for SQLAlchemy is:

 http://www.sqlalchemy.org/trac/ticket/2273

 For Pyodbc I've opened:

 http://code.google.com/p/pyodbc/issues/detail?id=209
 http://code.google.com/p/pyodbc/issues/detail?id=210

 as you can see, issue 210 is quite serious.    Would be curious what results 
 you get for the script there.

 On Sep 7, 2011, at 11:25 PM, Michael Bayer wrote:

  I can't actually make that string work at all with FreeTDS, but I am on 
  0.82.   If I turn on Python unicodes with FreeTDS 0.82, which until 
  recently was the FreeTDS release for years, everything breaks immediately - 
  the CREATE TABLE statements won't even work, as you can see below just the 
  strings u'A', u'dbo' blow it up:

  sqlalchemy.exc.DBAPIError: (Error) ('HY004', '[HY004] [FreeTDS][SQL 
  Server]Invalid data type (0) (SQLBindParameter)') 'SELECT 
  [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], 
  [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], 
  [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].[ORDINAL_POSITION], 
  [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION], 
  [COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], 
  [COLUMNS_1].[COLLATION_NAME] \nFROM [INFORMATION_SCHEMA].[COLUMNS] AS 
  [COLUMNS_1] \nWHERE [COLUMNS_1].[TABLE_NAME] = ? AND 
  [COLUMNS_1].[TABLE_SCHEMA] = ?' (u'A', u'dbo

[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
Pyodbc issue 209 works fine in my setup. I think the key thing is
matching the SQL Server version with the correct TDS protocol version
and the correct FreeTDS version. Also, with regard to your Mac testing,
check whether you have libiconv installed and that FreeTDS is built with it.
http://www.freetds.org/userguide/config.htm

On Sep 8, 10:32 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 8, 2011, at 9:37 AM, Victor Olex wrote:

  I never for a moment thought that your change was thoughtless. To the
  contrary, I have huge respect for SQLAlchemy. I will try to test the
  drop_all and your pyodbc issue with my setup and to report here later
  today.

 thanks !     Unfortunately I've tested this some more and things are looking 
 very, very bad.    For us to support 0.91, we'd need to figure out how to get 
 all of our table exists functions to work.   If you look 
 at http://www.sqlalchemy.org/trac/ticket/2273, I've now added a patch that 
 detects 0.82 vs. 0.91 and sets the flag, but you can see that we can't send 
 u'' strings when we query INFORMATION_SCHEMA still - literally, the number of 
 characters present in one of the bind parameters changes the behavior.   So 
 there is something very strange and arbitrary (seems basically like it's just 
 making guesses about datatypes) going on with the internals of FreeTDS, and 
 I'm not optimistic about being able to get clear answers from their list.    

 Would you have any resources to evaluate the test cases on that ticket , both 
 are now against pure PyODBC 2.1.9?    Without being able to query information 
 schema, none of our unit tests can run period with 0.91 - I need a reliable 
 way to do so, hopefully without losing support for table names that contain 
 non-ascii characters.    A lot of adjustments to the MSSQL dialect and 
 testing will be needed.    




[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:20:35,608 INFO sqlalchemy.engine.base.Engine (u'B',
u'CMBS')
2011-09-08 12:20:35,612 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:20:35,612 INFO sqlalchemy.engine.base.Engine (u'A',
u'dbo')

In [6]: speed.metadata.create_all()
2011-09-08 12:20:50,391 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:20:50,391 INFO sqlalchemy.engine.base.Engine (u'A',
u'dbo')
2011-09-08 12:20:50,395 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:20:50,395 INFO sqlalchemy.engine.base.Engine (u'B',
u'CMBS')
2011-09-08 12:20:50,397 INFO sqlalchemy.engine.base.Engine
CREATE TABLE [A] (
[ID] INTEGER NOT NULL IDENTITY(1,1),
col1 NTEXT NULL,
col2 VARCHAR(255) NULL,
PRIMARY KEY ([ID])
)


2011-09-08 12:20:50,397 INFO sqlalchemy.engine.base.Engine ()
2011-09-08 12:20:50,487 INFO sqlalchemy.engine.base.Engine COMMIT
2011-09-08 12:20:50,489 INFO sqlalchemy.engine.base.Engine
CREATE TABLE [CMBS].[B] (
[ID] INTEGER NOT NULL IDENTITY(1,1),
col1 NTEXT NULL,
PRIMARY KEY ([ID])
)


2011-09-08 12:20:50,490 INFO sqlalchemy.engine.base.Engine ()
2011-09-08 12:20:50,514 INFO sqlalchemy.engine.base.Engine COMMIT


On Sep 8, 11:51 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 8, 2011, at 11:37 AM, Victor Olex wrote:

  Pyodbc issue 209 works fine in my setup.

 that is very strange ?   There are files missing from the .zip.  If you 
 installed from the zip I don't see how it built for you.  Here's the original 
 issue:

 http://code.google.com/p/pyodbc/issues/detail?id=192

  I think the key thing is
  matching SQL Server version with the correct TDS protocol version and
  correct FreeTDS version.

 I use tds version 8.0 for this particular DSN and that is working fine in 
 production with FreeTDS 0.82, on both linux and OSX platforms.   These issues 
 have all been introduced with FreeTDS 0.91.   Here, I tried 7.2 and got 
 slightly better results, though unicode round trips still fail when Python 
 unicodes are passed.    PyODBC still dies with MemoryError if I attempt to 
 query for a table that already exists.

  Also with regards to your Mac testing, check
  if you have the libiconv installed and that FreeTDS is built with it.
 http://www.freetds.org/userguide/config.htm

 yup that's in my configure:

 checking for iconv... yes
 checking how to link with libiconv... -liconv
 checking for iconv declaration... install-shextern size_t iconv (iconv_t cd, 
 char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft);



  On Sep 8, 10:32 am, Michael Bayer mike...@zzzcomputing.com wrote:
  On Sep 8, 2011, at 9:37 AM, Victor Olex wrote:

  I never for a moment thought that your change was thoughtless. To the
  contrary, I have huge respect for SQLAlchemy. I will try to test the
  drop_all and your pyodbc issue with my setup and to report here later
  today.

  thanks !     Unfortunately I've tested this some more and things are 
  looking very, very bad.    For us to support 0.91, we'd need to figure out 
  how to get all of our table exists functions to work.   If you look 
  at http://www.sqlalchemy.org/trac/ticket/2273, I've now added a patch that 
  detects 0.82 vs. 0.91 and sets the flag, but you can see that we can't 
  send u'' strings when we query INFORMATION_SCHEMA still - literally, the 
  number of characters present in one of the bind parameters changes the 
  behavior.   So there is something very strange and arbitrary (seems

[sqlalchemy] Re: Question on session.expunge.all()

2011-09-08 Thread Victor Olex
Since you are effectively overwriting the table with new file
contents, the fastest approach may well be to truncate the table and
then insert all contents. If you were to just append and update, then
session.merge() is a convenient way to do it, though I am not sure it
is the fastest.
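As a rough sketch of the two options above, here is the append-and-update
path in plain sqlite3 with a made-up product table. SQLAlchemy's
session.merge() does far more bookkeeping than a SQL-level upsert, so treat
this only as an illustration of the effect, not of the mechanism:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (csv_key TEXT PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [("a", 1.0), ("b", 2.0)])

# Append-and-update: the upsert updates the price of existing rows
# and inserts new ones in a single pass over the CSV rows.
csv_rows = [("a", 1.5), ("c", 3.0)]
conn.executemany("INSERT OR REPLACE INTO product VALUES (?, ?)", csv_rows)
conn.commit()

rows = sorted(conn.execute("SELECT csv_key, price FROM product"))
print(rows)  # -> [('a', 1.5), ('b', 2.0), ('c', 3.0)]
```

The truncate-and-reload variant would simply be a DELETE FROM product
followed by a bulk insert of the full CSV contents.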

On Sep 7, 5:53 pm, Vlad K. v...@haronmedia.com wrote:
 Great, thanks!

 .oO V Oo.

 On 09/06/2011 04:48 PM, Michael Bayer wrote:

  On Sep 6, 2011, at 10:40 AM, Vlad K. wrote:

  I have a products database which is daily syncronized with an external 
  source via a csv file. There are several thousand rows in question. The 
  synchronization does two things:

  1. Update only price if changed for existing products
  2. Insert new products if they don't exist with all fields from csv

  But basically, for each row in the csv, after the row is processed (one of 
  the above two things is done), I don't need the object in session anymore. 
  Memory and performance are of course an issue, and I can't find a way to 
  test memory consumption with or without expunge_all() so my questions are:

  1. Do I need to session.expunge_all() after each csv row is processed, or 
  are they automatically garbage collected?
  2. Is there any significant overhead inherent in expunge_all() that I'm 
  not seeing right now?

  Performance-wise, it seems the task is complete in more or less same time 
  with or without expunge_all()
  In modern SQLAlchemy, the Session maintains only weak references to objects 
  that are clean, that is, are persistent in the database and have no 
  pending changes to be flushed.    As all references to them are lost, they 
  are garbage collected by the Python interpreter.    Note that objects are 
  strongly referenced when they are present in the collection or attribute of 
  a parent object, until that parent is also garbage collected.    There is 
  an overhead to process which occurs when the object is dereferenced and 
  removed from the session (weakref callbacks handle the accounting).  But 
  calling expunge_all() probably isn't doing much here as the objects are 
  likely being cleaned out in the same way regardless.

  While I'm at it, I also need to delete rows in the database that do not 
  have corresponding row in the csv file (say linked by csv_key field), the 
  first solution that comes to mind is building a list of keys in the csv 
  file (few thousand keys) and then doing:

  session.query(Product).filter(not_(Product.product_id.in_(csv_keys))).delete()

  I believe there is less overhead in sending such a large (but single!) 
  query to the database and leaving it to determine what to delete by 
  itself, than selecting each row in the database and checking if its 
  csv_key exists in the csv_keys list on the application side and then 
  issuing delete statements for rows that matched the criteria. Am I wrong?
  That's definitely a dramatically faster way to do things, rather than to 
  load each record individually and mark as deleted - it's the primary reason 
  delete() and update() are there.   You'll probably want to send False as 
  the value of synchronize_session to the delete() call so that it doesn't go 
  through the effort of locating local records that were affected (unless you 
  need that feature).
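The single-statement delete discussed above can be sketched in plain sqlite3
(table and column names are invented for the example; in SQLAlchemy it would
be the query(...).delete() call from the quoted advice, ideally with
synchronize_session=False):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (product_id TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO product VALUES (?)", [("a",), ("b",), ("c",)])

csv_keys = ["a", "c"]  # keys still present in the CSV file
placeholders = ",".join("?" for _ in csv_keys)
# One statement deletes every row whose key is absent from the CSV,
# instead of loading each row and deleting it individually.
conn.execute(f"DELETE FROM product WHERE product_id NOT IN ({placeholders})",
             csv_keys)
conn.commit()

kept = sorted(r[0] for r in conn.execute("SELECT product_id FROM product"))
print(kept)  # -> ['a', 'c']
```

With a few thousand keys this stays one round trip to the database, which is
the point being made in the thread.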




[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
 mouton! ')
sqlalchemy.pool.QueuePool: DEBUG: Connection pyodbc.Connection object
at 0xa4490e0 being returned to pool
sqlalchemy.pool.QueuePool: DEBUG: Connection pyodbc.Connection object
at 0xa4490e0 checked out from pool
sqlalchemy.engine.base.Engine: INFO: INSERT INTO unicode_err_table
(sort, plain_varchar_no_coding_error) VALUES (?, ?)
sqlalchemy.engine.base.Engine: INFO: (2, u'Alors vous imaginez ma
surprise, au lever du jour, quand une dr\xf4le de petite voix m\u2019a
r\xe9veill\xe9. Elle disait: \xab S\u2019il vous pla\xeet\u2026
dessine-moi un mouton! \xbb')
sqlalchemy.engine.base.Engine: INFO: COMMIT
sqlalchemy.pool.QueuePool: DEBUG: Connection pyodbc.Connection object
at 0xa4490e0 being returned to pool
sqlalchemy.pool.QueuePool: DEBUG: Connection pyodbc.Connection object
at 0xa4490e0 checked out from pool
sqlalchemy.engine.base.Engine: INFO: SELECT unicode_err_table.sort,
unicode_err_table.plain_varchar_no_coding_error
FROM unicode_err_table ORDER BY unicode_err_table.sort
sqlalchemy.engine.base.Engine: INFO: ()
sqlalchemy.engine.base.Engine: DEBUG: Col ('sort',
'plain_varchar_no_coding_error')
sqlalchemy.engine.base.Engine: DEBUG: Row (1, 'Alors vous imaginez ma
surprise, au lever du jour, quand une drle de petite voix ma rveill.
Elle disait:  Sil vous plat dessine-moi un mouton! ')
sqlalchemy.engine.base.Engine: DEBUG: Row (2, 'Alors vous imaginez ma
surprise, au lever du jour, quand une dr\xf4le de petite voix m?a r
\xe9veill\xe9. Elle disait: \xab S?il vous pla\xeet? dessine-moi un
mouton! ')
sqlalchemy.pool.QueuePool: DEBUG: Connection pyodbc.Connection object
at 0xa4490e0 being returned to pool
sqlalchemy.pool.QueuePool: DEBUG: Connection pyodbc.Connection object
at 0xa4490e0 checked out from pool
sqlalchemy.engine.base.Engine: INFO: SELECT [COLUMNS_1].
[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME],
[COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].
[ORDINAL_POSITION], [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH],
[COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE],
[COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
sqlalchemy.engine.base.Engine: INFO: (u'unicode_err_table', u'dbo')
sqlalchemy.engine.base.Engine: DEBUG: Col ('TABLE_SCHEMA',
'TABLE_NAME', 'COLUMN_NAME', 'IS_NULLABLE', 'DATA_TYPE',
'ORDINAL_POSITION', 'CHARACTER_MAXIMUM_LENGTH', 'NUMERIC_PRECISION',
'NUMERIC_SCALE', 'COLUMN_DEFAULT', 'COLLATION_NAME')
sqlalchemy.engine.base.Engine: DEBUG: Row (u'dbo',
u'unicode_err_table', u'sort', 'YES', u'int', 1, None, 10, 0, None,
None)
sqlalchemy.engine.base.Engine: INFO:
DROP TABLE unicode_err_table
sqlalchemy.engine.base.Engine: INFO: ()
sqlalchemy.engine.base.Engine: INFO: COMMIT
sqlalchemy.pool.QueuePool: DEBUG: Connection pyodbc.Connection object
at 0xa4490e0 being returned to pool
-  end captured logging  -

==
FAIL: test.sql.test_types.UnicodeTest.test_native_unicode
--
Traceback (most recent call last):
  File /opt/home/vo63573/satest/lib/python2.6/site-packages/nose/
case.py, line 197, in runTest
self.test(*self.arg)
  File /opt/home/vo63573/satest/src/sqlalchemy/./test/sql/
test_types.py, line 636, in test_native_unicode
testing.db.dialect.returns_unicode_strings)
AssertionError: name: mssql driver pyodbc
returns_unicode_strings=conditional

--
Ran 7 tests in 0.358s




On Sep 8, 12:59 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Hi Victor  -

 Since you're there, do you have any luck actually running unit tests ?   The 
 test in particular here is:

 ./sqla_nose.py  -v test.sql.test_types:UnicodeTest 
 --dburi=mssql+pyodbc://user:pass@dsn

 Also, on the Mac, iODBC is the standard ODBC product.  unixODBC can be built 
 though in the past I've found this to have problems - I'll see if I can look 
 into it again if I have time.

 I have no doubt that this works on linux so the path to my getting this patch 
 committed is to get everything installed on a linux VM, and getting the tests 
 to run as well as they did before at least on that platform.

 For OSX I'm not sure where that will lead - if we need to put a doc on the 
 site saying, you need to replace OSX's standard ODBC install, then that 
 will be that, but would need to check that all out first.

 Also sorry I missed that pyodbc ticket 192 was yours.    Disturbing that 
 there's no response from the maintainer ?

 On Sep 8, 2011, at 12:41 PM, Victor Olex wrote:

  I know of those issues with pyodbc package. Michael, please read my
  first response where I wrote how to build the unixODBC, FreeTDS and
  pyodbc stack. I gave this detail for a reason - i.e. so that you can
  replicate my build.

  By the way I did

[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
You're welcome. As for the lack of response from pyodbc, that is indeed
sloppy, as is the fact that the PyPI package does not work. Hats off to
you for always being responsive (afaik). I often wonder what keeps you
so motivated, but that's off topic.

On Sep 8, 4:07 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 thanks, the two test failures would be expected in this case, will see what 
 response I get on the FreeTDS list if any.

 On Sep 8, 2011, at 3:07 PM, Victor Olex wrote:

  Unfortunately I don't have access to a blank database, so I took a
  chance and ran your tests on a non-empty database. The tests are mostly
  good: 5/7 pass. You should know that I used current trunk and simply
  commented out the line which resets supports_unicode_binds, but that
  should be equivalent in effect to your patch. Without the patch all
  tests fail. Below is a fairly verbose result for you.

  I hope this helps,

  Victor

  ./sqla_nose.py -v test.sql.test_types:UnicodeTest \
    --dburi='mssql+pyodbc://:xxx@xxx:2431/X?driver=SQLServer&port=2431&TDS_Version=8.0' \
    --log-debug=sqlalchemy.orm.mapper --log-debug=sqlalchemy.pool \
    --log-debug=sqlalchemy.engine
  DEBUG:sqlalchemy.pool.QueuePool:Created new connection
  pyodbc.Connection object at 0xa3fdfa0
  INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
  INFO:sqlalchemy.engine.base.Engine:()
  DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
  DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
  INFO:sqlalchemy.engine.base.Engine:
             SELECT default_schema_name FROM
             sys.database_principals
             WHERE name = ?
             AND type = 'S'

  INFO:sqlalchemy.engine.base.Engine:(u'SPEED_IT',)
  DEBUG:sqlalchemy.engine.base.Engine:Col ('default_schema_name',)
  DEBUG:sqlalchemy.engine.base.Engine:Row (u'dbo', )
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 checked out from pool
  INFO:sqlalchemy.engine.base.Engine:SELECT [COLUMNS_1].[TABLE_SCHEMA],
  [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].
  [IS_NULLABLE], [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].
  [ORDINAL_POSITION], [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH],
  [COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE],
  [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME]
  FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
  WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
  INFO:sqlalchemy.engine.base.Engine:(u'unicode_table', u'dbo')
  DEBUG:sqlalchemy.engine.base.Engine:Col ('TABLE_SCHEMA', 'TABLE_NAME',
  'COLUMN_NAME', 'IS_NULLABLE', 'DATA_TYPE', 'ORDINAL_POSITION',
  'CHARACTER_MAXIMUM_LENGTH', 'NUMERIC_PRECISION', 'NUMERIC_SCALE',
  'COLUMN_DEFAULT', 'COLLATION_NAME')
  INFO:sqlalchemy.engine.base.Engine:
  CREATE TABLE unicode_table (
         id INTEGER NOT NULL IDENTITY(1,1),
         unicode_varchar NVARCHAR(250) NULL,
         unicode_text NTEXT NULL,
         PRIMARY KEY (id)
  )

  INFO:sqlalchemy.engine.base.Engine:()
  INFO:sqlalchemy.engine.base.Engine:COMMIT
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 being returned to pool
  test.sql.test_types.UnicodeTest.test_blank_strings ...
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 checked out from pool
  INFO:sqlalchemy.engine.base.Engine:INSERT INTO unicode_table
  (unicode_varchar) OUTPUT inserted.id VALUES (?)
  INFO:sqlalchemy.engine.base.Engine:(u'',)
  DEBUG:sqlalchemy.engine.base.Engine:Col ('id',)
  DEBUG:sqlalchemy.engine.base.Engine:Row (1, )
  INFO:sqlalchemy.engine.base.Engine:COMMIT
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 being returned to pool
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 checked out from pool
  INFO:sqlalchemy.engine.base.Engine:SELECT
  unicode_table.unicode_varchar
  FROM unicode_table
  INFO:sqlalchemy.engine.base.Engine:()
  DEBUG:sqlalchemy.engine.base.Engine:Col ('unicode_varchar',)
  DEBUG:sqlalchemy.engine.base.Engine:Row (u'', )
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 being returned to pool
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 checked out from pool
  INFO:sqlalchemy.engine.base.Engine:DELETE FROM unicode_table
  INFO:sqlalchemy.engine.base.Engine:()
  INFO:sqlalchemy.engine.base.Engine:COMMIT
  DEBUG:sqlalchemy.pool.QueuePool:Connection pyodbc.Connection object
  at 0xa3fdfa0 being returned to pool
  ok
  test.sql.test_types.UnicodeTest.test_ignoring_unicode_error ...
  DEBUG:sqlalchemy.pool.QueuePool:Created new connection
  pyodbc.Connection object at 0xa4490e0
  INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
  INFO:sqlalchemy.engine.base.Engine:()
  DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
  DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
  INFO:sqlalchemy.engine.base.Engine

[sqlalchemy] PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-07 Thread Victor Olex
Using SQLAlchemy 0.7.2 with pyodbc 2.1.9, FreeTDS 0.91, unixODBC 2.3.0
and SQL Server 2008, I find that supports_unicode_binds may be
incorrectly set to False in PyODBCConnector.initialize. As a result, a
unicode parameter gets encoded as str and, to make matters worse, the
value gets silently overwritten with an empty Unicode string (u'').

Consider a simple table (IDENTITY, NTEXT, VARCHAR(255)) with one
record:
ID, col1, col2
1, 'Łódź', 'abc'

We will update the existing value in col1 to 'Łódź!'.

>>> from sqlalchemy import Column, Sequence, create_engine
>>> from sqlalchemy.types import UnicodeText, Integer, VARCHAR
>>> from sqlalchemy.orm import sessionmaker
>>> from sqlalchemy.ext.declarative import declarative_base
>>>
>>> Base = declarative_base()
>>> metadata = Base.metadata
>>>
>>> class A(Base):
...     __tablename__ = 'A'
...     id = Column(u'ID', Integer, Sequence('A_PK'), primary_key=True)
...     col1 = Column(u'col1', UnicodeText())
...     col2 = Column(u'col2', VARCHAR(255))
...
>>> e = create_engine('mssql://user:pwd@sqlserverhost:2431/MYDB?driver=FreeTDS&TDS_Version=8.0', echo=True)
>>> Session = sessionmaker()
>>> s = Session(bind=e)
>>> lodz = u'\u0141\xf3d\u017a'
>>> oa = s.query(A).one()
2011-09-07 17:22:25,260 INFO sqlalchemy.engine.base.Engine SELECT
user_name() as user_name;
2011-09-07 17:22:25,261 INFO sqlalchemy.engine.base.Engine ()
2011-09-07 17:22:25,270 INFO sqlalchemy.engine.base.Engine
SELECT default_schema_name FROM
sys.database_principals
WHERE name = ?
AND type = 'S'

2011-09-07 17:22:25,271 INFO sqlalchemy.engine.base.Engine
(u'SPEED_IT',)
2011-09-07 17:22:25,291 INFO sqlalchemy.engine.base.Engine BEGIN
(implicit)
2011-09-07 17:22:25,292 INFO sqlalchemy.engine.base.Engine SELECT [A].
[ID]
AS [A_ID], [A].col1 AS [A_col1], [A].col2 AS [A_col2]
FROM [A]
2011-09-07 17:22:25,292 INFO sqlalchemy.engine.base.Engine ()
>>> oa.col1
u'\u0141\xf3d\u017a'
>>> oa.col2
'abc'
>>> oa.col1 = u'\u0141\xf3d\u017a!'
>>> s.commit()
2011-09-07 17:23:17,016 INFO sqlalchemy.engine.base.Engine UPDATE [A]
SET
col1=? WHERE [A].[ID] = ?
2011-09-07 17:23:17,016 INFO sqlalchemy.engine.base.Engine
('\xc5\x81\xc3\xb3d\xc5\xba!', 1)
2011-09-07 17:23:17,061 INFO sqlalchemy.engine.base.Engine COMMIT
>>> oa.col1
2011-09-07 17:23:24,226 INFO sqlalchemy.engine.base.Engine BEGIN
(implicit)
2011-09-07 17:23:24,227 INFO sqlalchemy.engine.base.Engine SELECT [A].
[ID]
AS [A_ID], [A].col1 AS [A_col1], [A].col2 AS [A_col2]
FROM [A]
WHERE [A].[ID] = ?
2011-09-07 17:23:24,227 INFO sqlalchemy.engine.base.Engine (1,)
u''

With a patched initialize method - the supports_unicode_binds line
(#110) removed - the parameter gets passed as Unicode and the database
updates correctly, as does the in-memory object. Different version
combinations of pyodbc, FreeTDS and SQL Server may well yield different
results, so unless a deterministic factor is found I would like to
propose adding a bind_unicode parameter to the dialect class and
connection URL.
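For anyone wanting to experiment without patching the source, a minimal sketch of the same idea as a dialect subclass. The class name is my own, this is not an official API, and registering it in place of the stock dialect is a separate exercise:

```python
from sqlalchemy.dialects.mssql.pyodbc import MSDialect_pyodbc

class MSDialectForcedUnicodeBinds(MSDialect_pyodbc):
    """Hypothetical workaround: keep Unicode bind parameters enabled
    even when initialize() would auto-detect them as unsupported."""

    def initialize(self, connection):
        super(MSDialectForcedUnicodeBinds, self).initialize(connection)
        # Re-assert the flag that the base initialize() may have set to False.
        self.supports_unicode_binds = True
```

This only flips the flag after the base class has done its detection; whether the driver stack actually handles Unicode binds still depends on the pyodbc/FreeTDS build.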

Regards and respect,

Victor Olex
http://linkedin.com/in/victorolex
http://twitter.com/agilevic

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: on padded character fields again

2010-09-15 Thread Victor Olex
You mean something like this:

import sqlalchemy.types as types

class CHAR(types.TypeDecorator):
    '''Strips padding from CHAR types.
    '''

    impl = types.CHAR

    def process_bind_param(self, value, dialect):
        return value

    def process_result_value(self, value, dialect):
        # Guard against NULLs, which arrive here as None.
        return value.rstrip() if value is not None else None
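And in case it helps, a self-contained round trip of that decorator against an in-memory SQLite database. The name StrippedCHAR and the demo table are mine, purely for illustration; SQLite does not pad CHAR itself, so the padded value is inserted explicitly to show the stripping on the way out:

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine
import sqlalchemy.types as types

class StrippedCHAR(types.TypeDecorator):
    """CHAR that strips trailing padding on the way out of the database."""
    impl = types.CHAR
    cache_ok = True  # required by SQLAlchemy 1.4+; harmless on older versions

    def process_bind_param(self, value, dialect):
        return value

    def process_result_value(self, value, dialect):
        return value.rstrip() if value is not None else None

metadata = MetaData()
demo = Table('demo', metadata,
             Column('id', Integer, primary_key=True),
             Column('code', StrippedCHAR(10)))

engine = create_engine('sqlite://')
metadata.create_all(engine)
with engine.connect() as conn:
    # Insert an explicitly padded value, as a padding backend would store it.
    conn.execute(demo.insert().values(id=1, code='AB        '))
    row = conn.execute(demo.select()).first()
    # row.code comes back with the padding stripped
```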

On Sep 14, 11:49 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 14, 2010, at 11:42 AM, Victor Olex wrote:



  We have discussed one aspect of this before and it was hugely helpful
  (http://groups.google.com/group/sqlalchemy/browse_thread/thread/
  965287c91b790b68/361e0a53d4100b5d?lnk=gst&q=padding#361e0a53d4100b5d)

  This time I wanted to ask not about the WHERE clause but mapped object
  contents, where field is of padded type such as CHAR. Currently
  SQLAlchemy populates such fields consistently with what a raw SQL
  query would return for the database engine. In Oracle it would be with
  padding. I would like to suggest however that this behavior be
  parametrized. The reason being that the same code operating on objects
  retrieved from a mapped database may behave differently depending on
  the underlying engine.

  For example a field defined as follows:

  description = Column(u'desciption', CHAR(length=100), nullable=False)

  would return padded values when run on Oracle but on SQLite it would
  be trimmed to the string length.

  This behavior led to having to duplicate a lot of unit tests (SQLite)
  into functional tests (Oracle) to avoid unpleasant surprises such as:

  myobj.description == 'some value'

  behaving differently in each environment.

  One of the most important features of the ORM's is abstracting away
  the physical database store. Unless I missed something obvious this
  could be a room for improvement.

  By the way, the mapping was reverse engineered from an existing
  database. In a forward-engineering scenario one would probably use the
  generic String type instead, which maps to VARCHAR, where the issue is
  non-existent.

 Well the first thing I'd note is that the CHAR type is not part of the ORM, 
 its the part of schema definition language.  The schema definition and SQL 
 expression languages attempt to strike a balance between backend-agnosticism 
 and literal DBAPI/database behavior.    

 The other thing is I'd ask is have you looked at TypeDecorator  
 (http://www.sqlalchemy.org/docs/core/types.html?highlight=typedecorato...), 
 is that adequate here or otherwise why not  ?   A real world ORM application 
 generally has a whole module dedicated to custom types that are tailored to 
 the application's needs.







[sqlalchemy] dealing with multiple databases when using declarative

2009-12-03 Thread Victor Olex
By database we refer to a separate instance (possibly on another
server) of an equivalent database, i.e. production and development
environments.

Using sqlalchemy.ext.declarative, what would be the correct pattern to
let one model connect to two equivalent databases?

Consider:

--- model.py ---
Base = declarative_base()
metadata = Base.metadata

class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True, nullable=False)
    description = Column(CHAR(length=100), nullable=False)

_Session = sessionmaker()

def make_session(uri):
    engine = create_engine(uri)
    metadata.bind = engine
    session = _Session(bind=engine)
    return session

--- program.py ---
import model

# copy some data between databases
srcdb = model.make_session(srcdb_uri)
destdb = model.make_session(destdb_uri)

items = srcdb.query(model.MyTable).all()

for i in items:
    destdb.merge(i)

destdb.commit()

---

We have two distinct sessions bound to two different databases, but
because metadata is a module-level variable it gets rebound to the
database of the last call.

Also I am somewhat unclear on the difference between binding metadata
to an engine and binding a session to an engine, especially when they
end up different. What is the relevance of MetaData binding?
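For reference, this is the variant I imagine without the module-level rebinding - each session carries its own engine and the MetaData stays unbound (make_session here is my own sketch, not something from the docs):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

_Session = sessionmaker()

def make_session(uri):
    # Each call gets its own engine; no module-level state is rebound,
    # so two sessions can point at two different databases safely.
    engine = create_engine(uri)
    return _Session(bind=engine)

srcdb = make_session('sqlite://')   # stand-ins for the two database URIs
destdb = make_session('sqlite://')
```

The open question remains whether metadata.bind is needed at all in this arrangement, or whether the session-level bind is sufficient.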
