Re: [sqlalchemy] Modeling single FK to multiple tables

2016-03-28 Thread Alex Hall
That would certainly work. :) Would that offer any benefits over
pyodbc, since I wouldn't be using the mapping (which is what was taking
up all the time I spent on SA)?

On 3/25/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
>
> On 03/25/2016 05:20 PM, Alex Hall wrote:
>> Hi all,
>> Since SA was proving to be difficult to get working, and I was
>> spending way more time just trying to get it working than I was
>> actually running queries and outputting the results, I thought I'd
>> give Pyodbc a shot. Within a couple days, the query was working
>> perfectly. I'll post it below, as I'd be curious how this could be
>> made easier using SA.
>
>
> like this:
>
> from sqlalchemy import create_engine
>
> engine = create_engine("mssql+pyodbc://dsn")
>
> result = engine.execute(itemsQuery)
>
>
>
>
>
>   I don't know that I'll use SA for this project
>> since it's working so well in Pyodbc, but I'm curious all the same.
>> So, no hurry on this, it's only for my own interest. Anyway, the query
>> I finally came up with is below. It returns multiple rows for items
>> that have attributes, but other than that it works great. A for loop
>> with a check on the current ID takes care of making multiple rows for
>> the same item into a single row (recall this is all going to a
>> spreadsheet).
>>
>>
>> itemsQuery = """
>>   select items.itm_id as itemID, items.itm_proddesc as itemTitle,
>> items.itm_num as itemNumber, items.itm_listprice1 as msrp,
>> items.itm_dftuom as itemUnitOfMeasure, items.itm_manufitem as
>> itemManufacturer, items.itm_vendornum as itemVendorNumber,
>>items.itm_weight as itemWeight, items.itm_width as itemWidth,
>> items.itm_length as itemLength, items.itm_height as itemHeight,
>>attachments.description as description,
>>imagePaths.imagePath1 as imagePath1, imagePaths.imagePath2 as
>> imagePath2, imagePaths.imagePath3 as imagePath3,
>>attributes.attributeName as attributeName, attributes.attributeValue
>> as attributeValue,
>>vendor.vendorName as vendorName
>>from (
>> select itm_id, itm_proddesc, itm_num, itm_vendornum,
>> itm_listprice1, itm_length, itm_width, itm_height, itm_weight,
>> itm_dftuom, itm_manufitem
>> from item
>> where itm_webflag <> 'Y' and itm_suspflag <> 'Y'
>>) items
>>left outer join (
>> select distinct attr_desc as attributeName, attr_value as
>> attributeValue, itm_id
>> from attributevalueassign
>> join attribute
>> on attribute.attr_id = attributevalueassign.attr_id
>> join attributevalue
>> on attributevalue.attr_value_id = attributevalueassign.attr_value_id
>> where attributevalueassign.itm_id = itm_id
>>) attributes
>> on attributes.itm_id = items.itm_id
>>left outer join (
>> select PVUS15 as vendorName, PVVNNO as vendorNumber, itm_id
>> from VENDR
>> join item on item.itm_id = itm_id
>>) vendor
>> on vendor.vendorNumber = items.itm_vendornum and vendor.itm_id =
>> items.itm_id
>>left outer join (
>>   select attach_text.att_text as description, itm_id
>>from assignment
>> join attachment on attachment.att_id = assignment.att_id
>> join attach_text on attach_text.att_id = assignment.att_id
>> where assignment.itm_id = itm_id
>>) attachments
>>on attachments.itm_id = items.itm_id
>>left outer join (
>> select attachment.att_path as imagePath1, attachment.att_path2 as
>> imagePath2, attachment.att_path3 as imagePath3, itm_id
>> from assignment
>> join attachment on attachment.att_id = assignment.att_id
>>) imagePaths
>>on imagePaths.itm_id = items.itm_id
>> """
>>
>>
>> On 3/21/16, Simon King <si...@simonking.org.uk> wrote:
>>> Can you extract your code into a single standalone script that
>>> demonstrates
>>> the problem? This should be possible even with automap; the script can
>>> start by creating just the tables that are involved in this problem
>>> (ideally in an in-memory sqlite db), then use automap to map classes to
>>> those tables.
>>>
>>> Simon
>>>
>>> On Mon, Mar 21, 2016 at 3:12 PM, Alex Hall <ah...@autodist.com> wrote:
>>>
>>>> Wow, thanks guys, especially for the sample code! I'm trying to use
>>>> the example (and fully understand it at the same time) but am running
>>>> into an error. This is the same error that made m

Re: [sqlalchemy] Modeling single FK to multiple tables

2016-03-25 Thread Alex Hall
Hi all,
Since SA was proving difficult to get working, and I was spending way
more time just trying to get it working than actually running queries
and outputting the results, I thought I'd give pyodbc a shot. Within a
couple of days, the query was working perfectly. I'll post it below, as
I'd be curious how this could be made easier using SA. I don't know
that I'll use SA for this project since it's working so well in pyodbc,
but I'm curious all the same.
So, no hurry on this, it's only for my own interest. Anyway, the query
I finally came up with is below. It returns multiple rows for items
that have attributes, but other than that it works great. A for loop
with a check on the current ID takes care of making multiple rows for
the same item into a single row (recall this is all going to a
spreadsheet).


itemsQuery = """
 select items.itm_id as itemID, items.itm_proddesc as itemTitle,
items.itm_num as itemNumber, items.itm_listprice1 as msrp,
items.itm_dftuom as itemUnitOfMeasure, items.itm_manufitem as
itemManufacturer, items.itm_vendornum as itemVendorNumber,
  items.itm_weight as itemWeight, items.itm_width as itemWidth,
items.itm_length as itemLength, items.itm_height as itemHeight,
  attachments.description as description,
  imagePaths.imagePath1 as imagePath1, imagePaths.imagePath2 as
imagePath2, imagePaths.imagePath3 as imagePath3,
  attributes.attributeName as attributeName, attributes.attributeValue
as attributeValue,
  vendor.vendorName as vendorName
  from (
   select itm_id, itm_proddesc, itm_num, itm_vendornum,
itm_listprice1, itm_length, itm_width, itm_height, itm_weight,
itm_dftuom, itm_manufitem
   from item
   where itm_webflag <> 'Y' and itm_suspflag <> 'Y'
  ) items
  left outer join (
   select distinct attr_desc as attributeName, attr_value as
attributeValue, itm_id
   from attributevalueassign
   join attribute
   on attribute.attr_id = attributevalueassign.attr_id
   join attributevalue
   on attributevalue.attr_value_id = attributevalueassign.attr_value_id
   where attributevalueassign.itm_id = itm_id
  ) attributes
   on attributes.itm_id = items.itm_id
  left outer join (
   select PVUS15 as vendorName, PVVNNO as vendorNumber, itm_id
   from VENDR
   join item on item.itm_id = itm_id
  ) vendor
on vendor.vendorNumber = items.itm_vendornum and vendor.itm_id = items.itm_id
  left outer join (
 select attach_text.att_text as description, itm_id
  from assignment
   join attachment on attachment.att_id = assignment.att_id
   join attach_text on attach_text.att_id = assignment.att_id
   where assignment.itm_id = itm_id
  ) attachments
  on attachments.itm_id = items.itm_id
  left outer join (
   select attachment.att_path as imagePath1, attachment.att_path2 as
imagePath2, attachment.att_path3 as imagePath3, itm_id
   from assignment
   join attachment on attachment.att_id = assignment.att_id
  ) imagePaths
  on imagePaths.itm_id = items.itm_id
"""


On 3/21/16, Simon King <si...@simonking.org.uk> wrote:
> Can you extract your code into a single standalone script that demonstrates
> the problem? This should be possible even with automap; the script can
> start by creating just the tables that are involved in this problem
> (ideally in an in-memory sqlite db), then use automap to map classes to
> those tables.
>
> Simon
>
> On Mon, Mar 21, 2016 at 3:12 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Wow, thanks guys, especially for the sample code! I'm trying to use
>> the example (and fully understand it at the same time) but am running
>> into an error. This is the same error that made me look for a way
>> other than this last week.
>>
>> sqlalchemy.exc.InvalidRequestError: when initializing mapper
>> Mapper|assignmentTable|assignment, expression 'item' failed to
>> locate an item (name 'item' is not defined). If this is a class name,
>> consider adding this relationship() to the
>>  class after both dependent classes
>> have been defined.
>>
>> This all starts from the line where my query begins:
>>
>> items = session.query(itemTable)\
>>
>> Again, I'm using automap. I put the class definitions in the same
>> place I put my vendor table definition last week, where it worked
>> perfectly. That's just after I set
>> base = automap_base()
>> but before I reflect anything. I can paste the full code if you want,
>> but it's pretty long.
>>
>> On 3/17/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>> >
>> >
>> > On 03/17/2016 03:11 PM, Alex Hall wrote:
>> >> Hello all,
>> >> It seems like I can't go a day without running into some kind of wall.
>> >> This one is a conceptual one regarding foreign keys. I have to somehow
>> >> get the same FK column in ta

Re: [sqlalchemy] Modeling single FK to multiple tables

2016-03-21 Thread Alex Hall
Wow, thanks guys, especially for the sample code! I'm trying to use
the example (and fully understand it at the same time) but am running
into an error. This is the same error that made me look for a way
other than this last week.

sqlalchemy.exc.InvalidRequestError: when initializing mapper
Mapper|assignmentTable|assignment, expression 'item' failed to
locate an item (name 'item' is not defined). If this is a class name,
consider adding this relationship() to the
 class after both dependent classes
have been defined.

This all starts from the line where my query begins:

items = session.query(itemTable)\

Again, I'm using automap. I put the class definitions in the same
place I put my vendor table definition last week, where it worked
perfectly. That's just after I set
base = automap_base()
but before I reflect anything. I can paste the full code if you want,
but it's pretty long.
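
A self-contained reproduction of the kind Simon asks for elsewhere in the thread
might look like the sketch below: an in-memory SQLite database standing in for
the real schema, with simplified, made-up column lists for item, attachment and
assignment (the real tables obviously have more columns):

from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")
engine.execute("create table item (itm_id integer primary key, itm_proddesc varchar(50))")
engine.execute("create table attachment (att_id integer primary key, att_path varchar(100))")
engine.execute(
    "create table assignment (itm_id integer references item(itm_id), "
    "att_id integer references attachment(att_id), primary key (itm_id, att_id))"
)

metadata = MetaData()
base = automap_base(metadata=metadata)

metadata.reflect(engine)
base.prepare()

itemTable = base.classes.item
assignmentTable = base.classes.assignment

session = Session(engine)
print(session.query(itemTable).all())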

On 3/17/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
>
> On 03/17/2016 03:11 PM, Alex Hall wrote:
>> Hello all,
>> It seems like I can't go a day without running into some kind of wall.
>> This one is a conceptual one regarding foreign keys. I have to somehow
>> get the same FK column in table A pointing to IDs in tables B and C.
>
> So a real foreign key constraint is not capable of this.  Repurposing a
> single column to occasionally point to one table or another is a famous
> anti-pattern I've spoken of often (warning, this is *extremely* old, but
> the general idea still holds):
>
> http://techspot.zzzeek.org/2007/05/29/polymorphic-associations-with-sqlalchemy/
>
>
> I have an updated version of all the various "polymorphic association"
> examples in SQLAlchemy itself at
> http://docs.sqlalchemy.org/en/rel_1_0/orm/examples.html#module-examples.generic_associations.
>
>   This includes the "single column pointing to multiple tables" hack, as
> well as three other versions of the same business object geometry which
> preserve relational integrity within the schema design.
>
>>
>> At one person's suggestion, I'm making classes for my tables, even
>> though I'm using automap. This is to let me stop doing a ton of joins,
>> making querying much easier... I hope! I'm defining all the foreign
>> keys between my tables manually. For instance:
>>
>> class item(base):
>>   __tablename__ = "item"
>>   itm_id = Column(Integer, primary_key=True)
>>   vendornum = Column(String, ForeignKey(VENDR.PVVNNO))
>>
>> class vendorTable(base):
>>   __tablename__ = "VENDR"
>>   PVVNNO = Column(String, primary_key=True)
>>
>> If I've understood correctly, I'll now be able to say
>> item.vendornum.vendor_full_name
>> to get the vendor's full name for any item.
>>
>> Here's the problem. Items have attachments, and attached text,
>> respectively held in attach and attach_text tables. Binding them to
>> items is a table called assignment. Assignment is pretty
>> straightforward, with an itm_id and an attachment id (att_id). The
>> trouble is that this att_id occurs in both attach and attach_text. I
>> can make att_id a foreign key to one table or the other, but I'm not
>> sure how to make it go to both tables.
>
> the "generic_fk" example illustrates a pattern for working with this.
>
> Getting this all to work with automap is another layer of complexity,
> you certainly want all of this part of it laid out before you reflect
> the rest of the database columns.
>
>
>>
>> class assignmentTable(base):
>>   __tablename__ = "assignment"
>>   itm_id = Column(Integer, ForeignKey(item.itm_id))
>>   #the following column has to point to attach_text.att_id AS WELL
>>att_id = Column(Integer, ForeignKey(attachment.att_id))
>>   seq_num = Column(Integer)
>>   asn_primary = Column(Integer, nullable=True)
>>
>> class attachmentTable(base):
>>   __tablename__ = "attachment"
>>   att_id = Column(Integer, primary_key=True)
>>
>> class attachmentTextTable(base):
>>   __tablename__ = "attach_text"
>>   att_id = Column(Integer, primary_key=True)
>>
>
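
If attach_text.att_id really does carry the same identifiers as attachment.att_id
(i.e. attach_text extends attachment rather than being an unrelated table), a
simpler alternative to the generic-FK pattern is to declare both foreign keys on
the one column. A sketch only, using the automap base from this thread; the
composite primary key on assignment is an assumption, not something stated here:

from sqlalchemy import Column, Integer, ForeignKey

class assignmentTable(base):
    __tablename__ = "assignment"
    itm_id = Column(Integer, ForeignKey("item.itm_id"), primary_key=True)
    # One column, two foreign key constraints; only reasonable because the
    # two target tables share the same att_id values.
    att_id = Column(Integer,
                    ForeignKey("attachment.att_id"),
                    ForeignKey("attach_text.att_id"),
                    primary_key=True)
    seq_num = Column(Integer)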



Re: [sqlalchemy] Re: Outer joins?

2016-03-19 Thread Alex Hall
That would be the simplest. Having something so inefficient just bugs me. :)

I'm using MSSQL, so limit() works. Would yield_per() help here, or is
that for something different? Even if it didn't help local memory, but
just kept the load on the DB server down, that would be good.
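
For what it's worth, yield_per() is mostly about client-side buffering: the ORM
fetches and builds rows in batches of the given size instead of loading the whole
result at once, so it helps the Python process more than the database server,
which still computes the full joined result (and it doesn't combine well with
joined collection eager loading). A sketch, using the itemTable class from
earlier in the thread:

for row in session.query(itemTable).yield_per(1000):
    print(row.itm_id)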

On 3/16/16, Christopher Lee  wrote:
> It sounds like you should just fire it up with the outer joins and watch
> memory on the box.  If it gets too high, or crashes entirely, then you can
> look into different approaches.  For example, you could keep the outer
> joins, but paginate your query so that it is only pulling a subset of the
> rows from your main table (but fully joining against the secondary
> tables).  Just one caveat... if you are using MySQL, then LIMIT and OFFSET
> are not your friends; you'll want to find a different pagination mechanism.
>
> On Wed, Mar 16, 2016 at 10:29 AM, Jonathan Vanasco 
> wrote:
>
>> We all inherit less-than-ideal situations.
>>
>> If this is running once a day and isn't impacting performance or other
>> work, I wouldn't really worry about the huge join matrix.  It sounds like
>> the current solution is "good enough".  In a few weeks or months you'll
>> be
>> better acquainted with SqlAlchemy and Sql in general and can revisit.
>>
>> In terms of your 15minute script: When you can use subqueries, filters
>> and
>> `load_only` for certain columns, your backend will generate a smaller
>> matrix and there will be a less data transferred "over the wire".
>>
>



[sqlalchemy] joins instead of filters remove attributes of results

2016-03-19 Thread Alex Hall
Hello all,
I'm running a different query than yesterday. Before, I had something like:

items = session.query(itemTable, attachmentTable, attachmentTextTable,
assignmentTable, attributeTable, attributeValueTable,
attributeValueAssignmentTable, vendorTable)\
.filter(attachmentTable.itm_id == itemTable.itm_id)\
#and so on, a bunch of .filter calls

Then, in the loop iterating over the results, I could do this:

for result in queryResults:
 itemID = result.item.itm_id

Now that I'm using a bunch of left outer joins, that code is suddenly
not working. I get an error when I say
result.item.itm_id
AttributeError: 'item' object has no attribute 'item'

The problem is that my query starts out with only one table passed to
session.query(), not all of them. Thus my result is of type 'item',
which is the table passed in. That would be okay, except that I need
to access values of other tables in the result, so even if I change
id = result.item.itm_id
to
id = result.itm_id
When I then say
description = result.attach_text.att_value
AttributeError: 'item' object has no attribute 'attach_text'

I know why it doesn't. What I don't know is how to get my query
results to hold all the information from all the tables, or how to
access it if they do already, but in a different way than before. My
new query is this:

items = session.query(itemTable)\
.outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)\
.outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.outerjoin(attachmentTextTable, assignmentTable.att_id ==
attachmentTextTable.att_id)\
.outerjoin(attributeValueAssignmentTable,
attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.outerjoin(attributeTable, attributeTable.attr_id ==
attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)\
.yield_per(1000)

I've also tried the same query, but with the first line changed to:
items = session.query(itemTable, attachmentTable, attachmentTextTable,
assignmentTable, attributeTable, attributeValueTable,
attributeValueAssignmentTable, vendorTable)\

The problem here is that, while result.item.* works as expected, other
tables don't. For instance, result.attach_text.att_value yields an
AttributeError: 'NoneType' object has no attribute 'att_value'.
Clearly, the other tables are in the result, but they're all None. I
expected something like that, and only added them back in to see if it
might help, but since I call query().outerjoin() I didn't think it
would work.

I should note that I renamed most of the tables by assigning variables
to base.classes.tableName, which is why I'm using "itemTable" here,
but in getting attributes of results I use just "item". The 'item'
table is called 'item', but I assigned it to a variable called
'itemTable', just for clarity in the script.

Is there a way to access the values of a query like this? At the very
least, is there a way I can print out all the objects the result
object has, so I can work out what to do? Thanks for any help!
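
One way to get at all of the joined entities is to name each class in
session.query() and spell the joins explicitly; each result row then carries one
attribute per class, and the joined entities come back as None for items with no
match. A sketch, reusing the class and column names from this thread (itemTable,
assignmentTable, attachmentTextTable, vendorTable):

rows = (
    session.query(itemTable, attachmentTextTable, vendorTable)
    .outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)
    .outerjoin(attachmentTextTable, attachmentTextTable.att_id == assignmentTable.att_id)
    .outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)
)

for row in rows:
    print(row.keys())   # the entity names in the row, e.g. ['item', 'attach_text', 'VENDR']
    itemID = row.item.itm_id
    description = row.attach_text.att_value if row.attach_text is not None else ""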



[sqlalchemy] Defining relationships (was: joins instead of filters remove attributes of results)

2016-03-18 Thread Alex Hall
If I define all the relationships as suggested, I could do
result.itm_id
or
result.attribute_value
and it would all work? Would I still need to specify, in my initial
query, things like
.filter(itemTable.itm_id == attachmentAssignmentTable.itm_id)\
.filter(attachmentTable.att_id == attachmentAssignmentTable.att_id)

to get all attachments assigned to a given item? I'll read more about
this and play with it, but I wanted to ask here as well in case
someone sees that the design of this database will cause problems with
relationships.
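
Roughly speaking, yes: once the foreign keys are declared on the classes, automap
generates the relationship() calls itself, so walking from an item to its
assignments no longer needs those filter conditions. A sketch; the collection
name assumes automap's default naming (related class name, lower-cased, plus
"_collection") and a related class named assignment:

for itm in session.query(itemTable):
    # automap's default one-to-many collection name
    for asn in itm.assignment_collection:
        print("%s %s" % (itm.itm_id, asn.att_id))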

On 3/16/16, Simon King <si...@simonking.org.uk> wrote:
> On Wed, Mar 16, 2016 at 1:07 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Hello all,
>> I'm running a different query than yesterday. Before, I had something
>> like:
>>
>> items = session.query(itemTable, attachmentTable, attachmentTextTable,
>> assignmentTable, attributeTable, attributeValueTable,
>> attributeValueAssignmentTable, vendorTable)\
>> .filter(attachmentTable.itm_id == itemTable.itm_id)\
>> #and so on, a bunch of .filter calls
>>
>> Then, in the loop iterating over the results, I could do this:
>>
>> for result in queryResults:
>>  itemID = result.item.itm_id
>>
>
> Because you wrote "session.query(itemTable, attachmentTable,
> attachmentTextTable)", the results that you get back from the query are
> like a tuple with 3 items corresponding to the 3 tables that you queried.
> result[0] would be the data from itemTable, result[1] is from
> attachmentTable, and result[2] is from attachmentTextTable. It also
> supports name-based access, which is why you were able to refer to
> "result.item" and "result.attach_text".
>
>
>>
>> Now that I'm using a bunch of outer left joins, that code is suddenly
>> not working. I get an error when I say
>> result.item.itm_id
>> AttributeError: 'item' object has no attribute 'item'
>>
>> The problem is that my query starts out with only one table passed to
>> session.query(), not all of them. Thus my result is of type 'item',
>> which is the table passed in. That would be okay, except that I need
>> to access values of other tables in the result, so even if I change
>> id = result.item.itm_id
>> to
>> id = result.itm_id
>> When I then say
>> description = result.attach_text.att_value
>> AttributeError: 'item' object has no attribute 'attach_text'
>>
>>
> First, it helps to be precise about your terminology. SQLAlchemy
> distinguishes between the object representing a table, and a class that you
> are mapping to that table. You've talked about using automapper in the
> past, so I assume you are passing a mapped class, not a table, to
> session.query().
>
> When you pass a single mapped class, the results you get back are instances
> of that class.
>
>
>
>> I know why it doesn't. What I don't know is how to get my query
>> results to hold all the information from all the tables, or how to
>> access it if they do already, but in a different way than before. My
>> new query is this:
>>
>> items = session.query(itemTable)\
>> .outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)\
>> .outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)\
>> .filter(assignmentTable.att_id == attachmentTable.att_id)\
>> .outerjoin(attachmentTextTable, assignmentTable.att_id ==
>> attachmentTextTable.att_id)\
>> .outerjoin(attributeValueAssignmentTable,
>> attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
>> .outerjoin(attributeTable, attributeTable.attr_id ==
>> attributeValueAssignmentTable.attr_id)\
>> .filter(attributeValueTable.attr_value_id ==
>> attributeValueAssignmentTable.attr_value_id)\
>> .yield_per(1000)
>>
>> I've also tried the same query, but with the first line changed to:
>> items = session.query(itemTable, attachmentTable, attachmentTextTable,
>> assignmentTable, attributeTable, attributeValueTable,
>> attributeValueAssignmentTable, vendorTable)\
>>
>> The problem here is that, while result.item.* works as expected, other
>> tables don't. For instance, result.attach_text.att_value yields an
>> AttributeError, 'None' type object has no attribute att_value.
>> Clearly, the other tables are in the result, but they're all None. I
>> expected something like that, and only added them back in to see if it
>> might help, but since I call query().outerjoin() I didn't think it
>> would work.
>>
>> I should note that I renamed most of the tables by assigning variables
>> to base.classes.tableName, which is why I'm using "itemTable"

Re: [sqlalchemy] Re: Outer joins?

2016-03-16 Thread Alex Hall

> On Mar 16, 2016, at 03:23, Jonathan Vanasco  wrote:
> 
> The database design you have is less than perfect.

I didn't make it; I came in long after it had been set up and now have to work 
with it. I can't re-do anything. They did it this way so that, for instance, a 
single attribute or attachment could apply to multiple items. If a thousand 
items are "large", and ten thousand have a "size" attribute, "size" and "large" 
can each be written once instead of thousands of times. The problem is that it 
makes this hard to query, at least for someone not very experienced in database 
work.

> The goal of having to reformat the relational DB into a CSV is less than 
> perfect.

The CSV is to give product details to our resellers in a format they can import 
automatically. As you say, flattening a relational database into what is 
essentially a single table isn't ideal, but it's what my work has always done 
so it's what I have to do. We have this running as a very convoluted SQL job I 
didn't write, with a ton of temporary tables, repeated code, and other fun 
things that make it hard to figure out what's going on. I know Python better 
than SQL, even if I have to learn SA and some DB concepts along the way, so 
this will be far easier to maintain once I get it working.

> 
> If I were you, I would think about 3 questions:
> 
> 1. How often do you have to run this?
Once a day or less. There's another script I'll eventually have to write that 
runs every fifteen minutes, but it has far fewer columns. Still, I'll need to be 
able to grab items that lack related information, and the related information 
for items that have it.

> 2. Does it take too long?
No, I don't think it will. Even with the query that fails to get all items, it 
only takes 30 seconds total. I'm okay with it taking a few minutes.
> 3. Does it use up too much DB/Python memory?
That I don't know.
> 
> If this isn't a resource issue, and a 1x a day task... don't worry about it.  
> Let the non-optimal code run.
> 
> If you need to run this every 5 minutes, then I'd start to worry.
> 



Re: [sqlalchemy] Re: Outer joins?

2016-03-15 Thread Alex Hall
Thanks guys. I'm using automap, but I'm not completely sure how much
that gives me for free. Yes, these tables are big, and the resulting
set would be worryingly large (potentially 5*20, and that's without
the attributes and attachments, plus their assignment and values
tables). I've switched to left outer joins, hoping that will keep
things smaller.

My original query is below. This works fine, but only gets items that
have vendor, attributes, and attachments. Items may have some or none
of these, as I was just informed today. As you can probably see,
item.itm_id ties everything together. It's used as the key for
assignmentTable, which holds all the attachments associated with the
item and uses att_id to index into attachmentTable and
attachmentTextTable. A similar relationship exists for attributes.

old_items = session.query(itemTable, attachmentTable,
attachmentTextTable, assignmentTable, attributeTable,
attributeValueTable, attributeValueAssignmentTable, vendorTable)\
.filter(vendorTable.PVVNNO == itemTable.itm_vendornum)\
.filter(assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.filter(assignmentTable.att_id == attachmentTextTable.att_id)\
.filter(attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.filter(attributeTable.attr_id == attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)

My next thought was to break this down into multiple queries:

allItems = session.query(items)\
.filter(items.itm_webflag != 'N', items.itm_suspflag != 'Y')

itemVendors = allItems.query(vendorTable).filter(vendorTable.PVVNNO ==
itemTable.itm_vendornum)

attachments = allItems.query(assignmentTable, attachmentTable,
attachmentTextTable)\
.filter(assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.filter(assignmentTable.att_id == attachmentTextTable.att_id)\

attributes = allItems.query(attributeTable, attributeValueTable,
attributeValueAssignmentTable)\
.filter(attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.filter(attributeTable.attr_id == attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)

The problem was, I couldn't work out how to put them together. Given
some item ID, how would I access that item's attributes or attachments
without making tons of queries back to the database?
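
One way to stitch separate queries together without a round trip per item is to
pull the related rows for a whole batch of item IDs in one query and index them
in a dict. A sketch, reusing the class names above; in practice the ID list would
be chunked rather than passed as one enormous IN clause:

from collections import defaultdict

item_ids = [i.itm_id for i in allItems]

attachments_by_item = defaultdict(list)
rows = (
    session.query(assignmentTable.itm_id, attachmentTextTable.att_text)
    .filter(assignmentTable.itm_id.in_(item_ids))
    .filter(attachmentTextTable.att_id == assignmentTable.att_id)
)
for itm_id, att_text in rows:
    attachments_by_item[itm_id].append(att_text)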

Currently, I'm using this query, which I haven't yet even tested:

items = session.query(itemTable)\
.outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)\
.outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.outerjoin(attachmentTextTable, assignmentTable.att_id ==
attachmentTextTable.att_id)\
.outerjoin(attributeValueAssignmentTable,
attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.outerjoin(attributeTable, attributeTable.attr_id ==
attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)


On 3/15/16, Christopher Lee  wrote:
> Note that if your items have a lot of attributes and attachments, an
> outer-join will return a multiplicatively-large result set.  That will get
> boiled down into a sane number of objects by the SqlAlchemy ORM, but your
> performance might be ugly in terms of I/O to your database, or the
> processing time it takes to allocate the entire result set.  If the related
> tables are small, then querying all the data in a single query can be a lot
> faster.
>
> Anyway, it would help to see some code and know if you are using just the
> Core, or if you are using the ORM and have relationships defined.  You can
> pretty easily force an outer join on a relationship by setting the eager
> loading argument to "joined".  If you are using queries directly, then
> Jonathan's suggestions above should get you where you need to go.
>
>
>
> On Tue, Mar 15, 2016 at 10:17 AM, Jonathan Vanasco 
> wrote:
>
>> The ORM has an `outerjoin` method on queries:
>>
>> http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.outerjoin
>>
>> You can also pass "isouter=True" to `join`
>>
>> http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.join
>>
>> The core supports an outerjoin in both variations as well:
>>
>> http://docs.sqlalchemy.org/en/latest/core/selectable.html?highlight=outer#sqlalchemy.sql.expression.Join.join
>>
>> http://docs.sqlalchemy.org/en/latest/core/selectable.html?highlight=outer#sqlalchemy.sql.expression.Join.outerjoin
>>

[sqlalchemy] Outer joins?

2016-03-15 Thread Alex Hall
Hi all,
I need to pull data from a bunch of tables, and I *think* outer joins
are the way to do it. However, I can't find much on SA's support for
outer joins.

What I'm trying to do is pull all items from the Items table, as well
as associated attachments and attributes if an item is tied to either
one. An item may or may not have attributes (stored in another table),
and may or may not have attachments (stored in yet another table).
Getting items that have both is what I already have working, but no
one told me that items can lack attachments and attributes until
today. I'm trying to work out how to do this, especially given that an
item could have attributes but no attachments, or attachments but no
attributes.

This is why an outer join seems to make the most sense. That way, I
get all items, with their attributes and attachments in place. If the
item lacks either or both, though, those values will simply be null. I
can't use query.filter, because filtering will exclude items without
these extra bits of data. In fact, I use filtering right now, and I
get 24,000 results; I should get over 65,000. I think I want something
like:

select * from items
 left outer join attributes
 on attribute.item_id = items.item_id
 left outer join attachments
 on attachment.item_id = items.item_id
 where items.flag <> 'Y'

If I'm thinking about this right, that query will do the job. I'll get
items no matter what, so long as the item's flag is not 'Y', and if
the item has more data associated with it, that will come along as
well. Multiple rows per item (such as four rows for item 1 if item1
has 2 attachments and 2 attributes) aren't a problem. The loop I have
to save all this to a CSV file already handles repeated IDs and puts
the data where it needs to go. I hope I've explained this well enough.
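
The ORM spelling of that outer join is roughly the sketch below, treating items,
attributes and attachments as mapped classes with the placeholder column names
used in the SQL above:

query = (
    session.query(items, attributes, attachments)
    .outerjoin(attributes, attributes.item_id == items.item_id)
    .outerjoin(attachments, attachments.item_id == items.item_id)
    .filter(items.flag != 'Y')
)
for item, attribute, attachment in query:
    # attribute and attachment come back as None for items that have neither
    print("%s %r %r" % (item.item_id, attribute, attachment))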



Re: [sqlalchemy] Re: Adding 'where' to query

2016-03-14 Thread Alex Hall
Thanks for the clarification. I'm suddenly getting no results at all
when I add this filter, but at least now I know I'm doing the syntax
right. Never a dull moment. :)

On 3/14/16, Jonathan Vanasco  wrote:
>
>>
>> .filter(t1.c1=='hello', and_(t3.c1=='world'))
>>
>
> The and_ is wrong in this context.  Everything in `filter` is joined by
> "and" by default.  You just want:
>
> .filter(t1.c1=='hello', t3.c1=='world')
>
> `and_` is usually used in a nested condition, often under an `or_`.
>



[sqlalchemy] Re: Adding 'where' to query

2016-03-14 Thread Alex Hall
I think I got it. I've been using .filter() for only joins thus far,
so somehow had it in my head that it was only for joining. Of course,
.filter(t1.c1=='hello')
will work. I believe I'm using and_ correctly if I say:
.filter(t1.c1=='hello', and_(t3.c1=='world'))
I may have that and_ part wrong, but filter is the obvious solution to
most of my question.

On 3/14/16, Alex Hall <ah...@autodist.com> wrote:
> Hi all,
> I had a link that was a great intro to querying, but of course, I
> can't find it now. I need to add a couple conditions to my query. In
> SQL, it might look like this:
>
> select *
>  from t1 join t2 on t1.c1==t2.c1
> join t3 on t3.c1==t1.c1
> where t1.c1 = 'hello' and t3.c3 = 'world'
>
> The joins I have, through query.filter(). It's the 'where' at the end
> that I'm not certain about. I know I've read how to do this, but I
> can't find that page anywhere. I also don't want to make it more
> complex than it needs to be. For instance, using "select" and putting
> that back into "query" when I don't need to. I've tried adding this
> after the last call to filter():
> .where(item.itm_webflag != 'N', and_(item.itm_suspflag != 'Y'))\
> But of course, SA says that query has no attribute 'where'.
>



[sqlalchemy] Adding 'where' to query

2016-03-14 Thread Alex Hall
Hi all,
I had a link that was a great intro to querying, but of course, I
can't find it now. I need to add a couple conditions to my query. In
SQL, it might look like this:

select *
 from t1 join t2 on t1.c1==t2.c1
join t3 on t3.c1==t1.c1
where t1.c1 = 'hello' and t3.c3 = 'world'

The joins I have, through query.filter(). It's the 'where' at the end
that I'm not certain about. I know I've read how to do this, but I
can't find that page anywhere. I also don't want to make it more
complex than it needs to be. For instance, using "select" and putting
that back into "query" when I don't need to. I've tried adding this
after the last call to filter():
.where(item.itm_webflag != 'N', and_(item.itm_suspflag != 'Y'))\
But of course, SA says that query has no attribute 'where'.
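
On the ORM Query object (at least in the SQLAlchemy versions current when this
thread was written), the WHERE clause is spelled .filter(), and multiple
arguments to a single .filter() call are ANDed together, so the SQL above maps to
roughly the following sketch (t1, t2 and t3 standing in for mapped classes):

results = (
    session.query(t1)
    .join(t2, t1.c1 == t2.c1)
    .join(t3, t3.c1 == t1.c1)
    .filter(t1.c1 == 'hello', t3.c3 == 'world')
)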



Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-14 Thread Alex Hall
That worked! Thank you so much for your patience. Part of it was the
code, and part of it turned out to be that I was still using
vendorTable = base.classes.VENDR

It didn't occur to me that my VENDR class had taken over that part, so
base.classes would no longer contain VENDR. When I saw your asserts,
it struck me that the last piece of the puzzle might be to set
vendorTable = VENDR, and that seems to be doing the job beautifully.
Thanks again!

On 3/14/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> metadata.reflect(..., extend_existing=True), here's a complete example
>
> from sqlalchemy import create_engine
>
> e = create_engine("mssql+pyodbc://scott:tiger@ms_2008", echo=True)
>
> with e.begin() as conn:
>
>  conn.execute("""
>  if not exists (select * from sysobjects where name='sometable'
> and xtype='U')
>  create table sometable (
>  id integer,
>  data varchar(20),
>  primary key (id)
>  )
>  """)
>
>  conn.execute("""
>  if not exists (select * from sysobjects where
> name='someothertable' and xtype='U')
>  create table someothertable (
>  id integer,
>  data varchar(20),
>  primary key (id)
>  )
>  """)
>
>  conn.execute("""
>  if not exists (select * from sysobjects where name='VENDR' and
> xtype='U')
>  create table [VENDR] (
>  [PVVNNO] integer,
>  [DATA] varchar(20)
>  )
>  """)
>
> from sqlalchemy.ext.automap import automap_base
> from sqlalchemy import MetaData, Column, String
> from sqlalchemy.orm import Session
>
> metadata = MetaData()
>
> desiredTables = ["sometable", "someothertable", "VENDR"]
> base = automap_base(metadata=metadata)
>
>
> class VENDR(base):
>  __tablename__ = "VENDR"
>  PVVNNO = Column(String, primary_key=True)
>
> metadata.reflect(e, only=desiredTables, extend_existing=True)
> assert 'VENDR' in metadata.tables
>
> base.prepare()
>
> assert VENDR.DATA
>
> sess = Session(e)
> print sess.query(VENDR).all()
>
>
>
>
> On 03/14/2016 10:21 AM, Alex Hall wrote:
>> I hate to say it, but... AttributeError: VENDR. I've moved different
>> lines all around, above and below the class definition, but nothing
>> I've tried works. The only change was when I put my declaration of
>> base below the class, and Python naturally said it didn't know what my
>> table class was inheriting from. I don't know why this is being such a
>> problem.
>>
>> On 3/14/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> oh.  try it like this:
>>>
>>> class VENDR(base):
>>>  __tablename__ = "VENDR"
>>>  PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>>>
>>>  __table_args__ = {"extend_existing": True}
>>>
>>> that tells reflection to add new data to this Table object even though
>>> it already exists.
>>>
>>>
>>> On 03/14/2016 09:24 AM, Alex Hall wrote:
>>>> Thanks for that. Somehow, I'm getting the same error as before--the
>>>> VENDR table isn't being reflected. Here's the entire snippet, from
>>>> engine to trying to get the table.
>>>>
>>>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>>>> %(username, password, dsn))
>>>> session = Session(engine)
>>>> metadata = sqlalchemy.MetaData()
>>>> desiredTables = ["item", "assignment", "attachment", "attach_text",
>>>> "attribute", "attributevalue", "VENDR", "attributevalueassign"]
>>>> base = automap_base(metadata=metadata)
>>>> #pause here to make a table, since VENDR lacks a PK
>>>> class VENDR(base):
>>>>__tablename__ = "VENDR"
>>>>PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>>>> #done. Anyway...
>>>> metadata.reflect(engine, only=desiredTables)
>>>> base.prepare()
>>>>
>>>> itemTable = base.classes.item
>>>> assignmentTable = base.classes.assignment
>>>> attachmentTable = base.classes.attachment
>>>> attachmentTextTable = base.classes.attach_text
>>>> attributeTable = base.classes.attribute
>>>> attributeValueTable = base.classes.attributevalue

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-14 Thread Alex Hall
I hate to say it, but... AttributeError: VENDR. I've moved different
lines all around, above and below the class definition, but nothing
I've tried works. The only change was when I put my declaration of
base below the class, and Python naturally said it didn't know what my
table class was inheriting from. I don't know why this is being such a
problem.

On 3/14/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> oh.  try it like this:
>
> class VENDR(base):
> __tablename__ = "VENDR"
> PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>
> __table_args__ = {"extend_existing": True}
>
> that tells reflection to add new data to this Table object even though
> it already exists.
>
>
> On 03/14/2016 09:24 AM, Alex Hall wrote:
>> Thanks for that. Somehow, I'm getting the same error as before--the
>> VENDR table isn't being reflected. Here's the entire snippet, from
>> engine to trying to get the table.
>>
>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>> %(username, password, dsn))
>> session = Session(engine)
>> metadata = sqlalchemy.MetaData()
>> desiredTables = ["item", "assignment", "attachment", "attach_text",
>> "attribute", "attributevalue", "VENDR", "attributevalueassign"]
>> base = automap_base(metadata=metadata)
>> #pause here to make a table, since VENDR lacks a PK
>> class VENDR(base):
>>   __tablename__ = "VENDR"
>>   PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
>> #done. Anyway...
>> metadata.reflect(engine, only=desiredTables)
>> base.prepare()
>>
>> itemTable = base.classes.item
>> assignmentTable = base.classes.assignment
>> attachmentTable = base.classes.attachment
>> attachmentTextTable = base.classes.attach_text
>> attributeTable = base.classes.attribute
>> attributeValueTable = base.classes.attributevalue
>> attributeValueAssignmentTable = base.classes.attributevalueassign
>> vendorTable = base.classes.VENDR #AttributeError: VENDR
>>
>> I still don't quite see how base, metadata, and session all interact
>> to do what SA does, or I'd have a much easier time troubleshooting
>> this. I'm sure I just have something out of order, or some other
>> simple mistake.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> like this:
>>>
>>> class VENDR(MyAutomapBase):
>>>   __tablename__ = 'VENDR'
>>>
>>>   id = Column(Integer, primary_key=True)
>>>
>>> Above, the 'id' column name should match the column in the table that
>>> you'd like to consider as the primary key (and so should the type) - the
>>> "id" / "Integer" combination above is just an example.
>>>
>>> Then do the automap as you've done.   At the end, if it worked,
>>> Base.classes.VENDR should be the same class as the VENDR class above.
>>>
>>>
>>> On 03/11/2016 05:09 PM, Alex Hall wrote:
>>>> Sorry, do you mean the base subclass, or a new table class? In either
>>>> case, I'm not sure I see how this will fit into my automapping code. I
>>>> know this is all fairly basic, I just can't quite picture what goes
>>>> where and what inherits from/gets passed to what to make it automap
>>>> this VENDR table. If I could, I'd just add a PK column to the table
>>>> itself. Sadly, I can't change that kind of thing, only query it.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>> just make the class and include the PK column, then automap.  the rest
>>>>> of the columns should be filled in.
>>>>>
>>>>>
>>>>> On 03/11/2016 04:14 PM, Alex Hall wrote:
>>>>>> Ah, you're right. Every other table I've used in this database has
>>>>>> had
>>>>>> a key, and I didn't even notice that this VENDR table lacks one. That
>>>>>> explains the mystery! Thanks.
>>>>>>
>>>>>> Now to map this table. I've read the section of the docs on doing
>>>>>> this, and I get that I subclass base, set __table__ to be my VENDR
>>>>>> table, then set the key in my subclass. My question is how I access
>>>>>> the table, given that I can't automap it first. That is, if I can't
>>>>>> map the table because it has no PK, to what do I set __table__ in the
>>>>>> subclass that will let m

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-14 Thread Alex Hall
Thanks for that. Somehow, I'm getting the same error as before--the
VENDR table isn't being reflected. Here's the entire snippet, from
engine to trying to get the table.

engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
%(username, password, dsn))
session = Session(engine)
metadata = sqlalchemy.MetaData()
desiredTables = ["item", "assignment", "attachment", "attach_text",
"attribute", "attributevalue", "VENDR", "attributevalueassign"]
base = automap_base(metadata=metadata)
#pause here to make a table, since VENDR lacks a PK
class VENDR(base):
 __tablename__ = "VENDR"
 PVVNNO = sqlalchemy.Column(sqlalchemy.String, primary_key=True)
#done. Anyway...
metadata.reflect(engine, only=desiredTables)
base.prepare()

itemTable = base.classes.item
assignmentTable = base.classes.assignment
attachmentTable = base.classes.attachment
attachmentTextTable = base.classes.attach_text
attributeTable = base.classes.attribute
attributeValueTable = base.classes.attributevalue
attributeValueAssignmentTable = base.classes.attributevalueassign
vendorTable = base.classes.VENDR #AttributeError: VENDR

I still don't quite see how base, metadata, and session all interact
to do what SA does, or I'd have a much easier time troubleshooting
this. I'm sure I just have something out of order, or some other
simple mistake.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> like this:
>
> class VENDR(MyAutomapBase):
>  __tablename__ = 'VENDR'
>
>  id = Column(Integer, primary_key=True)
>
> Above, the 'id' column name should match the column in the table that
> you'd like to consider as the primary key (and so should the type) - the
> "id" / "Integer" combination above is just an example.
>
> Then do the automap as you've done.   At the end, if it worked,
> Base.classes.VENDR should be the same class as the VENDR class above.
>
>
> On 03/11/2016 05:09 PM, Alex Hall wrote:
>> Sorry, do you mean the base subclass, or a new table class? In either
>> case, I'm not sure I see how this will fit into my automapping code. I
>> know this is all fairly basic, I just can't quite picture what goes
>> where and what inherits from/gets passed to what to make it automap
>> this VENDR table. If I could, I'd just add a PK column to the table
>> itself. Sadly, I can't change that kind of thing, only query it.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> just make the class and include the PK column, then automap.  the rest
>>> of the columns should be filled in.
>>>
>>>
>>> On 03/11/2016 04:14 PM, Alex Hall wrote:
>>>> Ah, you're right. Every other table I've used in this database has had
>>>> a key, and I didn't even notice that this VENDR table lacks one. That
>>>> explains the mystery! Thanks.
>>>>
>>>> Now to map this table. I've read the section of the docs on doing
>>>> this, and I get that I subclass base, set __table__ to be my VENDR
>>>> table, then set the key in my subclass. My question is how I access
>>>> the table, given that I can't automap it first. That is, if I can't
>>>> map the table because it has no PK, to what do I set __table__ in the
>>>> subclass that will let me map the table?
>>>>
>>>> One post I found suggested something like this:
>>>>
>>>> vendorTable = Table("VENDR", metadata, column("PVVNNO",
>>>> primary_key=True))
>>>>
>>>> I'm guessing I'd have to add the column definitions for the other
>>>> columns if I did that. I'm further guessing that this replaces the
>>>> docs' method of subclassing, since the PK is now set. However, I don't
>>>> know if this would still work with automapping.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>> ah.  does VENDR have a primary key?   it won't be mapped if not.
>>>>>
>>>>> what's in base.classes.keys() ?   base.classes['VENDR'] ?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 03/11/2016 12:47 PM, Alex Hall wrote:
>>>>>> VENDR is right there, in base.classes and metadata.tables. Yet,
>>>>>> vendorTable = base.classes.VENDR
>>>>>> raises an AttributeError. Odd! There's nothing cap-sensitive about
>>>>>> __hasattr__ that I'm forgetting, is there? Or, could I somehow alias
>>>>>> the name before I try to access it, if t

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
Sorry, do you mean the base subclass, or a new table class? In either
case, I'm not sure I see how this will fit into my automapping code. I
know this is all fairly basic, I just can't quite picture what goes
where and what inherits from/gets passed to what to make it automap
this VENDR table. If I could, I'd just add a PK column to the table
itself. Sadly, I can't change that kind of thing, only query it.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> just make the class and include the PK column, then automap.  the rest
> of the columns should be filled in.
>
>
> On 03/11/2016 04:14 PM, Alex Hall wrote:
>> Ah, you're right. Every other table I've used in this database has had
>> a key, and I didn't even notice that this VENDR table lacks one. That
>> explains the mystery! Thanks.
>>
>> Now to map this table. I've read the section of the docs on doing
>> this, and I get that I subclass base, set __table__ to be my VENDR
>> table, then set the key in my subclass. My question is how I access
>> the table, given that I can't automap it first. That is, if I can't
>> map the table because it has no PK, to what do I set __table__ in the
>> subclass that will let me map the table?
>>
>> One post I found suggested something like this:
>>
>> vendorTable = Table("VENDR", metadata, column("PVVNNO",
>> primary_key=True))
>>
>> I'm guessing I'd have to add the column definitions for the other
>> columns if I did that. I'm further guessing that this replaces the
>> docs' method of subclassing, since the PK is now set. However, I don't
>> know if this would still work with automapping.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> ah.  does VENDR have a primary key?   it won't be mapped if not.
>>>
>>> what's in base.classes.keys() ?   base.classes['VENDR'] ?
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 03/11/2016 12:47 PM, Alex Hall wrote:
>>>> VENDR is right there, in base.classes and metadata.tables. Yet,
>>>> vendorTable = base.classes.VENDR
>>>> raises an AttributeError. Odd! There's nothing cap-sensitive about
>>>> __hasattr__ that I'm forgetting, is there? Or, could I somehow alias
>>>> the name before I try to access it, if that would help at all? This is
>>>> the only table in the CMS to have a name in all caps, but I need to
>>>> access it to look up manufacturer details for items.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>>
>>>>> can you look in metadata.tables to see what it actually reflected ?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 03/11/2016 12:09 PM, Alex Hall wrote:
>>>>>> That's weird: the name I see is exactly what I've been using,
>>>>>> "VENDR".
>>>>>> All caps and everything. I tried using lowercase, just to see what it
>>>>>> would do, but it failed.
>>>>>>
>>>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 03/11/2016 09:39 AM, Alex Hall wrote:
>>>>>>>> Hello list,
>>>>>>>> Finally, a pure SA question from me. I'm using Automap and the
>>>>>>>> "only"
>>>>>>>> keyword to automap a subset of the tables in our CMS database. This
>>>>>>>> has worked perfectly thus far. Now, though, it's failing on a
>>>>>>>> specific
>>>>>>>> table, and the only difference I can see is that this table's name
>>>>>>>> is
>>>>>>>> in all caps, whereas the rest are all lowercase. Capitalization
>>>>>>>> shouldn't matter, right?
>>>>>>>
>>>>>>> it does, as ALLCAPS is case sensitive and indicates quoting will be
>>>>>>> used.   How to handle this depends on the exact name that's in the
>>>>>>> database and if it truly does not match case-insensitively.
>>>>>>>
>>>>>>> Examine the output of:
>>>>>>>
>>>>>>> inspect(engine).get_table_names()
>>>>>>>
>>>>>>> find your table, and that's the name you should use.
>>>>>>>
>>>>

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
Ah, you're right. Every other table I've used in this database has had
a key, and I didn't even notice that this VENDR table lacks one. That
explains the mystery! Thanks.

Now to map this table. I've read the section of the docs on doing
this, and I get that I subclass base, set __table__ to be my VENDR
table, then set the key in my subclass. My question is how I access
the table, given that I can't automap it first. That is, if I can't
map the table because it has no PK, to what do I set __table__ in the
subclass that will let me map the table?

One post I found suggested something like this:

vendorTable = Table("VENDR", metadata, column("PVVNNO", primary_key=True))

I'm guessing I'd have to add the column definitions for the other
columns if I did that. I'm further guessing that this replaces the
docs' method of subclassing, since the PK is now set. However, I don't
know if this would still work with automapping.
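
Pre-declaring the Table with just the missing primary key does still work with
automap, provided the reflect call uses extend_existing=True so reflection adds
the remaining columns to the declared Table instead of skipping it (the same
extend_existing piece worked out elsewhere in this thread). A sketch; engine and
desiredTables are the ones from the surrounding script:

from sqlalchemy import Table, Column, String, MetaData
from sqlalchemy.ext.automap import automap_base

metadata = MetaData()
base = automap_base(metadata=metadata)

# Declare only the column the database itself is missing: the primary key.
Table("VENDR", metadata, Column("PVVNNO", String, primary_key=True))

metadata.reflect(engine, only=desiredTables, extend_existing=True)
base.prepare()

vendorTable = base.classes.VENDR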

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> ah.  does VENDR have a primary key?   it won't be mapped if not.
>
> what's in base.classes.keys() ?   base.classes['VENDR'] ?
>
>
>
>
>
>
> On 03/11/2016 12:47 PM, Alex Hall wrote:
>> VENDR is right there, in base.classes and metadata.tables. Yet,
>> vendorTable = base.classes.VENDR
>> raises an AttributeError. Odd! There's nothing cap-sensitive about
>> __hasattr__ that I'm forgetting, is there? Or, could I somehow alias
>> the name before I try to access it, if that would help at all? This is
>> the only table in the CMS to have a name in all caps, but I need to
>> access it to look up manufacturer details for items.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>
>>> can you look in metadata.tables to see what it actually reflected ?
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 03/11/2016 12:09 PM, Alex Hall wrote:
>>>> That's weird: the name I see is exactly what I've been using, "VENDR".
>>>> All caps and everything. I tried using lowercase, just to see what it
>>>> would do, but it failed.
>>>>
>>>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>>>
>>>>>
>>>>> On 03/11/2016 09:39 AM, Alex Hall wrote:
>>>>>> Hello list,
>>>>>> Finally, a pure SA question from me. I'm using Automap and the "only"
>>>>>> keyword to automap a subset of the tables in our CMS database. This
>>>>>> has worked perfectly thus far. Now, though, it's failing on a
>>>>>> specific
>>>>>> table, and the only difference I can see is that this table's name is
>>>>>> in all caps, whereas the rest are all lowercase. Capitalization
>>>>>> shouldn't matter, right?
>>>>>
>>>>> it does, as ALLCAPS is case sensitive and indicates quoting will be
>>>>> used.   How to handle this depends on the exact name that's in the
>>>>> database and if it truly does not match case-insensitively.
>>>>>
>>>>> Examine the output of:
>>>>>
>>>>> inspect(engine).get_table_names()
>>>>>
>>>>> find your table, and that's the name you should use.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Stranger still, the actual reflection doesn't
>>>>>> error out. Later, where I try to assign base.classes.MYTABLE to a
>>>>>> variable, is where I get an AttributeError. Here's my code:
>>>>>>
>>>>>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>>>>>> %(username, password, dsn))
>>>>>> base = automap_base()
>>>>>> session = Session(engine)
>>>>>> metadata = sqlalchemy.MetaData()
>>>>>> desiredTables = ["table", "othertable", "VENDR"]
>>>>>> metadata.reflect(engine, only=desiredTables) #works fine
>>>>>>
>>>>>> table = base.classes.table #fine
>>>>>> table2 = base.classes.othertable #fine
>>>>>> vendorTable = base.classes.VENDR #AttributeError
>>>>>>
>>>>>> I've added and removed tables as I adjust this script, and all of
>>>>>> them
>>>>>> work perfectly. This VENDR table is the first one in two days to
>>>>>> cause
>>>>>> problems. If I iterate over all the classes in base.classes and print
>>>&

Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
VENDR is right there, in base.classes and metadata.tables. Yet,
vendorTable = base.classes.VENDR
raises an AttributeError. Odd! There's nothing cap-sensitive about
__hasattr__ that I'm forgetting, is there? Or, could I somehow alias
the name before I try to access it, if that would help at all? This is
the only table in the CMS to have a name in all caps, but I need to
access it to look up manufacturer details for items.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
> can you look in metadata.tables to see what it actually reflected ?
>
>
>
>
>
>
> On 03/11/2016 12:09 PM, Alex Hall wrote:
>> That's weird: the name I see is exactly what I've been using, "VENDR".
>> All caps and everything. I tried using lowercase, just to see what it
>> would do, but it failed.
>>
>> On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>>
>>>
>>> On 03/11/2016 09:39 AM, Alex Hall wrote:
>>>> Hello list,
>>>> Finally, a pure SA question from me. I'm using Automap and the "only"
>>>> keyword to automap a subset of the tables in our CMS database. This
>>>> has worked perfectly thus far. Now, though, it's failing on a specific
>>>> table, and the only difference I can see is that this table's name is
>>>> in all caps, whereas the rest are all lowercase. Capitalization
>>>> shouldn't matter, right?
>>>
>>> it does, as ALLCAPS is case sensitive and indicates quoting will be
>>> used.   How to handle this depends on the exact name that's in the
>>> database and if it truly does not match case-insensitively.
>>>
>>> Examine the output of:
>>>
>>> inspect(engine).get_table_names()
>>>
>>> find your table, and that's the name you should use.
>>>
>>>
>>>
>>>
>>> Stranger still, the actual reflection doesn't
>>>> error out. Later, where I try to assign base.classes.MYTABLE to a
>>>> variable, is where I get an AttributeError. Here's my code:
>>>>
>>>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>>>> %(username, password, dsn))
>>>> base = automap_base()
>>>> session = Session(engine)
>>>> metadata = sqlalchemy.MetaData()
>>>> desiredTables = ["table", "othertable", "VENDR"]
>>>> metadata.reflect(engine, only=desiredTables) #works fine
>>>>
>>>> table = base.classes.table #fine
>>>> table2 = base.classes.othertable #fine
>>>> vendorTable = base.classes.VENDR #AttributeError
>>>>
>>>> I've added and removed tables as I adjust this script, and all of them
>>>> work perfectly. This VENDR table is the first one in two days to cause
>>>> problems. If I iterate over all the classes in base.classes and print
>>>> each one, I don't even see it in that list, so SA isn't simply
>>>> transforming the name. This is probably a simple thing, but I don't
>>>> see the problem. Thanks for any suggestions.
>>>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups
>>> "sqlalchemy" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an
>>> email to sqlalchemy+unsubscr...@googlegroups.com.
>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/sqlalchemy.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
That's weird: the name I see is exactly what I've been using, "VENDR".
All caps and everything. I tried using lowercase, just to see what it
would do, but it failed.

On 3/11/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>
>
> On 03/11/2016 09:39 AM, Alex Hall wrote:
>> Hello list,
>> Finally, a pure SA question from me. I'm using Automap and the "only"
>> keyword to automap a subset of the tables in our CMS database. This
>> has worked perfectly thus far. Now, though, it's failing on a specific
>> table, and the only difference I can see is that this table's name is
>> in all caps, whereas the rest are all lowercase. Capitalization
>> shouldn't matter, right?
>
> it does, as ALLCAPS is case sensitive and indicates quoting will be
> used.   How to handle this depends on the exact name that's in the
> database and if it truly does not match case-insensitively.
>
> Examine the output of:
>
> inspect(engine).get_table_names()
>
> find your table, and that's the name you should use.
>
>
>
>
> Stranger still, the actual reflection doesn't
>> error out. Later, where I try to assign base.classes.MYTABLE to a
>> variable, is where I get an AttributeError. Here's my code:
>>
>> engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
>> %(username, password, dsn))
>> base = automap_base()
>> session = Session(engine)
>> metadata = sqlalchemy.MetaData()
>> desiredTables = ["table", "othertable", "VENDR"]
>> metadata.reflect(engine, only=desiredTables) #works fine
>>
>> table = base.classes.table #fine
>> table2 = base.classes.othertable #fine
>> vendorTable = base.classes.VENDR #AttributeError
>>
>> I've added and removed tables as I adjust this script, and all of them
>> work perfectly. This VENDR table is the first one in two days to cause
>> problems. If I iterate over all the classes in base.classes and print
>> each one, I don't even see it in that list, so SA isn't simply
>> transforming the name. This is probably a simple thing, but I don't
>> see the problem. Thanks for any suggestions.
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



[sqlalchemy] reflection fails on table with name in all caps

2016-03-11 Thread Alex Hall
Hello list,
Finally, a pure SA question from me. I'm using Automap and the "only"
keyword to automap a subset of the tables in our CMS database. This
has worked perfectly thus far. Now, though, it's failing on a specific
table, and the only difference I can see is that this table's name is
in all caps, whereas the rest are all lowercase. Capitalization
shouldn't matter, right? Stranger still, the actual reflection doesn't
error out. Later, where I try to assign base.classes.MYTABLE to a
variable, is where I get an AttributeError. Here's my code:

engine = sqlalchemy.create_engine("mssql+pyodbc://%s:%s@%s"
%(username, password, dsn))
base = automap_base()
session = Session(engine)
metadata = sqlalchemy.MetaData()
desiredTables = ["table", "othertable", "VENDR"]
metadata.reflect(engine, only=desiredTables) #works fine

table = base.classes.table #fine
table2 = base.classes.othertable #fine
vendorTable = base.classes.VENDR #AttributeError

I've added and removed tables as I adjust this script, and all of them
work perfectly. This VENDR table is the first one in two days to cause
problems. If I iterate over all the classes in base.classes and print
each one, I don't even see it in that list, so SA isn't simply
transforming the name. This is probably a simple thing, but I don't
see the problem. Thanks for any suggestions.
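
Given the script above, a quick diagnostic sketch (not from the thread) is to print what the database reports, what reflect() loaded, and what automap actually mapped; only tables with a primary key show up in the last list:

from sqlalchemy import inspect

print inspect(engine).get_table_names()   # names exactly as the database reports them
print metadata.tables.keys()              # what metadata.reflect() loaded
print list(base.classes.keys())           # what automap mapped (tables with a PK)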



Re: [sqlalchemy] Re: Ways of processing multiple rows with same ID?

2016-03-10 Thread Alex Hall
What I'm doing, and sorry for not explaining further, is making a CSV
file of data. Each row is a row in my results, or would be if I were
just selecting from products. Having to select from attributes as well
is where I'm having problems. Each product can have multiple
attributes, and each attribute value can be assigned to multiple
products. Joining everything (by using filter()) is giving me way too
many results to deal with effectively.

Say we had a product with an ID of 001, a name of "widget", and an
image of "images/001.jpg". This product has a weight and a color, but
those attributes are in the attributevalues table.

Attributes work something like this: attributeassignments has a
product ID and an attribute ID. Attributes has an attribute ID and an
attribute name ("color", "size", and so on). AttributeValues has an
attribute ID and a value ("blue", "55", etc).

For our widget, attributes might have 001, "size"; 002, "color"; 003,
"weight" (attribute IDs and their names). AttributeAssignment might
have 001, 001; 001, 002; 001, 003, where the first 001 is the widget's
product ID and the second numbers are the IDs of the different
attributes.
AttributeValues might have 001, "1x2x3"; 002, "blue"; 003, "55".

In the CSV file, I want to put each of those three attributes under a column:
001, Widget, images/001.jpg, blue, 55, 1x2x3

Currently, I'm iterating over the results. On the first line inside the
loop, I check the current result's ID and compare it to the previous
one. If they match, I assume I'm still on the same item, so I grab the
attribute values from the row. If the two IDs differ, I assume that
item is done: I write out the values collected for it, clear the array
I use to store them, and pull the non-attribute values from the new
row.

My current query does so much joining that my results are too large to
manage, though. The very first iteration works perfectly, but then I
get stuck with the same product ID number. Even when I raise the query
limit to 100,000, I never see any other product ID than that first
one. It feels like an infinite loop, but my loop is simply,
items = 
for result in items:
 ...

I hope this makes more sense. I've re-read the ORM tutorial as
Jonathan suggested, too. The last bit, about many-to-many
relationships, seems like it might be useful. I don't quite follow it
all, but hopefully some of it will make more sense the more I re-read
it.
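
A sketch of that collapsing step, assuming the query is ordered by the item ID and that the column names (itm_id, attributeName, attributeValue) match the real ones, is to sort the rows and let itertools.groupby cut them into one group per item:

from itertools import groupby

rows = session.query(itemTable, attributeTable)\
    .filter(itemTable.itm_id == attributeTable.itm_id)\
    .order_by(itemTable.itm_id)\
    .all()

for item_id, group in groupby(rows, key=lambda row: row[0].itm_id):
    group = list(group)
    item = group[0][0]  # the item columns are identical in every row of the group
    attrs = dict((row[1].attributeName, row[1].attributeValue) for row in group)
    # write one CSV line per item here, from item plus the attrs dict

Because groupby only groups adjacent rows, the order_by on the item ID is what keeps a product from surfacing again later in the loop.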

On 3/10/16, Jonathan Vanasco  wrote:
> 2 comments:
>
> 1.  Go through the SqlAlchemy ORM tutorial.  What you're describing right
> now is dancing around some very basic relationship definition and
> loading/query techniques.
>
> 2. You haven't described what you actually want to accomplish, only some
> general ideas of how you think you could interact with data.  If you want
> to grab a matrix that has every product, there will be one set of ways to
> approach the problem.  If you want to return an iterable list of all
> variations of a certain product, there will be another set of ways.  If you
>
> want to search by specific criteria, there will be yet another set of ways.
>
>  depending on what you do with the data, the query will be different.
>
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



[sqlalchemy] Ways of processing multiple rows with same ID?

2016-03-10 Thread Alex Hall
Hi list,
I'm not sure how to explain this, so let me know if I lose you. I have
the same products database as yesterday, but I've just learned that
product attributes are stored in their own tables. A product can have
many attributes (size, color, weight, etc), and each attribute value
is in a table. That table is tied to the product through an attribute
assignment table, which lets us write "large" once and then assign
that to thousands of products at once, for instance. Essentially, the
item_id is a foreign key into attributeAssignment, which also has an
attr_value_id. That attr_value_id matches the PK in attr_values, which
is the table that actually holds the attribute text.

The problem is that, when I use filter() to join all this stuff
together, I get valueCount*productCount rows. That's not really a
problem, actually, as it's doing what I want. Putting things back
together is going to be a challenge, though. I essentially want, for
example, color and size under the same product ID, but my current
query will return two different rows with the same ID. One row will
have the color, and the next row will have the size. I don't think I
can flatten these out, so my next idea is doing post-query processing
as I iterate through the results.

I'm tempted to just hard-code a sub-loop, to iterate through each n
rows, knowing that n will be the number of rows that share an ID.
Using grouping should make that work. My fear is that I'll get a set
of data which, somehow, has a different size--maybe a missing
attribute--and thus my entire system will be off a row or two. My next
idea is to store the ID of the first row inside the for loop iterating
through all the rows. In that for loop is a while loop: while
IDOfNextRow==currentID: (check IDOfNewRow). That way, I can keep
related rows together and manually pull out the data I need for each
one. Using group-by, I shouldn't ever have a case where a used ID
surfaces again way down the line.

Is there an easier way I haven't thought of? I can't be the first
person to run into this, and SA has a lot of powerful features that
make doing DB work easy, so maybe there's something I just don't know
about. As I said, I'm not so new to SQL that I just started last week,
but neither am I any kind of experienced user at all; maybe SQL itself
can offer something here. Thanks for any information/ideas anyone has,
and again, let me know if I haven't explained this well enough.



[sqlalchemy] Re: properties of query results if names overlap?

2016-03-09 Thread Alex Hall
I think I answered my own question: the result variable gets
properties named for the column names, as usual, but those properties
are each under their respective table names. Those table names come
from the actual table name (I'm using auto-map) or, presumably, the
__tablename__ variable for declarative bases. That is:

for result in results:
 print result.items.itm_id
 print result.assignments.itm_id

At least, this is working for now. Please let me know if I'm missing a
piece or need to know more before some future bit of code turns out to
hide a gotcha I don't yet know about. Thanks.
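
A short sketch of both ways of keeping the duplicate itm_id columns apart, assuming the automapped classes are named items and assignments as above (the explicit labels are just illustrative names):

# Attribute access through the class/table name, as described above:
results = session.query(itemTable, assignmentTable)\
    .filter(itemTable.itm_id == assignmentTable.itm_id)\
    .all()
for result in results:
    print result.items.itm_id, result.assignments.itm_id

# Or label the columns so the result names don't depend on table names:
labeled = session.query(itemTable.itm_id.label("item_id"),
                        assignmentTable.itm_id.label("assignment_item_id"))\
    .filter(itemTable.itm_id == assignmentTable.itm_id)\
    .all()
for row in labeled:
    print row.item_id, row.assignment_item_id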

On 3/9/16, Alex Hall <ah...@autodist.com> wrote:
> Hi all,
> Just a quick question: what does SA do if names overlap? For example,
> in assignmentTable, there's a column called itm_id. In
> attachmentTable, there's also a column called itm_id, and there's one
> in itemTable as well. If I combine these in a kind of join, as in:
>
> results = session.query(assignmentTable, attachmentTable)\
>  .filter(assignmentTable.itm_id == attachmentTable.itm_id)\
>  .all()
>
> for result in results:
>  print result.itm_id
>
> What will that print? I love SA's named properties for query results,
> and would much rather use them than indexes if possible. Is this just
> not allowed for name overlaps, or does SA do some trick with the table
> names to allow it? Thanks!
>



Re: [sqlalchemy] Re: Select * but apply distinct to one column

2016-03-09 Thread Alex Hall
That makes sense. Part of my problem is that, as I've mentioned in the
past, I was recently hired. I didn't set anything up, and I still
don't know for sure what I can trust to be unique, or 6 versus 8
characters, or a lot of other small details. That said, SSMS shows the
item ID as a primary key, which means it is unique. I think I'm safe
to just apply distinct() to my entire query, since there's no way the
ID can ever be repeated. I've been looking at a bunch of tables today,
and I had it in my head that the id in this one was only *part of* the
PK and thus could be duplicated. At least I learned something from all
this. Thanks again for the help, guys.

On 3/9/16, Jonathan Vanasco <jonat...@findmeon.com> wrote:
>
> On Wednesday, March 9, 2016 at 3:02:05 PM UTC-5, Alex Hall wrote:
>>
>> Fair enough, thanks. I didn't realize it was such a complex task; I
>> figured it was just a matter of passing an argument to distinct() or
>> something equally easy.
>>
>
>
> Yeah PostgreSQL is the only db that supports "DISTINCT ON"... but it can be
>
> very awkward.
>
> Let me try to explain this better, because I was you a few years ago -- and
>
> thought / believed the same things.  (and this still annoys me!)
>
> Here's a table to represent an imaginary situation where  `id` is the
> primary key (it is unique) but the other columns aren't.
>
> id | product_id | name
> ===+============+=====
> 1  | 1          | foo
> 2  | 1          | bar
> 3  | 2          | biz
> 4  | 2          | bang
> 5  | 3          | foo
> 6  | 1          | bar
>
> The distinct column values are:
>
> id - 1, 2, 3, 4, 5, 6
> product_id - 1, 2, 3
> name - foo, bar, biz, bang
>
> If you want to get distinct data from the table though, you need to think
> in rows.  (unless you're querying for column data)
>
> If you want "distinct" rows based on the product id, how should these 3
> rows be handled?
>
> 1  | 1  | foo
> 2  | 1  | bar
> 6  | 1  | bar
>
> They all have 1 for the product_id.
>
> The rows are all distinct if we think of the primary id key being an
> attribute.
> If we limit the distinction to the product_id and the name, we can drop the
>
> 3 down to 2 combinations:
>
> 1  | foo
> 1  | bar
>
> But this probably won't work for your needs.
>
> The (1, foo) row corresponds to id 1;
> but the (1, bar) row could correspond to (2,1,bar) or (6,1,bar) rows in the
>
> table.
>
> So when you say only want rows "where the item number is distinct.", you
> should try asking "What should I do with rows where the item_number isn't
> distinct?"
>
> That should raise some red flags for you, and help you realize that you
> probably don't really want rows where the item number is distinct.You
> probably want to do some other query and approach some other goal.
>
> "DISTINCT" is (usually) a really complex situation because people often
> think it will do one thing, but it does something very different... and to
> accomplish the task they want, it's a totally different query.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



[sqlalchemy] properties of query results if names overlap?

2016-03-09 Thread Alex Hall
Hi all,
Just a quick question: what does SA do if names overlap? For example,
in assignmentTable, there's a column called itm_id. In
attachmentTable, there's also a column called itm_id, and there's one
in itemTable as well. If I combine these in a kind of join, as in:

results = session.query(assignmentTable, attachmentTable)\
 .filter(assignmentTable.itm_id == attachmentTable.itm_id)\
 .all()

for result in results:
 print result.itm_id

What will that print? I love SA's named properties for query results,
and would much rather use them than indexes if possible. Is this just
not allowed for name overlaps, or does SA do some trick with the table
names to allow it? Thanks!



Re: [sqlalchemy] Re: Select * but apply distinct to one column

2016-03-09 Thread Alex Hall
Fair enough, thanks. I didn't realize it was such a complex task; I
figured it was just a matter of passing an argument to distinct() or
something equally easy. Speed isn't a huge concern, so I suppose I
could get around this by storing the item numbers I find and then
checking that the row I'm about to use doesn't have a number in that
set. Still, there could be hundreds of thousands of items, so that
might not be the best plan. Anyway, I'll look into it more.
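
For what it's worth, the client-side version described above could be as simple as this sketch (item_number is a stand-in for the real column name):

seen = set()
unique_items = []
for item in session.query(itemTable).yield_per(1000):
    if item.item_number in seen:
        continue
    seen.add(item.item_number)
    unique_items.append(item)

Only the set of item numbers is held in memory, so even a few hundred thousand rows stay manageable, though the database is still doing all the scanning.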

On 3/9/16, Jonathan Vanasco  wrote:
> It would probably be best for you to figure out the correct raw sql you
> want, then convert it to SqlAlchemy.
>
> Postgres is the only DB I know of that offers "DISTINCT ON (columns)" --
> and even that works a bit awkward.
>
> The query that you want to do isn't actually simple -- there are concerns
> with how to handle duplicate rows (based on the distinct field).Often
> people will use "GROUP BY" + "ORDER BY" along with distincts, subselects
> and misc database functions.
>
> If I were in your place, I would read through some DB tutorials and
> StackOverflow questions on how people are dealing with similar problems.
>  That should help you learn.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



[sqlalchemy] Select * but apply distinct to one column

2016-03-09 Thread Alex Hall
Hi all,
I want to select * from a table, getting all columns. However, the
only rows I want are where the item number is distinct. I've got:
items = session.query(itemTable)\
.distinct()\
.limit(10)
But that doesn't apply "distinct" to just item_number. I'm not the
best with SQL in general or I'd express the query I want so you could
see it. Hopefully my explanation is clear enough.
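
For the archives: since this is SQL Server rather than Postgres, one way to get "one row per item number" is a window function in a subquery. A sketch, with item_number and item_id as stand-ins for the real column names:

from sqlalchemy import func
from sqlalchemy.orm import aliased

row_num = func.row_number()\
    .over(partition_by=itemTable.item_number, order_by=itemTable.item_id)\
    .label("rn")

subq = session.query(itemTable, row_num).subquery()
item_alias = aliased(itemTable, subq)
items = session.query(item_alias).filter(subq.c.rn == 1).limit(10).all()

The order_by inside the window decides which of the duplicate rows survives, which is exactly the question the later replies say you have to answer anyway.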

After my fighting with the iSeries here at work a few weeks ago, I set
up SA to access a Microsoft SQL database yesterday... In about two
hours. That includes setting up the DSN, getting the database name
wrong, getting the credentials wrong, and other non-SA problems. With
all that, SA itself was a breeze. It's amazing what happens when you
don't try to use IBM machines in the mix. :)



Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
Indeed, the ibm_db list told me that testing with pyodbc was limited.
I'd skip pyodbc, but so far, it's the *only* package that is able to
connect me to the server, so I have to use it. Hopefully I can get
more information from ibm_db.

Adding the properties dbms_ver and dbms_name was a good idea. When I
do it in my iSeriesConnect() function (where I connect through
pyodbc), Python says that my connection object has no attribute
dbms_ver. I suppose I could subclass it, but it seems like this can't
be the right way to go--I don't know if the version is used for some
important check way off in the code somewhere, and if guessing values
will thus cause unforeseen problems. Still, it's worth a try.
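
One hedged way to try Simon's idea without subclassing pyodbc.Connection itself (which won't accept new attributes) is a small wrapper that fakes the two properties and delegates everything else; the version strings below are pure guesses:

import pyodbc

class ConnectionShim(object):
    dbms_ver = "07.01.0000"   # guessed value; whatever ibm_db would normally report
    dbms_name = "AS"          # guessed value

    def __init__(self, conn):
        self._conn = conn

    def __getattr__(self, name):
        # everything else (cursor, commit, close, ...) falls through to pyodbc
        return getattr(self._conn, name)

def iSeriesConnect():
    raw = pyodbc.connect("DSN=mydsn;UID=username;PWD=password")  # placeholder
    return ConnectionShim(raw)

Whether the DB2 dialect is happy with a wrapped connection is untested; it only papers over the version check, as the thread warns.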

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> I guess that's a symptom of the ibm_db_sa package not being very well
> tested with pyodbc
>
> I'm very confused by pyodbc's version numbers -
> https://pypi.python.org/pypi/pyodbc/3.0.10 suggests that version 3.0.10
> exists and was uploaded on 2015-04-29, but also says that the latest
> version is 2.1.9, which was uploaded on 2015-09-24. I've never used pyodbc
> so don't know what to make of that.
>
> I assume your error is coming from this line:
>
> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/ibm_db_sa/base.py#L696
>
> ...which is in the base dialect, rather than the pyodbc-specific bit. Maybe
> you could hack around the problem by setting a "dbms_ver" attribute on the
> pyodbc connection that you are creating in your custom creator function.
> This is where it gets used:
>
> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/ibm_db_sa/base.py#L481
>
> Simon
>
> On Fri, Feb 19, 2016 at 5:00 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> That makes more sense, but as soon as I put "+pyodbc" in, I'm back to
>> last week's "pyodbc.Connection object has no attribute dbms_ver"
>> error. Pyodbc seems to be the problem, which is ironic--on its own,
>> pyodbc is the only way I've been able to talk to the server at all.
>> Add it to SA, though, and that attribute error appears.
>>
>> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> > According to
>> >
>> http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html#registering-new-dialects
>> ,
>> > a dialect registered as "db2.pyodbc" should be specified in the URL as
>> > "db2+pyodbc://". Does that make any difference?
>> >
>> > On Fri, Feb 19, 2016 at 4:20 PM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> Thanks. I tried both, and tried other variations including or excluding
>> >> the module name as a prefix (ibm_db_sa.db2.pyodbc://). In most cases,
>> >> I get:
>> >> sqlalchemy.exc.ArgumentError: could not parse RFC1738 URL from string
>> >> [my connection string]".
>> >>
>> >> If I don't get that, it's because I used a name that complains about
>> >> there being no attribute dbms_ver or server.version, depending on the
>> >> string.
>> >>
>> >> They don't make it easy, do they?
>> >>
>> >> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> >> > URI prefixes are defined in the setup.py for the ibm_db_sa package:
>> >> >
>> >> >
>> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/setup.py
>> >> >
>> >> > I would guess that you want to end up with the DB2Dialect_pyodbc
>> class,
>> >> > which means you should use db2.pyodbc:// or ibm_db_sa.pyodbc://
>> >> >
>> >> > Simon
>> >> >
>> >> >
>> >> > On Fri, Feb 19, 2016 at 3:33 PM, Alex Hall <ah...@autodist.com>
>> wrote:
>> >> >
>> >> >> Thanks, that looks like what I'm looking for. I assume specifying
>> >> >> "ibm_db_sa://" for the string will let SA use the proper dialect?
>> >> >>
>> >> >> I'm now getting "pyodbc.Connection object has no attribute
>> >> >> server_info", in case anyone happens to know what that's about. I'm
>> >> >> getting nightmarish flashbacks to my "has no attribute" error last
>> >> >> week for the same object. But at least this is a different one;
>> >> >> I'll
>> >> >> count it as a good thing!
>> >> >>
>> >> >> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> >> >> > On Fri, Feb 19, 2016 at 2:38 PM,

Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
That makes more sense, but as soon as I put "+pyodbc" in, I'm back to
last week's "pyodbc.Connection object has no attribute dbms_ver"
error. Pyodbc seems to be the problem, which is ironic--on its own,
pyodbc is the only way I've been able to talk to the server at all.
Add it to SA, though, and that attribute error appears.

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> According to
> http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html#registering-new-dialects,
> a dialect registered as "db2.pyodbc" should be specified in the URL as
> "db2+pyodbc://". Does that make any difference?
>
> On Fri, Feb 19, 2016 at 4:20 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Thanks. I tried both, and tried other variations including or excluding
>> the module name as a prefix (ibm_db_sa.db2.pyodbc://). In most cases,
>> I get:
>> sqlalchemy.exc.ArgumentError: could not parse RFC1738 URL from string
>> [my connection string]".
>>
>> If I don't get that, it's because I used a name that complains about
>> there being no attribute dbms_ver or server.version, depending on the
>> string.
>>
>> They don't make it easy, do they?
>>
>> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> > URI prefixes are defined in the setup.py for the ibm_db_sa package:
>> >
>> > https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/setup.py
>> >
>> > I would guess that you want to end up with the DB2Dialect_pyodbc class,
>> > which means you should use db2.pyodbc:// or ibm_db_sa.pyodbc://
>> >
>> > Simon
>> >
>> >
>> > On Fri, Feb 19, 2016 at 3:33 PM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> >> Thanks, that looks like what I'm looking for. I assume specifying
>> >> "ibm_db_sa://" for the string will let SA use the proper dialect?
>> >>
>> >> I'm now getting "pyodbc.Connection object has no attribute
>> >> server_info", in case anyone happens to know what that's about. I'm
>> >> getting nightmarish flashbacks to my "has no attribute" error last
>> >> week for the same object. But at least this is a different one; I'll
>> >> count it as a good thing!
>> >>
>> >> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> >> > On Fri, Feb 19, 2016 at 2:38 PM, Alex Hall <ah...@autodist.com>
>> wrote:
>> >> >
>> >> >> As the subject says, I am connected to our iSeries through straight
>> >> >> pyodbc. That seems to run perfectly. Now, is there a way to use SA
>> >> >> with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
>> >> >> exact same error I was getting when using ibm_db directly. Using
>> >> >> pyodbc, I can specify the driver to be used, and I'm pretty sure
>> >> >> that's the key.
>> >> >>
>> >> >> Can I either use my pyodbc connection with SA and ibm_db_sa for the
>> >> >> dialect, or specify the driver to SA directly? Thanks!
>> >> >>
>> >> >>
>> >> > You can pass a "creator" argument to create_engine if you want to
>> >> > create
>> >> > the connection yourself:
>> >> >
>> >> >
>> >>
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#custom-dbapi-connect-arguments
>> >> >
>> >> >
>> >>
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#sqlalchemy.create_engine.params.creator
>> >> >
>> >> > Hope that helps,
>> >> >
>> >> > Simon
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "sqlalchemy" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to sqlalchemy+unsubscr...@googlegroups.com.
>> >> > To post to this group, send email to sqlalchemy@googlegroups.com.
>> >> > Visit this group at https://groups.google.com/group/sqlalchemy.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >> >
>> >>
>> >> --
>> >> You received this message because you are subscribed to the Google
>> Groups
>> >>

Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
Thanks. I tried both, and tried other variations including or excluding
the module name as a prefix (ibm_db_sa.db2.pyodbc://). In most cases,
I get:
sqlalchemy.exc.ArgumentError: could not parse RFC1738 URL from string
[my connection string]".

If I don't get that, it's because I used a name that complains about
there being no attribute dbms_ver or server.version, depending on the
string.

They don't make it easy, do they?

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> URI prefixes are defined in the setup.py for the ibm_db_sa package:
>
> https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/setup.py
>
> I would guess that you want to end up with the DB2Dialect_pyodbc class,
> which means you should use db2.pyodbc:// or ibm_db_sa.pyodbc://
>
> Simon
>
>
> On Fri, Feb 19, 2016 at 3:33 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Thanks, that looks like what I'm looking for. I assume specifying
>> "ibm_db_sa://" for the string will let SA use the proper dialect?
>>
>> I'm now getting "pyodbc.Connection object has no attribute
>> server_info", in case anyone happens to know what that's about. I'm
>> getting nightmarish flashbacks to my "has no attribute" error last
>> week for the same object. But at least this is a different one; I'll
>> count it as a good thing!
>>
>> On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
>> > On Fri, Feb 19, 2016 at 2:38 PM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> >> As the subject says, I am connected to our iSeries through straight
>> >> pyodbc. That seems to run perfectly. Now, is there a way to use SA
>> >> with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
>> >> exact same error I was getting when using ibm_db directly. Using
>> >> pyodbc, I can specify the driver to be used, and I'm pretty sure
>> >> that's the key.
>> >>
>> >> Can I either use my pyodbc connection with SA and ibm_db_sa for the
>> >> dialect, or specify the driver to SA directly? Thanks!
>> >>
>> >>
>> > You can pass a "creator" argument to create_engine if you want to
>> > create
>> > the connection yourself:
>> >
>> >
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#custom-dbapi-connect-arguments
>> >
>> >
>> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#sqlalchemy.create_engine.params.creator
>> >
>> > Hope that helps,
>> >
>> > Simon
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "sqlalchemy" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to sqlalchemy+unsubscr...@googlegroups.com.
>> > To post to this group, send email to sqlalchemy@googlegroups.com.
>> > Visit this group at https://groups.google.com/group/sqlalchemy.
>> > For more options, visit https://groups.google.com/d/optout.
>> >
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
Thanks, that looks like what I'm looking for. I assume specifying
"ibm_db_sa://" for the string will let SA use the proper dialect?

I'm now getting "pyodbc.Connection object has no attribute
server_info", in case anyone happens to know what that's about. I'm
getting nightmarish flashbacks to my "has no attribute" error last
week for the same object. But at least this is a different one; I'll
count it as a good thing!

On 2/19/16, Simon King <si...@simonking.org.uk> wrote:
> On Fri, Feb 19, 2016 at 2:38 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> As the subject says, I am connected to our iSeries through straight
>> pyodbc. That seems to run perfectly. Now, is there a way to use SA
>> with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
>> exact same error I was getting when using ibm_db directly. Using
>> pyodbc, I can specify the driver to be used, and I'm pretty sure
>> that's the key.
>>
>> Can I either use my pyodbc connection with SA and ibm_db_sa for the
>> dialect, or specify the driver to SA directly? Thanks!
>>
>>
> You can pass a "creator" argument to create_engine if you want to create
> the connection yourself:
>
> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#custom-dbapi-connect-arguments
>
> http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#sqlalchemy.create_engine.params.creator
>
> Hope that helps,
>
> Simon
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



[sqlalchemy] connected using pyodbc; how to hook that to SA?

2016-02-19 Thread Alex Hall
As the subject says, I am connected to our iSeries through straight
pyodbc. That seems to run perfectly. Now, is there a way to use SA
with that connection? When I use "ibm_db_sa+pyodbc://..." I get the
exact same error I was getting when using ibm_db directly. Using
pyodbc, I can specify the driver to be used, and I'm pretty sure
that's the key.

Can I either use my pyodbc connection with SA and ibm_db_sa for the
dialect, or specify the driver to SA directly? Thanks!
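
A sketch of the "creator" approach Simon suggests above, with a placeholder driver name and credentials; the URL only selects the dialect, while every actual connection comes from the function:

import pyodbc
from sqlalchemy import create_engine

def iSeriesConnect():
    # placeholder driver/credentials; use whatever pyodbc string already works
    return pyodbc.connect(
        "DRIVER={iSeries Access ODBC Driver};"
        "SYSTEM=1.2.3.4;UID=username;PWD=password")

engine = create_engine("db2+pyodbc://", creator=iSeriesConnect)

As the rest of the thread shows, the DB2 dialect may still trip over dbms_ver on a raw pyodbc connection, so this only settles the URL/driver half of the problem.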



[sqlalchemy] OT: basic ibm_db script hangs while connecting (was: reflection taking a very long time?)

2016-02-17 Thread Alex Hall
I did as suggested and cut my script back. It still hangs while trying
to connect, or errors out if I try different ports (no surprise
there). Below I'll paste the message I just sent to the ibm_db email
list. At least, if this is identical to what ibm_db_sa is doing, it
means that fixing it here should let SA suddenly start working
properly.

I've cut back to a very, very basic script, using only ibm_db and
nothing else. I'm running into exactly what I did when trying to use
sqlalchemy: one of two errors, or an endless waiting period as I wait
in vain for an answer from the server or a timeout. My script:

import ibm_db
dbConnection = ibm_db.pconnect(
    "DATABASE=myLibraryName;HOSTNAME=1.2.3.4;PORT=" + port +
    ";PROTOCOL=TCPIP;UID=username;PWD=password", "", "")
print ibm_db.conn_errormsg()

I got the connection string from
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.swg.im.dbclient.python.doc/doc/t0054368.html

I made the port number a variable because it is the thing that keeps
giving me different results. The docs say to use 8471 or 9471 for
database access (the latter for SSL), but those are when I get no
response whatsoever. I've also tried 5, 6, and 446, all of
which return errors immediately. The high numbers give me
SQLCode-30081, and 446 gives me -30020. I even tried a DSN, but I got
an error claiming it couldn't locate the specified DSN even though
said DSN is right in the list when I open up ODBC Manager.

The thing is, we have at least five computers that talk to this 400
for hours every day, so I know it can accept incoming connections. The
computer on which I'm running this stuff can even do it, using the
same software the other stations use, so I know my machine has the
right drivers. Is there anything else I could try? I don't know much
about the 400 itself, and it definitely works with all our current
stations with no problems at all. That said, is there something on it
that I should check? Anything anyone can think of will help. Thanks.
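
Before involving ibm_db at all, a plain TCP check can at least show whether a given port is reachable from this machine; this is just a generic diagnostic sketch with the placeholder host and port from above:

import socket

def port_open(host, port, timeout=5):
    try:
        s = socket.create_connection((host, port), timeout=timeout)
        s.close()
        return True
    except socket.error:
        return False

print port_open("1.2.3.4", 8471)

If this returns False or hangs, the problem is networking or the server configuration, not the driver stack.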

On 2/17/16, Michal Petrucha <michal.petru...@konk.org> wrote:
> On Tue, Feb 16, 2016 at 04:02:08PM -0500, Alex Hall wrote:
>> Great; I was hoping you wouldn't say that. :) I've been through them
>> many, many times, trying to get the connection working. I've gone from
>> error to error, and thought I had it all working when I finally got
>> the create_engine line to run with no problem. Apparently I'm not as
>> far along as I thought I was. Back to the drawing board.
>
> Hi Alex,
>
> I just want to reiterate my earlier suggestion – before you try to use
> any SQLAlchemy machinery at all, first try to create a connection from
> your Python runtime directly, using whichever DBAPI driver you want to
> use (most likely you want to create a ibm_db connection object -- do
> not import anything related to SQLAlchemy at this point, neither
> sqlalchemy, nor ibm_db_sa), make sure you are able to execute SQL
> statements using that, and only once you get this to work correctly,
> try to figure out how to make it work with SQLAlchemy.
>
> And, of course, you shouldn't try to get SQLAlchemy to work all at
> once either. First, create an Engine with a connection string, but do
> not try to run any fancy introspection or anything before you make
> sure that you can execute raw SQL queries using that engine. After you
> get *that* out of the way, you can start trying out more advanced
> features of SQLAlchemy.
>
> Baby steps, you know. Divide and conquer. Do not try to solve this
> entire huge problem all at once. (And yes, as you are probably aware
> by now, successfully connecting to an enterprise database server *is*
> a huge problem.) That way you'll avoid false leads like this one.
>
> Good luck!
>
> Michal
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [sqlalchemy] Re: reflection taking a very long time?

2016-02-16 Thread Alex Hall
Great; I was hoping you wouldn't say that. :) I've been through them
many, many times, trying to get the connection working. I've gone from
error to error, and thought I had it all working when I finally got
the create_engine line to run with no problem. Apparently I'm not as
far along as I thought I was. Back to the drawing board.

To keep things on topic for this thread, let me pose a general
question. This database contains hundreds of tables, maybe thousands.
Some are small, a few have thousands or millions of rows. Would
automap choke on all that, or could it handle it? Will mapping all
that fill up my ram, or have any other impact I should consider?
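
On the size question, one hedged answer is not to automap the whole schema at all: reflect only the tables the script needs and let automap map just those. A sketch, where ORDET is a made-up second table name:

from sqlalchemy import MetaData
from sqlalchemy.ext.automap import automap_base

metadata = MetaData()
metadata.reflect(dbEngine, only=["ORHED", "ORDET"])  # just the tables in use

base = automap_base(metadata=metadata)
base.prepare()  # maps only what was reflected, and only tables with a primary key

orders = base.classes.ORHED

Reflection cost then scales with the handful of tables listed rather than the hundreds in the library.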

On 2/16/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> well then you're just not making any database connection.   you'd need
> to check your database connectivity and your connection parameters.
>
>
>
> On 02/16/2016 03:37 PM, Alex Hall wrote:
>> I tried that, hoping for a bit more insight into the problem. However,
>> unless I'm doing something wrong, I don't even get any queries. I get
>> my own print statements, then the script tries to connect and hangs.
>> I've added
>> dbEngine.connect()
>> just to be sure the problem is that first connection, and sure enough,
>> it hangs on that line.
>>
>> On 2/16/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
>>> turning on echo=True inside create_engine() will show you what queries
>>> are emitted as they occur so you can see which ones are taking long
>>> and/or hanging.
>>>
>>>
>>> On 02/16/2016 02:59 PM, Alex Hall wrote:
>>>> Upon re-reading some of the docs, I realized that my problem may still
>>>> be that initial connection. The create-engine doesn't actually
>>>> *connect* to the database, it just sets things up. That means that my
>>>> actual connection happens later, when I try to reflect or use automap.
>>>> When that happens, the connection starts up and the script hangs. I'm
>>>> no closer to solving this, and would love to hear anyone's thoughts,
>>>> but at least I know that my thought of blaming reflect/automap is
>>>> likely incorrect.
>>>>
>>>> On 2/16/16, Alex Hall <ah...@autodist.com> wrote:
>>>>> Hi list,
>>>>> Sorry for all the emails. I've determined that my script is actually
>>>>> connecting to the 400's test database. At least, a print statement
>>>>> placed just after the create_engine call is printing, so I guess we're
>>>>> good there.
>>>>>
>>>>> What I'm running into now is unresponsiveness when I try to reflect or
>>>>> automap the database so I can do some basic queries. As soon as I call
>>>>> either
>>>>> automap.prepare(dbEngine, reflect=True)
>>>>> or
>>>>> metadata = MetaData()
>>>>> metadata.reflect(dbEngine, only=['tableName'])
>>>>>
>>>>> the script stops, hanging there with no response at all. The same
>>>>> thing happened when I was trying to use an inspector on the engine.
>>>>> It's an AS400, so taking a few seconds is a very long time for it.
>>>>> This is being left to run for minutes and isn't doing anything. What,
>>>>> if anything did I do wrong syntactically? Is there a better way to
>>>>> check that my engine is actually ready to go, or some other check I
>>>>> should be making? The full script, minus anything sensitive, is below.
>>>>>
>>>>> import globals
>>>>> import logging
>>>>> from sqlalchemy import *
>>>>> from sqlalchemy.engine import reflection
>>>>> from sqlalchemy.ext.automap import automap_base
>>>>> from sqlalchemy.ext.declarative import declarative_base
>>>>> from sqlalchemy.orm import sessionmaker
>>>>>
>>>>> logger = logging.getLogger(globals.appName+"."+__name__)
>>>>>
>>>>> #set up the sqlalchemy objects
>>>>> logger.debug("Creating database engine, base, and session.")
>>>>> dbEngine =
>>>>> create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
>>>>> print "connected"
>>>>> Session = sessionmaker(bind = dbEngine) #note that's a capital s on
>>>>> Session
>>>>> session = Session() #lowercase s
>>>>> metadata = MetaData()
>>>>> logger.debug("Creating

Re: [sqlalchemy] Re: reflection taking a very long time?

2016-02-16 Thread Alex Hall
I tried that, hoping for a bit more insight into the problem. However,
unless I'm doing something wrong, I don't even get any queries. I get
my own print statements, then the script tries to connect and hangs.
I've added
dbEngine.connect()
just to be sure the problem is that first connection, and sure enough,
it hangs on that line.

On 2/16/16, Mike Bayer <clas...@zzzcomputing.com> wrote:
> turning on echo=True inside create_engine() will show you what queries
> are emitted as they occur so you can see which ones are taking long
> and/or hanging.
>
>
> On 02/16/2016 02:59 PM, Alex Hall wrote:
>> Upon re-reading some of the docs, I realized that my problem may still
>> be that initial connection. The create-engine doesn't actually
>> *connect* to the database, it just sets things up. That means that my
>> actual connection happens later, when I try to reflect or use automap.
>> When that happens, the connection starts up and the script hangs. I'm
>> no closer to solving this, and would love to hear anyone's thoughts,
>> but at least I know that my thought of blaming reflect/automap is
>> likely incorrect.
>>
>> On 2/16/16, Alex Hall <ah...@autodist.com> wrote:
>>> Hi list,
>>> Sorry for all the emails. I've determined that my script is actually
>>> connecting to the 400's test database. At least, a print statement
>>> placed just after the create_engine call is printing, so I guess we're
>>> good there.
>>>
>>> What I'm running into now is unresponsiveness when I try to reflect or
>>> automap the database so I can do some basic queries. As soon as I call
>>> either
>>> automap.prepare(dbEngine, reflect=True)
>>> or
>>> metadata = MetaData()
>>> metadata.reflect(dbEngine, only=['tableName'])
>>>
>>> the script stops, hanging there with no response at all. The same
>>> thing happened when I was trying to use an inspector on the engine.
>>> It's an AS400, so taking a few seconds is a very long time for it.
>>> This is being left to run for minutes and isn't doing anything. What,
>>> if anything did I do wrong syntactically? Is there a better way to
>>> check that my engine is actually ready to go, or some other check I
>>> should be making? The full script, minus anything sensitive, is below.
>>>
>>> import globals
>>> import logging
>>> from sqlalchemy import *
>>> from sqlalchemy.engine import reflection
>>> from sqlalchemy.ext.automap import automap_base
>>> from sqlalchemy.ext.declarative import declarative_base
>>> from sqlalchemy.orm import sessionmaker
>>>
>>> logger = logging.getLogger(globals.appName+"."+__name__)
>>>
>>> #set up the sqlalchemy objects
>>> logger.debug("Creating database engine, base, and session.")
>>> dbEngine =
>>> create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
>>> print "connected"
>>> Session = sessionmaker(bind = dbEngine) #note that's a capital s on
>>> Session
>>> session = Session() #lowercase s
>>> metadata = MetaData()
>>> logger.debug("Creating session.")
>>> print "Creating automap base"
>>> base = automap_base()
>>> print "setting up automapping"
>>> #base.prepare(dbEngine, reflect=True)
>>> metadata.reflect(dbEngine, only=['tableName'])
>>>
>>> def getOrderByNumber(orderID):
>>>   orders = base.classes.ORHED
>>>   order = session.query(orders).filter(orders.OAORNO==orderID).first()
>>>   print order.OAORNO
>>> #end def getOrderByNumber
>>>
>>> getOrderByNumber("AA111")
>>>
>>
>



[sqlalchemy] Re: reflection taking a very long time?

2016-02-16 Thread Alex Hall
Upon re-reading some of the docs, I realized that my problem may still
be that initial connection. The create_engine call doesn't actually
*connect* to the database; it just sets things up. That means that my
actual connection happens later, when I try to reflect or use automap.
When that happens, the connection starts up and the script hangs. I'm
no closer to solving this, and would love to hear anyone's thoughts,
but at least I know that my thought of blaming reflect/automap is
likely incorrect.

On 2/16/16, Alex Hall <ah...@autodist.com> wrote:
> Hi list,
> Sorry for all the emails. I've determined that my script is actually
> connecting to the 400's test database. At least, a print statement
> placed just after the create_engine call is printing, so I guess we're
> good there.
>
> What I'm running into now is unresponsiveness when I try to reflect or
> automap the database so I can do some basic queries. As soon as I call
> either
> automap.prepare(dbEngine, reflect=True)
> or
> metadata = MetaData()
> metadata.reflect(dbEngine, only=['tableName'])
>
> the script stops, hanging there with no response at all. The same
> thing happened when I was trying to use an inspector on the engine.
> It's an AS400, so taking a few seconds is a very long time for it.
> This is being left to run for minutes and isn't doing anything. What,
> if anything did I do wrong syntactically? Is there a better way to
> check that my engine is actually ready to go, or some other check I
> should be making? The full script, minus anything sensitive, is below.
>
> import globals
> import logging
> from sqlalchemy import *
> from sqlalchemy.engine import reflection
> from sqlalchemy.ext.automap import automap_base
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import sessionmaker
>
> logger = logging.getLogger(globals.appName+"."+__name__)
>
> #set up the sqlalchemy objects
> logger.debug("Creating database engine, base, and session.")
> dbEngine =
> create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
> print "connected"
> Session = sessionmaker(bind = dbEngine) #note that's a capital s on Session
> session = Session() #lowercase s
> metadata = MetaData()
> logger.debug("Creating session.")
> print "Creating automap base"
> base = automap_base()
> print "setting up automapping"
> #base.prepare(dbEngine, reflect=True)
> metadata.reflect(dbEngine, only=['tableName'])
>
> def getOrderByNumber(orderID):
>  orders = base.classes.ORHED
>  order = session.query(orders).filter(orders.OAORNO==orderID).first()
>  print order.OAORNO
> #end def getOrderByNumber
>
> getOrderByNumber("AA111")
>



[sqlalchemy] reflection taking a very long time?

2016-02-16 Thread Alex Hall
Hi list,
Sorry for all the emails. I've determined that my script is actually
connecting to the 400's test database. At least, a print statement
placed just after the create_engine call is printing, so I guess we're
good there.

What I'm running into now is unresponsiveness when I try to reflect or
automap the database so I can do some basic queries. As soon as I call
either
automap.prepare(dbEngine, reflect=True)
or
metadata = MetaData()
metadata.reflect(dbEngine, only=['tableName'])

the script stops, hanging there with no response at all. The same
thing happened when I was trying to use an inspector on the engine.
It's an AS400, so taking a few seconds is a very long time for it.
This is being left to run for minutes and isn't doing anything. What,
if anything did I do wrong syntactically? Is there a better way to
check that my engine is actually ready to go, or some other check I
should be making? The full script, minus anything sensitive, is below.

import globals
import logging
from sqlalchemy import *
from sqlalchemy.engine import reflection
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

logger = logging.getLogger(globals.appName+"."+__name__)

#set up the sqlalchemy objects
logger.debug("Creating database engine, base, and session.")
dbEngine = 
create_engine("ibm_db_sa://"+user+":"+pwd+"@"+server+":"+port+"/"+dbName)
print "connected"
Session = sessionmaker(bind = dbEngine) #note that's a capital s on Session
session = Session() #lowercase s
metadata = MetaData()
logger.debug("Creating session.")
print "Creating automap base"
base = automap_base()
print "setting up automapping"
#base.prepare(dbEngine, reflect=True)
metadata.reflect(dbEngine, only=['tableName'])

def getOrderByNumber(orderID):
 orders = base.classes.ORHED
 order = session.query(orders).filter(orders.OAORNO==orderID).first()
 print order.OAORNO
#end def getOrderByNumber

getOrderByNumber("AA111")



Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-16 Thread Alex Hall
You're onto something, I think. When I use a connection string of
"ibm_db_sa://user:pwd@AS400IP:DBAccessPort/DBName"
I get no errors. I don't actually get anything, though; my command
prompt is unresponsive, as though waiting for a script to finish, but
that's all. This seems to be the 400, though, because if I try to
telnet into it with the same address and port, I get the same (lack
of) response. This is progress, at least. Thanks for the help, and
hopefully I'll have it from here! Here's to my next question being
about doing things once I'm IN the database, rather than still
knocking on its door.
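
A quick way to rule SQLAlchemy and the drivers out entirely is a bare socket test against the same host and port (a sketch; the host is the same placeholder as above, and 8471 is just the port from my earlier script):

import socket

sock = socket.create_connection(("AS400IP", 8471), timeout=10)
print "port is reachable"
sock.close()

If that times out the same way telnet does, the hang is a network or firewall issue rather than anything in ibm_db_sa or pyodbc.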

On 2/16/16, Michal Petrucha <michal.petru...@konk.org> wrote:
> On Tue, Feb 16, 2016 at 10:27:40AM -0500, Alex Hall wrote:
>> I have pyodbc 3.0.10, ibm_db_sa 0.3.2, and ibm_db 2.0.6. I'm also
>> talking to people on the ibm_db list, and they suggested I re-install
>> ibm_db and ibm_db_sa according to the official tutorial, which uses
>> easy_install. I did so, but there was no change.
>>
>> As to pyodbc, I'm fine with not using it. Thus far, from the two lists
>> I'm on and more research, I thought I *had to* use it to get things
>> working right. Indeed, when I remove "+pyodbc" from my SA connection
>> string, the dbms_ver error goes away. However, it's replaced by an
>> error that the driver can't find the DSN name I give it, even though I
>> can see that DSN right in the IBM ODBC manager on this computer.
>> Someone mentioned 64-bit versus 32-bit; I'm using the 64-bit version
>> of the ODBC manager, and 64-bit Python. I'm not sure how else to tell
>> if the name of the DSN itself is in the correct format.
>>
>> The traceback is very long, but here it is in full:
>>
>> c:\python27\python.exe DBInterface2.py
>> Traceback (most recent call last):
> [...]
>>   File
>> "c:\python27\lib\site-packages\ibm_db_sa-0.3.2-py2.7.egg\ibm_db_sa\base.p
>> y", line 666, in initialize
>> self.dbms_ver = connection.connection.dbms_ver
>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>
> This traceback is still the dbms_ver thing, did you mean to post the
> other one?
>
> In any case, when you're using a URI in the form of
> "ibm_db_sa://user:pass@host/db_name", at least based on the example in
> the IBM docs [1], I'm guessing that you shouldn't use the ODBC DSN you
> have defined, but rather the server hostname or IP address directly.
> In this case it should be using the IBM DBAPI driver directly, without
> going through ODBC.
>
> Cheers,
>
> Michal
>
>
> [1]:
> https://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.swg.im.dbclient.python.doc/doc/t0060891.html
>



Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-16 Thread Alex Hall
I have pyodbc 3.0.10, ibm_db_sa 0.3.2, and ibm_db 2.0.6. I'm also
talking to people on the ibm_db list, and they suggested I re-install
ibm_db and ibm_db_sa according to the official tutorial, which uses
easy_install. I did so, but there was no change.

As to pyodbc, I'm fine with not using it. Thus far, from the two lists
I'm on and more research, I thought I *had to* use it to get things
working right. Indeed, when I remove "+pyodbc" from my SA connection
string, the dbms_ver error goes away. However, it's replaced by an
error that the driver can't find the DSN name I give it, even though I
can see that DSN right in the IBM ODBC manager on this computer.
Someone mentioned 64-bit versus 32-bit; I'm using the 64-bit version
of the ODBC manager, and 64-bit Python. I'm not sure how else to tell
if the name of the DSN itself is in the correct format.

The traceback is very long, but here it is in full:

c:\python27\python.exe DBInterface2.py
Traceback (most recent call last):
  File "DBInterface2.py", line 28, in 
getAllTables()
  File "DBInterface2.py", line 22, in getAllTables
dbInspector = reflection.Inspector.from_engine(dbEngine)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py", line 135
, in from_engine
return Inspector(bind)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py", line 109
, in __init__
bind.connect().close()
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2018, in
connect
return self._connection_cls(self, **kwargs)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 72, in __
init__
if connection is not None else engine.raw_connection()
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2104, in
raw_connection
self.pool.unique_connection, _connection)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2074, in
_wrap_pool_connect
return fn()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 318, in unique_c
onnection
return _ConnectionFairy._checkout(self)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 713, in _checkou
t
fairy = _ConnectionRecord.checkout(pool)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 480, in checkout

rec = pool._do_get()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 1060, in _do_get

self._dec_overflow()
  File "c:\python27\lib\site-packages\sqlalchemy\util\langhelpers.py", line 60,
in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 1057, in _do_get

return self._create_connection()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 323, in _create_
connection
return _ConnectionRecord(self)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 454, in __init__

exec_once(self.connection, self)
  File "c:\python27\lib\site-packages\sqlalchemy\event\attr.py", line 246, in ex
ec_once
self(*args, **kw)
  File "c:\python27\lib\site-packages\sqlalchemy\event\attr.py", line 256, in __
call__
fn(*args, **kw)
  File "c:\python27\lib\site-packages\sqlalchemy\util\langhelpers.py", line 1312
, in go
return once_fn(*arg, **kw)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\strategies.py", line 165
, in first_connect
dialect.initialize(c)
  File "c:\python27\lib\site-packages\sqlalchemy\connectors\pyodbc.py", line 154
, in initialize
super(PyODBCConnector, self).initialize(connection)
  File "c:\python27\lib\site-packages\ibm_db_sa-0.3.2-py2.7.egg\ibm_db_sa\base.p
y", line 666, in initialize
self.dbms_ver = connection.connection.dbms_ver
AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'


On 2/15/16, Simon King <si...@simonking.org.uk> wrote:
> What does the traceback say? That exact line would trigger an error much
> like the one you are seeing, if the object in “connection.connection” is a
> pyodbc.Connection and doesn’t have a “dbms_ver” attribute.
>
> Note that there are at least 3 packages that could be involved here:
>
> pyodbc (https://pypi.python.org/pypi/pyodbc)
>
> ibm_db (https://pypi.python.org/pypi/ibm_db/)
>
> ibm_db_sa (https://pypi.python.org/pypi/ibm_db_sa)
>
> What versions do you have of each of them? Note that
> https://github.com/ibmdb/python-ibmdbsa/tree/master/ibm_db_sa says that
> pyodbc support is experimental.
>
> Simon
>
>> On 15 Feb 2016, at 21:07, Alex Hall <ah...@autodist.com> wrote:
>>
>> I just downloaded a fresh copy of 0.3.2, just in case I had somehow
>> gotten an old version from Pip. I looked in base.py, and found:
>>

Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-15 Thread Alex Hall
I just downloaded a fresh copy of 0.3.2, just in case I had somehow
gotten an old version from Pip. I looked in base.py, and found:

def initialize(self, connection):
    super(DB2Dialect, self).initialize(connection)
    self.dbms_ver = connection.connection.dbms_ver

While I'm not sure what I can do about it, it looks like this dbms_ver
property is definitely in the latest ibm_db_sa version. Am I getting
this from the wrong place, or confusing this with a different package
somehow? I *must* be missing something obvious.
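
In case it helps anyone else hitting this, a possible stopgap (untested, and assuming DB2Dialect really does live in ibm_db_sa.base as the snippet above suggests) is to override initialize() so a pyodbc connection that lacks dbms_ver doesn't blow up:

from ibm_db_sa import base as ibm_base

def _patched_initialize(self, connection):
    super(ibm_base.DB2Dialect, self).initialize(connection)
    #pyodbc connections don't expose dbms_ver; fall back to None
    self.dbms_ver = getattr(connection.connection, "dbms_ver", None)

ibm_base.DB2Dialect.initialize = _patched_initialize

That only papers over the symptom, though; the real question is still which driver combination to use.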

On 2/15/16, Alex Hall <ah...@autodist.com> wrote:
> An interesting development. I noticed that in site-packages\ibm_db_sa
> was pyodbc.py. Thinking that might be an older version, I renamed it,
> trying to force the import to use my installed version instead. It now
> says "cannot import name pyodbc". I thought Python searched the
> current directory, then the site-packages one, for modules? If so, and
> if I can import pyodbc with no errors in the shell, why would
> ibm_db_sa fail to import? This may be the problem--it was using an
> older version of pyodbc and can't find the newer one for some reason.
> Any ideas, or am I completely off track with this?
>
> On 2/15/16, Alex Hall <ah...@autodist.com> wrote:
>> Thanks guys. I've checked the version I'm using, and it reports that
>> ibm_db_sa.__version__ is '0.3.2'. I have both ibm_db_sa and ibm_db
>> installed. Should I remove ibm_db and rely only on ibm_db_sa instead?
>> Is the former package causing a conflict somehow?
>>
>> On 2/15/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
>>> Try to use ibm_db_sa 0.3.2 instead; apparently you are using the
>>> previous version. dbms_ver is a feature specific to the native ibm_db
>>> driver, which is not available in pyodbc.
>>>
>>> https://pypi.python.org/pypi/ibm_db_sa/0.3.2
>>>
>>>
>>> Salam,
>>>
>>> -Jaimy.
>>>
>>>
>>> On Feb 12, 2016 22:05, "Alex Hall" <ah...@autodist.com> wrote:
>>>
>>>> Hello list,
>>>> I've configured a DSN to a test version of my work's AS400 and I seem
>>>> to be able to connect just fine (Yes!) I'm now running into a problem
>>>> when I try to ask for a list of all tables. The line is:
>>>>
>>>>  dbInspector = inspect(dbEngine)
>>>>
>>>> The traceback is very long, and I can paste it if you want, but it
>>>> ends with this:
>>>>
>>>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>>>>
>>>> I'm unable to find anything about this online, so thought I'd check
>>>> with this list. Here's my connection:
>>>>
>>>> dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")
>>>>
>>>> If anyone knows what is causing this, I'd appreciate your thoughts.
>>>> I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
>>>> have all the latest versions of everything. I'm on Windows 7x64,
>>>> Python 2.7 (latest).
>>>>
>>>
>>
>



Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-15 Thread Alex Hall
An interesting development. I noticed that in site-packages\ibm_db_sa
was pyodbc.py. Thinking that might be an older version, I renamed it,
trying to force the import to use my installed version instead. It now
says "cannot import name pyodbc". I thought Python searched the
current directory, then the site-packages one, for modules? If so, and
if I can import pyodbc with no errors in the shell, why would
ibm_db_sa fail to import? This may be the problem--it was using an
older version of pyodbc and can't find the newer one for some reason.
Any ideas, or am I completely off track with this?

On 2/15/16, Alex Hall <ah...@autodist.com> wrote:
> Thanks guys. I've checked the version I'm using, and it reports that
> ibm_db_sa.__version__ is '0.3.2'. I have both ibm_db_sa and ibm_db
> installed. Should I remove ibm_db and rely only on ibm_db_sa instead?
> Is the former package causing a conflict somehow?
>
> On 2/15/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
>> Try to use ibm_db_sa 0.3.2 instead; apparently you are using the previous
>> version. dbms_ver is a feature specific to the native ibm_db driver, which
>> is not available in pyodbc.
>>
>> https://pypi.python.org/pypi/ibm_db_sa/0.3.2
>>
>>
>> Salam,
>>
>> -Jaimy.
>>
>>
>> On Feb 12, 2016 22:05, "Alex Hall" <ah...@autodist.com> wrote:
>>
>>> Hello list,
>>> I've configured a DSN to a test version of my work's AS400 and I seem
>>> to be able to connect just fine (Yes!) I'm now running into a problem
>>> when I try to ask for a list of all tables. The line is:
>>>
>>>  dbInspector = inspect(dbEngine)
>>>
>>> The traceback is very long, and I can paste it if you want, but it
>>> ends with this:
>>>
>>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>>>
>>> I'm unable to find anything about this online, so thought I'd check
>>> with this list. Here's my connection:
>>>
>>> dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")
>>>
>>> If anyone knows what is causing this, I'd appreciate your thoughts.
>>> I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
>>> have all the latest versions of everything. I'm on Windows 7x64,
>>> Python 2.7 (latest).
>>>
>>
>



Re: [sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-15 Thread Alex Hall
Thanks guys. I've checked the version I'm using, and it reports that
ibm_db_sa.__version__ is '0.3.2'. I have both ibm_db_sa and ibm_db
installed. Should I remove ibm_db and rely only on ibm_db_sa instead?
Is the former package causing a conflict somehow?
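
One quick sanity check (a sketch) is to print which copy of each package the interpreter actually imports, in case an older install is shadowing the new one:

import pyodbc, ibm_db, ibm_db_sa

for mod in (pyodbc, ibm_db, ibm_db_sa):
    version = getattr(mod, "__version__", getattr(mod, "version", "?"))
    print mod.__name__, version, mod.__file__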

On 2/15/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
> Try to use ibm_db_sa 0.3.2 instead; apparently you are using the previous
> version. dbms_ver is a feature specific to the native ibm_db driver, which
> is not available in pyodbc.
>
> https://pypi.python.org/pypi/ibm_db_sa/0.3.2
>
>
> Salam,
>
> -Jaimy.
>
>
> On Feb 12, 2016 22:05, "Alex Hall" <ah...@autodist.com> wrote:
>
>> Hello list,
>> I've configured a DSN to a test version of my work's AS400 and I seem
>> to be able to connect just fine (Yes!) I'm now running into a problem
>> when I try to ask for a list of all tables. The line is:
>>
>>  dbInspector = inspect(dbEngine)
>>
>> The traceback is very long, and I can paste it if you want, but it
>> ends with this:
>>
>> AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'
>>
>> I'm unable to find anything about this online, so thought I'd check
>> with this list. Here's my connection:
>>
>> dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")
>>
>> If anyone knows what is causing this, I'd appreciate your thoughts.
>> I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
>> have all the latest versions of everything. I'm on Windows 7x64,
>> Python 2.7 (latest).
>>
>



[sqlalchemy] Pyodbc.Connection has no attribute 'dbms_ver'?

2016-02-12 Thread Alex Hall
Hello list,
I've configured a DSN to a test version of my work's AS400 and I seem
to be able to connect just fine (Yes!) I'm now running into a problem
when I try to ask for a list of all tables. The line is:

 dbInspector = inspect(dbEngine)

The traceback is very long, and I can paste it if you want, but it
ends with this:

AttributeError: 'pyodbc.Connection' object has no attribute 'dbms_ver'

I'm unable to find anything about this online, so thought I'd check
with this list. Here's my connection:

dbEngine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")

If anyone knows what is causing this, I'd appreciate your thoughts.
I've installed pyodbc, ibm_db, and ibm_db_sa through pip, so I should
have all the latest versions of everything. I'm on Windows 7x64,
Python 2.7 (latest).



Re: [sqlalchemy] Connecting to AS400 with SQLAlchemy fails

2016-02-12 Thread Alex Hall
Thanks so much for your reply--this really helps! I asked the people
at work, and was told that my machine does, in fact, have some sort of
IBM manager installed. (Can you tell I'm new to this technology and
this job?) Using it, I was able to create a DSN to the test database
and, it seems, connect. I'm getting an error when I call

 dbInspector = inspect(dbEngine)

but at least I'm getting that far. I'll ask about the error in a
separate thread, since more people are likely to have run across that
than seem to have experience with the 400 and IBM's wrapper.

On 2/12/16, Michal Petrucha <michal.petru...@konk.org> wrote:
> On Thu, Feb 11, 2016 at 01:16:03PM -0500, Alex Hall wrote:
>> I've done more research on this topic. There's a lot out there about
>> using MSSQL with SA, but next to nothing about using ibm_db_sa or
>> specifying drivers.
>>
>> I have pyodbc installed. I downloaded IBM's ODBC zip file, and I've
>> put db2odbc64.dll in my project folder, but don't know how to point SA
>> or pyodbc to it. I've tried several versions of
>> "?driver="db2odbc64.dll"" appended to my connection string, but I keep
>> getting an error: "data source not found and no default driver
>> specified". It doesn't even time out anymore, it just errors out
>> immediately. I've also tried "ibm_db_sa+pyodbc://" to start the
>> string, but that fails too.
>>
>> This *must* be a simple thing, but I can't work out what to do, and
>> Google is failing me. If anyone has any ideas, I'd greatly appreciate
>> hearing them. Thanks, and sorry to keep bugging the list about this. I
>> just have no other options at the moment and I need to get this
>> working soon.
>
> Hi Alex,
>
> Unfortunately, I can't offer you any specific help with IBM DB, but
> judging by the number of replies, it seems nobody on this list can, so
> I only have some stab-in-the-dark suggestions.
>
> In my experience with enterprise software, *nothing* is ever a simple
> thing, not even seemingly trivial operations, such as connecting to a
> database.
>
> You can try using either pyodbc, or the ibm_db driver – in both cases,
> those are just the Python DBAPI drivers which take in textual SQL
> statements, send them to the database in the low-level network
> protocol, and present the results as dumb Python objects. SQLAlchemy
> is a layer on top of them. That means, the first step would be to get
> your Python runtime to open a raw pyodbc, or ibm_db connection to the
> server, and be able to execute raw SQL statements there. Only after
> you confirm this works you can move on to getting SQLAlchemy to work
> with the DBAPI driver.
>
>
> In my understanding, pyodbc is a wrapper around the library unixodbc.
> I'm not sure how it's implemented on Windows – whether it's a port of
> unixodbc, or it uses a different ODBC implementation there. Whatever
> the case, though, on Linux with unixodbc, when I wanted to connect to
> MS SQL, I had to register a low-level driver with the unixodbc
> library. I had to edit a system-wide configuration file
> (/etc/unixODBC/odbcinst.ini), and create a new driver definition in
> there to make unixodbc recognize the FreeTDS driver I'm using as the
> low-level protocol implementation.
>
> I have no idea what low-level ODBC driver is required to connect to
> IBM DB, I'm afraid you'll have to figure that out on your own. The
> official IBM docs at
> https://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/c0007944.html?cp=SSEPGG_9.7.0%2F4-0-4
> seem to imply that IBM provides their own low-level ODBC driver which
> you'll need to have in place in order to be able to connect to the
> server using ODBC.
>
> In any case, I would expect that the ODBC machinery would expect to
> have the db2odbc64.dll registered somehow with a symbolic name in some
> configuration file, registry, or whatever, and that would be the
> string you're expected to pass as the driver name in the ODBC
> connection string.
>
> Actually, I think with ODBC, you're expected to define all database
> servers in a system-wide configuration file or some such, give each
> one of them a nickname (“DSN”), and just use that to connect to the
> database.
>
>
> The other option is to use the ibm_db Python DBAPI driver. I expect
> you have already seen the official docs:
> https://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.swg.im.dbclient.python.doc/doc/c0054366.html
> Have you tried following the set-up steps in that section there? Try
> to first get it into a state where you can connect to the database
> with ``ibm_db.connect()``, and successfully execute SQL statements
> from the Python shell

Re: [sqlalchemy] Connecting to AS400 with SQLAlchemy fails

2016-02-11 Thread Alex Hall
I think I'm confused. Isn't Pyodbc an alternative to SQLAlchemy? If
not, how would the two work together? I just looked through the
'Getting Started' and 'API' docs for Pyodbc, and I don't see any
examples. I found some samples online of people using the two
together, but I don't quite follow how the process works. Thanks.
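
My current (possibly wrong) understanding of how the two fit together, with the DSN and credentials as placeholders:

#pyodbc by itself is a DBAPI driver: raw connection, hand-written SQL
import pyodbc
conn = pyodbc.connect("DSN=myDSN;UID=user;PWD=pwd")
cursor = conn.cursor()
cursor.execute("SELECT 1 FROM SYSIBM.SYSDUMMY1")  #DB2's dummy table
print cursor.fetchone()

#SQLAlchemy sits on top of a DBAPI driver; the "+pyodbc" part of the URL
#just tells it which driver to use underneath
from sqlalchemy import create_engine
engine = create_engine("ibm_db_sa+pyodbc://user:pwd@myDSN")
print engine.execute("SELECT 1 FROM SYSIBM.SYSDUMMY1").fetchone()

If that's off base, corrections welcome.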

On 2/10/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
> Connecting to the AS400 from the native ibm_db_dbi driver needs DB2 Connect,
> which is a separate product from IBM. Use the ibm_db_sa pyodbc driver
> instead, or JDBC (Jython) if you don't have DB2 Connect installed on your
> machine.
>
> Salam,
>
> -Jaimy
> On Feb 11, 2016 01:50, "Alex Hall" <ah...@autodist.com> wrote:
>
>> Hello list,
>> I sent this to the ibm_db list yesterday, but no one has responded
>> yet. Since it's as much ibm_db as SA, I thought I'd try here as well
>> in case any of you have used an AS400 before. I have ibm_db,
>> ibm_db_sa, the latest sqlalchemy, and Python 2.7 (latest) installed. I
>> can talk to SQLite with no trouble, it's talking to this 400 that
>> won't work. Anyway...
>>
>> I'm finally ready to hook my app to the 400 instead of the local
>> SQLite database I've been using for testing. Here's my simple script:
>>
>> import globals
>> import logging
>> from sqlalchemy import *
>> from sqlalchemy.ext.declarative import declarative_base
>> from sqlalchemy.orm import sessionmaker
>>
>> #set up the sqlalchemy objects
>> dbEngine = create_engine('ibm_db_sa://
>> username:passw...@mysite.com:8471/database')
>> Session = sessionmaker(bind = dbEngine) #note that's a capital s on
>> Session
>> session = Session() #lowercase s
>> base = declarative_base()
>>
>> def getAllTables():
>>  dbInspector = inspect(dbEngine)
>>  for table in dbInspector.get_table_names():
>>   print table
>>
>> getAllTables()
>>
>> When I run that, it waits thirty seconds or so, then tells me there
>> was an error. I'll paste the entire traceback below. Sorry in
>> advance--it's pretty long.
>>
>> Microsoft Windows [Version 6.1.7601]
>> Copyright (c) 2009 Microsoft Corporation.  All rights reserved.
>>
>> C:\Users\admin\Dropbox\Autodist\jobs>c:\python27\python.exe
>> DBInterface2.py
>> Traceback (most recent call last):
>>   File "DBInterface2.py", line 24, in 
>> getAllTables()
>>   File "DBInterface2.py", line 18, in getAllTables
>> dbInspector = inspect(dbEngine)
>>   File "c:\python27\lib\site-packages\sqlalchemy\inspection.py", line 63,
>> in ins
>> pect
>> ret = reg(subject)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>> line 139
>> , in _insp
>> return Inspector.from_engine(bind)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>> line 135
>> , in from_engine
>> return Inspector(bind)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>> line 109
>> , in __init__
>> bind.connect().close()
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2018, in
>> connect
>> return self._connection_cls(self, **kwargs)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 72,
>> in __
>> init__
>> if connection is not None else engine.raw_connection()
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2104, in
>> raw_connection
>> self.pool.unique_connection, _connection)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2078, in
>> _wrap_pool_connect
>> e, dialect, self)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 1405, in
>> _handle_dbapi_exception_noconnection
>> exc_info
>>   File "c:\python27\lib\site-packages\sqlalchemy\util\compat.py", line
>> 200, in r
>> aise_from_cause
>> reraise(type(exception), exception, tb=exc_tb)
>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>> 2074, in
>> _wrap_pool_connect
>> return fn()
>>   File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 318, in
>> unique_c
>> onnection
>> return _ConnectionFairy._checkout(self)
>>   File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 713, in
>> _checkou
>> 

Re: [sqlalchemy] Connecting to AS400 with SQLAlchemy fails

2016-02-11 Thread Alex Hall
I've done more research on this topic. There's a lot out there about
using MSSQL with SA, but next to nothing about using ibm_db_sa or
specifying drivers.

I have pyodbc installed. I downloaded IBM's ODBC zip file, and I've
put db2odbc64.dll in my project folder, but don't know how to point SA
or pyodbc to it. I've tried several versions of
"?driver="db2odbc64.dll"" appended to my connection string, but I keep
getting an error: "data source not found and no default driver
specified". It doesn't even time out anymore, it just errors out
immediately. I've also tried "ibm_db_sa+pyodbc://" to start the
string, but that fails too.

This *must* be a simple thing, but I can't work out what to do, and
Google is failing me. If anyone has any ideas, I'd greatly appreciate
hearing them. Thanks, and sorry to keep bugging the list about this. I
just have no other options at the moment and I need to get this
working soon.
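
In case it clarifies what I'm attempting, here is the shape of a DSN-less connection attempt with raw pyodbc (a sketch; the DRIVER value has to be the name the IBM driver registered in the ODBC administrator, not the path to db2odbc64.dll, and the keywords and values below are assumptions to adapt):

import pyodbc

connStr = (
    "DRIVER={IBM DB2 ODBC DRIVER};"  #registered driver name, not a .dll path
    "HOSTNAME=mysite.com;PORT=50000;"
    "DATABASE=database;PROTOCOL=TCPIP;"
    "UID=username;PWD=password;")
conn = pyodbc.connect(connStr)
print conn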

On 2/11/16, Alex Hall <ah...@autodist.com> wrote:
> I think I'm confused. Isn't Pyodbc an alternative to SQLAlchemy? If
> not, how would the two work together? I just looked through the
> 'Getting Started' and 'API' docs for Pyodbc, and I don't see any
> examples. I found some samples online of people using the two
> together, but I don't quite follow how the process works. Thanks.
>
> On 2/10/16, Jaimy Azle <jaimy.a...@gmail.com> wrote:
>> Connecting to the AS400 from the native ibm_db_dbi driver needs DB2 Connect,
>> which is a separate product from IBM. Use the ibm_db_sa pyodbc driver
>> instead, or JDBC (Jython) if you don't have DB2 Connect installed on your
>> machine.
>>
>> Salam,
>>
>> -Jaimy
>> On Feb 11, 2016 01:50, "Alex Hall" <ah...@autodist.com> wrote:
>>
>>> Hello list,
>>> I sent this to the ibm_db list yesterday, but no one has responded
>>> yet. Since it's as much ibm_db as SA, I thought I'd try here as well
>>> in case any of you have used an AS400 before. I have ibm_db,
>>> ibm_db_sa, the latest sqlalchemy, and Python 2.7 (latest) installed. I
>>> can talk to SQLite with no trouble, it's talking to this 400 that
>>> won't work. Anyway...
>>>
>>> I'm finally ready to hook my app to the 400 instead of the local
>>> SQLite database I've been using for testing. Here's my simple script:
>>>
>>> import globals
>>> import logging
>>> from sqlalchemy import *
>>> from sqlalchemy.ext.declarative import declarative_base
>>> from sqlalchemy.orm import sessionmaker
>>>
>>> #set up the sqlalchemy objects
>>> dbEngine = create_engine('ibm_db_sa://
>>> username:passw...@mysite.com:8471/database')
>>> Session = sessionmaker(bind = dbEngine) #note that's a capital s on
>>> Session
>>> session = Session() #lowercase s
>>> base = declarative_base()
>>>
>>> def getAllTables():
>>>  dbInspector = inspect(dbEngine)
>>>  for table in dbInspector.get_table_names():
>>>   print table
>>>
>>> getAllTables()
>>>
>>> When I run that, it waits thirty seconds or so, then tells me there
>>> was an error. I'll paste the entire traceback below. Sorry in
>>> advance--it's pretty long.
>>>
>>> Microsoft Windows [Version 6.1.7601]
>>> Copyright (c) 2009 Microsoft Corporation.  All rights reserved.
>>>
>>> C:\Users\admin\Dropbox\Autodist\jobs>c:\python27\python.exe
>>> DBInterface2.py
>>> Traceback (most recent call last):
>>>   File "DBInterface2.py", line 24, in 
>>> getAllTables()
>>>   File "DBInterface2.py", line 18, in getAllTables
>>> dbInspector = inspect(dbEngine)
>>>   File "c:\python27\lib\site-packages\sqlalchemy\inspection.py", line
>>> 63,
>>> in ins
>>> pect
>>> ret = reg(subject)
>>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>>> line 139
>>> , in _insp
>>> return Inspector.from_engine(bind)
>>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>>> line 135
>>> , in from_engine
>>> return Inspector(bind)
>>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py",
>>> line 109
>>> , in __init__
>>> bind.connect().close()
>>>   File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line
>>> 2018, in
>>> connect
>>> return self._connection_cls(self, **kwargs)
>>>   File "c:\

[sqlalchemy] Connecting to AS400 with SQLAlchemy fails

2016-02-10 Thread Alex Hall
Hello list,
I sent this to the ibm_db list yesterday, but no one has responded
yet. Since it's as much ibm_db as SA, I thought I'd try here as well
in case any of you have used an AS400 before. I have ibm_db,
ibm_db_sa, the latest sqlalchemy, and Python 2.7 (latest) installed. I
can talk to SQLite with no trouble, it's talking to this 400 that
won't work. Anyway...

I'm finally ready to hook my app to the 400 instead of the local
SQLite database I've been using for testing. Here's my simple script:

import globals
import logging
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

#set up the sqlalchemy objects
dbEngine = 
create_engine('ibm_db_sa://username:passw...@mysite.com:8471/database')
Session = sessionmaker(bind = dbEngine) #note that's a capital s on Session
session = Session() #lowercase s
base = declarative_base()

def getAllTables():
 dbInspector = inspect(dbEngine)
 for table in dbInspector.get_table_names():
  print table

getAllTables()

When I run that, it waits thirty seconds or so, then tells me there
was an error. I'll paste the entire traceback below. Sorry in
advance--it's pretty long.

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\admin\Dropbox\Autodist\jobs>c:\python27\python.exe DBInterface2.py
Traceback (most recent call last):
  File "DBInterface2.py", line 24, in 
    getAllTables()
  File "DBInterface2.py", line 18, in getAllTables
    dbInspector = inspect(dbEngine)
  File "c:\python27\lib\site-packages\sqlalchemy\inspection.py", line 63, in ins
pect
    ret = reg(subject)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py", line 139
, in _insp
    return Inspector.from_engine(bind)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py", line 135
, in from_engine
    return Inspector(bind)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\reflection.py", line 109
, in __init__
    bind.connect().close()
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2018, in
connect
    return self._connection_cls(self, **kwargs)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 72, in __
init__
    if connection is not None else engine.raw_connection()
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2104, in
raw_connection
    self.pool.unique_connection, _connection)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2078, in
_wrap_pool_connect
    e, dialect, self)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 1405, in
_handle_dbapi_exception_noconnection
    exc_info
  File "c:\python27\lib\site-packages\sqlalchemy\util\compat.py", line 200, in r
aise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\base.py", line 2074, in
_wrap_pool_connect
    return fn()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 318, in unique_c
onnection
    return _ConnectionFairy._checkout(self)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 713, in _checkou
t
    fairy = _ConnectionRecord.checkout(pool)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 480, in checkout

    rec = pool._do_get()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 1060, in _do_get

    self._dec_overflow()
  File "c:\python27\lib\site-packages\sqlalchemy\util\langhelpers.py", line 60,
in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 1057, in _do_get

    return self._create_connection()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 323, in _create_
connection
    return _ConnectionRecord(self)
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 449, in __init__

    self.connection = self.__connect()
  File "c:\python27\lib\site-packages\sqlalchemy\pool.py", line 607, in __connec
t
    connection = self.__pool._invoke_creator(self)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\strategies.py", line 97,
 in connect
    return dialect.connect(*cargs, **cparams)
  File "c:\python27\lib\site-packages\sqlalchemy\engine\default.py", line 385, i
n connect
    return self.dbapi.connect(*cargs, **cparams)
  File "c:\python27\lib\site-packages\ibm_db-2.0.6-py2.7.egg\ibm_db_dbi.py", lin
e 588, in connect
    raise _get_exception(inst)
sqlalchemy.exc.OperationalError: (ibm_db_dbi.OperationalError)
ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30081N  A communication
error has been detected. Communication protocol being used: "TCP/IP".
Communication API being used: "SOCKETS". Location where the error was
detected: "[IP of the 400 server]". Communication function detecting the
error: "connect". Protocol specific error code(s): "10060", "*", "*".
SQLSTATE=08001\r SQLCODE=-30081

(I took out the actual IP). 

Re: [sqlalchemy] Recommended way to delete record given instance?

2016-02-08 Thread Alex Hall
Thanks for the suggestion, that seems to work perfectly! I'm not sure
why the docs made deletions sound more complex than that. Of course, I
may have just missed this.
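
For the archives, the pattern that ended up working (Customer here stands in for whichever mapped class the selected row belongs to):

record = session.query(Customer).filter(Customer.name == "Powersports Etc").first()
if record is not None:
 session.delete(record)
 session.commit()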

On 2/6/16, Thierry Florac <tflo...@gmail.com> wrote:
> Hi !
>
> Did you try "session.delete(myrecord)" ?
>
> Best regards,
> Thierry
>
>
> 2016-02-06 5:16 GMT+01:00 Alex Hall <ah...@autodist.com>:
>
>> Hello all,
>> Another basic question: given an instance of a record, can I somehow
>> delete the record from a table? The longer story is this.
>>
>> I have my app, which lists tables on the left and the selected table's
>> rows on the right. Adding and editing rows both now work correctly
>> (though
>> I have yet to do serious validation--thanks for all your thoughts on
>> that,
>> by the way). Now I just need to get deletion working. However, because
>> the
>> user could be looking at any table, I can't do
>> table.query.filter(id==x).delete()
>> because I can't know what field to use as the search field in that query.
>> One table might have an ID field, another something different. I do,
>> however, have the entire object representing the row selected for
>> deletion.
>> Could I somehow issue a delete statement using that whole object, and
>> know
>> that this will work for any table? I hope I'm making sense--it's been a
>> long day. Thanks!
>>
>>
>> --
>> Alex Hall
>> Automatic Distributors, IT Department
>> ah...@autodist.com
>>
>
>
>
> --
> http://www.imagesdusport.com -- http://www.ztfy.org
>



[sqlalchemy] Using ibm_db to talk to AS400?

2016-02-08 Thread Alex Hall
Hi list,
Thus far, I've been using very basic database actions on a local
SQLite database as I've written the application. Now, though, I have
my GUI able to drive all CRUD operations, and I'm nearing the time
when I'll switch over to the actual system for which I've written the
app: the company's AS400, over a network.

I've installed ibm_db, but I also found something called ibm_db_sa,
claiming to be a package specifically for SQLAlchemy:
https://pypi.python.org/pypi/ibm_db_sa

The problem is, when I unpacked the file, it was just the ibm_db
package I'd already installed. I found an older version of ibm_db_sa,
but it was for Python 2.5 and is from 2013. What, if anything, more do
I need to get SQLAlchemy talking to a DB2 database than the ibm_db
package?

I'll also be talking to the database over a network; will that be a
problem? Right now it's within the same building, but eventually this
app will run on a virtual server hosted by Google and will still need
to be in constant contact with the 400. What should I know, if
anything, about remote servers with SQLAlchemy? Thanks in advance for
any help. I know DB2 isn't as popular as Oracle or MySQL, but I'm
hoping a few people here have used it.



[sqlalchemy] Best practice for restricting input to columns?

2016-02-05 Thread Alex Hall
Hi all,
What's the recommended way to restrict input? For instance, I store a
phone number as an integer, and I need it to have 7, 10, or 11 digits.
In its getter, I format it so it looks good as a string, and in its
setter, I take the string the user inputs, strip only the integers,
and store those.

To enforce my length restriction, I have it set up something like this:

class Customer(base):
 _phone = Column("phone", Integer)

 @property
 def phone(self):
  #return pretty string

 @phone.setter
 def phone(self, value):
  #intsInPhone is a list of the integers in "value"
  if len(intsInPhone) not in [7, 10, 11]: #invalid length
   raise ValueError, "Phone numbers must be 7, 10, or 11 digits"

Is there a way I should do this instead? I had to make properties
anyway, since user-inputted values are coming from a dialog and will
always be strings--I had to convert them to the proper types, and
output formatted strings in some cases. I figured this was a good
place for a little validation while I was at it.
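
For comparison, here is a sketch of the same check using SQLAlchemy's validates() hook instead of a property pair (class and column names mirror my earlier examples; the digit-stripping is inlined just for illustration):

from sqlalchemy import Column, Integer
from sqlalchemy.orm import validates

class Customer(base):
 __tablename__ = "customers"
 id = Column(Integer, primary_key=True)
 phone = Column(Integer)

 @validates("phone")
 def validatePhone(self, key, value):
  #keep only the digits of whatever string the user typed
  intsInPhone = [ch for ch in str(value) if ch.isdigit()]
  if len(intsInPhone) not in (7, 10, 11): #invalid length
   raise ValueError("Phone numbers must be 7, 10, or 11 digits")
  return int("".join(intsInPhone))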



[sqlalchemy] Recommended way to delete record given instance?

2016-02-05 Thread Alex Hall
Hello all,
Another basic question: given an instance of a record, can I somehow delete the 
record from a table? The longer story is this.

I have my app, which lists tables on the left and the selected table's rows on 
the right. Adding and editing rows both now work correctly (though I have yet 
to do serious validation--thanks for all your thoughts on that, by the way). 
Now I just need to get deletion working. However, because the user could be 
looking at any table, I can't do
table.query.filter(id==x).delete()
because I can't know what field to use as the search field in that query. One 
table might have an ID field, another something different. I do, however, have 
the entire object representing the row selected for deletion. Could I somehow 
issue a delete statement using that whole object, and know that this will work 
for any table? I hope I'm making sense--it's been a long day. Thanks!


--
Alex Hall
Automatic Distributors, IT Department
ah...@autodist.com



Re: [sqlalchemy] Modifying records across multiple modules?

2016-02-04 Thread Alex Hall
Yes, I'm using Declarative. I was following a great tutorial, then
realized it was way out of date. The one recommended on the forum I
found said to use Declarative, so I did.

In DBInterface, I have this little function:

def getAllTables():
 return base.metadata.tables
#end def getAllTables

I then loop over that array, grabbing the value only and ignoring the
key. I just tried printing the type, as in:

def getAllTables():
 for table in base.metadata.tables.values():
  print type(table)
 return base.metadata.tables
#end def getAllTables

I got <class 'sqlalchemy.sql.schema.Table'>. Here's the loop that
generates the table list passed to my GUI (remember that this must be
an array of arrays):

  tables = DBInterface.getAllTables()
  tablesList = []
  for table in tables.values():
   tablesList.append([table])
  #end for

Just to be sure, I stuck a "print type(table)" statement in that for
loop, and the objects are still sqlalchemy.sql.schema.Table objects.
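
One thing I may try next (a sketch; Customer and Order stand in for my actual declarative classes) is keeping a lookup from each Core-level Table to its mapped class, so the GUI list can keep using table names while queries use the ORM classes:

mappedClasses = [Customer, Order]
tableToClass = dict((cls.__table__, cls) for cls in mappedClasses)

tables = DBInterface.getAllTables()
for table in tables.values():
 cls = tableToClass.get(table)
 if cls is not None:
  records = DBInterface.session.query(cls).all()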

On 2/4/16, Simon King <si...@simonking.org.uk> wrote:
> SQLAlchemy has 2 main layers, the "Core" layer which deals with mostly
> database-level constructs, such as Tables, and the ORM layer, which is
> built on top of Core. With the ORM, you map your own classes onto the
> lower-level Tables, then work with instances of those classes. You appear
> to be using the declarative system to define your classes, so SQLAlchemy
> will be creating the associated Table instances for you under the hood.
>
> In your example, Customer is an ORM-level class. Somewhere, there will be a
> construct representing the Core-level Table that the Customer class is
> mapped to. (It's probably available at Customer.__table__, but also in
> other places).
>
> When you say that "self.choices" contains table objects, I suspect that
> those are the Core-level Table instances. When you query using those, you
> don't get Customer objects back. You need to query using the Customer class
> instead. Can you get the Customer class into your self.choices array,
> either instead of the table, or as well as? Where are you getting the
> object that you are putting in the first element of each of the
> self.choices list?
>
> Simon
>
> On Thu, Feb 4, 2016 at 11:34 AM, Alex Hall <ah...@autodist.com> wrote:
>
>> It's all from a GUI, so it's something like this (my code isn't in front
>> of me):
>> DBInterface.session.query(self.choices[self.selectedIndex][0]).all()
>> Choices is a 2D array where the first (0th) element of each sub-array is
>> a
>> table object. The query works, because I get the records as expected and
>> can display them or inspect them. I just can't modify them for some
>> reason.
>> As I said, though, I'm new to this package so am likely missing an
>> obvious
>> step, or have something set up very wrong.
>>
>> Sent from my iPhone
>>
>> > On Feb 3, 2016, at 16:18, Simon King <si...@simonking.org.uk> wrote:
>> >
>> > OK, so you’re not actually getting Customer objects back from your
>> query. What does your call to session.query() look like? For this to
>> work,
>> it really ought to be something like “session.query(Customer)”. I suspect
>> you are doing something like “session.query(Customer.id, Customer.name,
>> …)”
>> instead.
>> >
>> > Simon
>> >
>> >> On 3 Feb 2016, at 17:43, Alex Hall <ah...@autodist.com> wrote:
>> >>
>> >> I'm on the Gmail site, so am not sure I can reply in-line. Sorry.
>> >>
>> >> This is a basic table class, like
>> >> class Customer(base):
>> >> __tablename__ = "customers"
>> >> name = Column(String(50)),
>> >> ...
>> >>
>> >> When I print the type:
>> >> 
>> >> And repr():
>> >> (2, u'Powersports Etc', 5554443210L, u'ahall+dbte...@autodist.com',
>> True)
>> >>
>> >>
>> >>> On 2/3/16, Simon King <si...@simonking.org.uk> wrote:
>> >>>> On Wed, Feb 3, 2016 at 3:54 PM, Alex Hall <ah...@autodist.com>
>> >>>> wrote:
>> >>>>
>> >>>> Hello list,
>> >>>> I'm new to SQLAlchemy, but not to Python. I have an application
>> >>>> that's
>> >>>> coming together, and relies on SQLAlchemy to talk to a database for
>> >>>> many of the app's functions. Listing tables, listing records,
>> >>>> updating
>> >>>> records, pulling records for internal use, and so on.
>> >>>>
>> >>>> My app is working, but so far I've been writing the framework and
>> >>

Re: [sqlalchemy] Iterating over database row?

2016-02-04 Thread Alex Hall
Ah, getattr()… I forgot that one! It's a bit messy, but my tables now fill 
correctly:
recordData = getattr(record, table.columns.keys()[i].__str__()).__str__()

Maybe one day I'll make my table schemas (is that the right term for them?) 
iterable, but this works for now. Thanks for the help. I assumed subclasses of 
base would be iterable so one could iterate over individual rows, but now that I 
know they aren't, it'll be easy enough to add that in the future as I get more 
complex tables.
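
For what it's worth, a hedged sketch of that "iterable" idea (the helper name is
hypothetical, not part of the code above): SQLAlchemy's runtime inspection API
can yield (column name, value) pairs for a mapped instance without hard-coding
the names.

from sqlalchemy import inspect

def iterRecord(record):
 #inspect() on a mapped instance returns its state; .mapper.column_attrs lists
 #the mapped column properties, and each .key is the attribute name
 for attr in inspect(record).mapper.column_attrs:
  yield attr.key, getattr(record, attr.key)
#end def iterRecord

Used in the wx loop, each value could then be passed through str() before
InsertStringItem/SetStringItem.
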
> On Feb 4, 2016, at 18:51, Simon King <si...@simonking.org.uk> wrote:
> 
> 
>> On 4 Feb 2016, at 18:19, Alex Hall <ah...@autodist.com> wrote:
>> 
>> Hello all,
>> I'm setting my application up the way Simon suggested. I still use the
>> table object so I can get its name for displaying in one list, but the
>> other list (which holds the actual rows of the selected table) is
>> using the relevant subclass of base.
>> 
>> I use wx.ListCtrl to display everything, and when I load the rows into
>> their list, I first grab the column names from the table and set the
>> list to have those columns. Then, I want to iterate over each row,
>> extracting the value for the given column and displaying it. That's
>> the problem: these objects are not iterable. I tried to use
>> value(columnName), but that didn't work, even though the rows are
>> returned from a query. Here's my attempt thus far.
>> 
>> def updateSelectedIndex(self, evt):
>> """When the index changes in the list of tables, the list of records
>> is automatically populated with the newly-selected table's records."""
>> super(DBTablesListManager, self).updateSelectedIndex(evt)
>> self.records_listControl.ClearAll() #to avoid appending items--we
>> want each table to display only its own data
>> table = self.choices[self.selectedIndex][0] #this is the
>> sql.schema.Table object
>> self.records =
>> DBInterface.session.query(self.schemas[self.selectedIndex]).all()
>> #self.schema is the list of actual subclasses of base, not table
>> objects
>> #set the column names in the records list
>> i = 0
>> for column in table.columns:
>>  self.records_listControl.InsertColumn(i, column.name)
>>  i += 1
>> #add the data
>> i = 0
>> for record in self.records:
>>  recordData = record.value(table.columns[i]).__str__() #this line errors
>>  self.records_listControl.InsertStringItem(i, recordData)
>>  for j in range(1, len(record)):
>>   self.records_listControl.SetStringItem(i, j, record[j].__str__()) #this line would fail too
>>  i += 1
>> 
>> I'm either missing something obvious, or thinking about this all
>> wrong, because iterating over a row has to be a pretty common need.
>> Thanks in advance for any suggestions anyone has.
> 
> “record” is an instance of your mapped class. It doesn’t have any public 
> methods other than those which you define yourself (so for instance it 
> doesn’t have a “value()” method or an __iter__ method unless you define 
> them). If you want to get a named attribute from it, use the standard python 
> getattr function:
> 
>  recordData = getattr(record, colname)
> 
> Hope that helps,
> 
> Simon
> 



--
Alex Hall
Automatic Distributors, IT Department
942-6769, Ext. 629
ah...@autodist.com



Re: [sqlalchemy] Modifying records across multiple modules?

2016-02-04 Thread Alex Hall
Oh! Okay, I think I get you now. The only reason I pass a list of
table objects is that I need the name, and I thought I needed the
table to query. If I can obtain the table name from the class
(.__tablename__), then this should work, with no need to mess with
table objects at all. I'll try it. Thanks for the help!
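
A minimal sketch of that idea, building on Simon's getAllClasses() suggestion
below (the tuple layout is only an assumption about what the GUI needs):

def getAllClasses():
 #one (display name, queryable class) pair per mapped class
 return [(cls.__tablename__, cls) for cls in base.__subclasses__()]
#end def getAllClasses

The GUI list could then show the first element and pass the second straight to
session.query().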

On 2/4/16, Simon King <si...@simonking.org.uk> wrote:
> You need to use the class when querying the database. SQLAlchemy will then
> return an instance of that class for each matching row:
>
>   customer = session.query(Customer).filter_by(name=u'Powersports Etc').first()
>   customer.name = u'New Name'
>   session.flush()
>
> The getAllClasses function I showed was meant to be a direct analog to your
> getAllTables function, which was returning the tables themselves. You were
> then passing those to session.query() to get a whole load of rows back, but
> those rows were not update-able. I was trying to suggest that if you
> replaced those Table objects with mapped classes (such as Customer), then
> when those classes were passed to session.query(), what you get back will
> be Customer *instances*.
>
> session.query(someTable) returns glorified tuples that can't be updated
> session.query(someMappedClass) returns instances of someMappedClass, that
> *can* be updated.
>
> Hope that makes sense,
>
> Simon
>
> On Thu, Feb 4, 2016 at 2:07 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> This is where sqlalchemy gets murky for me. If I return all classes,
>> I'm not returning all *instances* of those classes. How, then, would I
>> update a given customer's record? The objects (c1 and c2, for
>> instance) were instantiated elsewhere, and my GUIManager module has no
>> knowledge of them. More generally, when I switch this over to our
>> AS400 and start querying, I won't have created any records at all.
>> Every record will be pulled from a table on the 400, and while I will
>> have a schema, I won't have any instances of that schema.
>>
>> As I think about this, it occurs to me that I should *create* the
>> record objects from the records. That is:
>>
>> for record in recordsList: #an array of session.query(someTable).all()
>>  myTempRecord = mySchemaClass(record)
>>  myTempRecord.property = newValue
>> #update the database
>>
>> Assuming GUIManager knows about mySchemaClass, would that approach
>> work? How I'd pass in a record and get back an instance of
>> mySchemaClass I'm not yet sure, but is this worth looking into more,
>> or am I on the wrong track?
>>
>> On 2/4/16, Simon King <si...@simonking.org.uk> wrote:
>> > getAllTables is returning Core Table objects. I suppose you could have
>> > a
>> > similar function which returns all mapped classes, something like this:
>> >
>> > def getAllClasses():
>     return base.__subclasses__()
>> >
>> > (If your class hierarchy is more complicated you'd need a more
>> > sophisticated function there)
>> >
>> > Simon
>> >
>> > On Thu, Feb 4, 2016 at 12:32 PM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> >> Yes, I'm using Declarative. I was following a great tutorial, then
>> >> realized it was way out of date. The one recommended on the forum I
>> >> found said to use Declarative, so I did.
>> >>
>> >> In DBInterface, I have this little function:
>> >>
>> >> def getAllTables():
>> >>  return base.metadata.tables
>> >> #end def getAllTables
>> >>
>> >> I then loop over that array, grabbing the value only and ignoring the
>> >> key. I just tried printing the type, as in:
>> >>
>> >> def getAllTables():
>> >>  for table in base.metadata.tables.values():
>> >>   print type(table)
>> >>  return base.metadata.tables
>> >> #end def getAllTables
>> >>
>> >> I got <class 'sqlalchemy.sql.schema.Table'>. Here's the loop that
>> >> generates the table list passed to my GUI (remember that this must be
>> >> an array of arrays):
>> >>
>> >>   tables = DBInterface.getAllTables()
>> >>   tablesList = []
>> >>   for table in tables.values():
>> >>tablesList.append([table])
>> >>   #end for
>> >>
>> >> Just to be sure, I stuck a "print type(table)" statement in that for
>> >> loop, and the objects are still sqlalchemy.sql.schema.Table objects.
>> >>
>> >> On 2/4/16, Simon King <si...@simonking.org.uk> wrote:
>> >

Re: [sqlalchemy] Modifying records across multiple modules?

2016-02-04 Thread Alex Hall
I was re-reading your email, and realized I forgot to answer something
(sorry, it's early here). It looks like I am indeed passing the table
to session.query, which works, but then I'm trying to change
attributes of the table, which doesn't (of course). Could I do this?

#DBDefinitions.py
import sqlalchemy
from sqlalchemy.ext.declarative import declarative_base
base = declarative_base()

class Customer(base):
 ..

class MyOtherTable(base):
 ..

#DBInterface.py
import DBDefinitions
import sqlalchemy

#set up the database--session, engine, etc

DBDefinitions.base.metadata.create_all(myEngine) #should create the
empty tables, right?

#main.py
from DBDefinitions import *
import DBInterface

c1 = Customer(...)
c2 = Customer(...)
mt1 = MyOtherTable(...)
if DBInterface.session.query(DBDefinitions.Customer).count() == 0:
DBInterface.session.add_all([c1, c2])
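
For the "#set up the database--session, engine, etc" step, a minimal sketch of
what DBInterface.py might contain (the SQLite URL is only a stand-in, not the
real connection string):

#DBInterface.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import DBDefinitions

myEngine = create_engine("sqlite:///app.db") #stand-in URL, not the real DSN
DBDefinitions.base.metadata.create_all(myEngine) #creates any missing tables
Session = sessionmaker(bind=myEngine)
session = Session()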


On 2/4/16, Alex Hall <ah...@autodist.com> wrote:
> Yes, I'm using Declarative. I was following a great tutorial, then
> realized it was way out of date. The one recommended on the forum I
> found said to use Declarative, so I did.
>
> In DBInterface, I have this little function:
>
> def getAllTables():
>  return base.metadata.tables
> #end def getAllTables
>
> I then loop over that array, grabbing the value only and ignoring the
> key. I just tried printing the type, as in:
>
> def getAllTables():
>  for table in base.metadata.tables.values():
>   print type(table)
>  return base.metadata.tables
> #end def getAllTables
>
> I got <class 'sqlalchemy.sql.schema.Table'>. Here's the loop that
> generates the table list passed to my GUI (remember that this must be
> an array of arrays):
>
>   tables = DBInterface.getAllTables()
>   tablesList = []
>   for table in tables.values():
>tablesList.append([table])
>   #end for
>
> Just to be sure, I stuck a "print type(table)" statement in that for
> loop, and the objects are still sqlalchemy.sql.schema.Table objects.
>
> On 2/4/16, Simon King <si...@simonking.org.uk> wrote:
>> SQLAlchemy has 2 main layers, the "Core" layer which deals with mostly
>> database-level constructs, such as Tables, and the ORM layer, which is
>> built on top of Core. With the ORM, you map your own classes onto the
>> lower-level Tables, then work with instances of those classes. You appear
>> to be using the declarative system to define your classes, so SQLAlchemy
>> will be creating the associated Table instances for you under the hood.
>>
>> In your example, Customer is an ORM-level class. Somewhere, there will be
>> a
>> construct representing the Core-level Table that the Customer class is
>> mapped to. (It's probably available at Customer.__table__, but also in
>> other places).
>>
>> When you say that "self.choices" contains table objects, I suspect that
>> those are the Core-level Table instances. When you query using those, you
>> don't get Customer objects back. You need to query using the Customer
>> class
>> instead. Can you get the Customer class into your self.choices array,
>> either instead of the table, or as well as? Where are you getting the
>> object that you are putting in the first element of each of the
>> self.choices list?
>>
>> Simon
>>
>> On Thu, Feb 4, 2016 at 11:34 AM, Alex Hall <ah...@autodist.com> wrote:
>>
>>> It's all from a GUI, so it's something like this (my code isn't in front
>>> of me):
>>> DBInterface.session.query(self.choices[self.selectedIndex][0]).all()
>>> Choices is a 2D array where the first (0th) element of each sub-array is
>>> a
>>> table object. The query works, because I get the records as expected and
>>> can display them or inspect them. I just can't modify them for some
>>> reason.
>>> As I said, though, I'm new to this package so am likely missing an
>>> obvious
>>> step, or have something set up very wrong.
>>>
>>> Sent from my iPhone
>>>
>>> > On Feb 3, 2016, at 16:18, Simon King <si...@simonking.org.uk> wrote:
>>> >
>>> > OK, so you’re not actually getting Customer objects back from your
>>> query. What does your call to session.query() look like? For this to
>>> work,
>>> it really ought to be something like “session.query(Customer)”. I
>>> suspect
>>> you are doing something like “session.query(Customer.id, Customer.name,
>>> …)”
>>> instead.
>>> >
>>> > Simon
>>> >
>>> >> On 3 Feb 2016, at 17:43, Alex Hall <ah...@autodist.com> wrote:
>>> >>
>>> >> I'm on the Gmail site, so am not sure I can 

Re: [sqlalchemy] Modifying records across multiple modules?

2016-02-04 Thread Alex Hall
This is where sqlalchemy gets murky for me. If I return all classes,
I'm not returning all *instances* of those classes. How, then, would I
update a given customer's record? The objects (c1 and c2, for
instance) were instantiated elsewhere, and my GUIManager module has no
knowledge of them. More generally, when I switch this over to our
AS400 and start querying, I won't have created any records at all.
Every record will be pulled from a table on the 400, and while I will
have a schema, I won't have any instances of that schema.

As I think about this, it occurs to me that I should *create* the
record objects from the records. That is:

for record in recordsList: #an array of session.query(someTable).all()
 myTempRecord = mySchemaClass(record)
 myTempRecord.property = newValue
#update the database

Assuming GUIManager knows about mySchemaClass, would that approach
work? How I'd pass in a record and get back an instance of
mySchemaClass I'm not yet sure, but is this worth looking into more,
or am I on the wrong track?
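
For comparison, a minimal sketch of the approach Simon suggests elsewhere in
this thread (assuming a Customer mapped class and the DBInterface session):
there is no need to wrap rows in new objects, because querying the class itself
returns live instances that can be edited and committed.

for customer in DBInterface.session.query(Customer).filter_by(name=u'Powersports Etc'):
 customer.name = u'New Name' #edit the live instance in place
DBInterface.session.commit() #persist the change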

On 2/4/16, Simon King <si...@simonking.org.uk> wrote:
> getAllTables is returning Core Table objects. I suppose you could have a
> similar function which returns all mapped classes, something like this:
>
> def getAllClasses():
>     return base.__subclasses__()
>
> (If your class hierarchy is more complicated you'd need a more
> sophisticated function there)
>
> Simon
>
> On Thu, Feb 4, 2016 at 12:32 PM, Alex Hall <ah...@autodist.com> wrote:
>
>> Yes, I'm using Declarative. I was following a great tutorial, then
>> realized it was way out of date. The one recommended on the forum I
>> found said to use Declarative, so I did.
>>
>> In DBInterface, I have this little function:
>>
>> def getAllTables():
>>  return base.metadata.tables
>> #end def getAllTables
>>
>> I then loop over that array, grabbing the value only and ignoring the
>> key. I just tried printing the type, as in:
>>
>> def getAllTables():
>>  for table in base.metadata.tables.values():
>>   print type(table)
>>  return base.metadata.tables
>> #end def getAllTables
>>
>> I got <class 'sqlalchemy.sql.schema.Table'>. Here's the loop that
>> generates the table list passed to my GUI (remember that this must be
>> an array of arrays):
>>
>>   tables = DBInterface.getAllTables()
>>   tablesList = []
>>   for table in tables.values():
>>tablesList.append([table])
>>   #end for
>>
>> Just to be sure, I stuck a "print type(table)" statement in that for
>> loop, and the objects are still sqlalchemy.sql.schema.Table objects.
>>
>> On 2/4/16, Simon King <si...@simonking.org.uk> wrote:
>> > SQLAlchemy has 2 main layers, the "Core" layer which deals with mostly
>> > database-level constructs, such as Tables, and the ORM layer, which is
>> > built on top of Core. With the ORM, you map your own classes onto the
>> > lower-level Tables, then work with instances of those classes. You
>> > appear
>> > to be using the declarative system to define your classes, so
>> > SQLAlchemy
>> > will be creating the associated Table instances for you under the hood.
>> >
>> > In your example, Customer is an ORM-level class. Somewhere, there will
>> be a
>> > construct representing the Core-level Table that the Customer class is
>> > mapped to. (It's probably available at Customer.__table__, but also in
>> > other places).
>> >
>> > When you say that "self.choices" contains table objects, I suspect that
>> > those are the Core-level Table instances. When you query using those,
>> > you
>> > don't get Customer objects back. You need to query using the Customer
>> class
>> > instead. Can you get the Customer class into your self.choices array,
>> > either instead of the table, or as well as? Where are you getting the
>> > object that you are putting in the first element of each of the
>> > self.choices list?
>> >
>> > Simon
>> >
>> > On Thu, Feb 4, 2016 at 11:34 AM, Alex Hall <ah...@autodist.com> wrote:
>> >
>> >> It's all from a GUI, so it's something like this (my code isn't in
>> >> front
>> >> of me):
>> >> DBInterface.session.query(self.choices[self.selectedIndex][0]).all()
>> >> Choices is a 2D array where the first (0th) element of each sub-array
>> >> is
>> >> a
>> >> table object. The query works, because I get the records as expected
>> >> and
>> >> can display them or inspect them. I just can't modify them for some
>>

[sqlalchemy] Iterating over database row?

2016-02-04 Thread Alex Hall
Hello all,
I'm setting my application up the way Simon suggested. I still use the
table object so I can get its name for displaying in one list, but the
other list (which holds the actual rows of the selected table) is
using the relevant subclass of base.

I use wx.ListCtrl to display everything, and when I load the rows into
their list, I first grab the column names from the table and set the
list to have those columns. Then, I want to iterate over each row,
extracting the value for the given column and displaying it. That's
the problem: these objects are not iterable. I tried to use
value(columnName), but that didn't work, even though the rows are
returned from a query. Here's my attempt thus far.

 def updateSelectedIndex(self, evt):
  """When the index changes in the list of tables, the list of records
is automatically populated with the newly-selected table's records."""
  super(DBTablesListManager, self).updateSelectedIndex(evt)
  self.records_listControl.ClearAll() #to avoid appending items--we
want each table to display only its own data
  table = self.choices[self.selectedIndex][0] #this is the
sql.schema.Table object
  self.records =
DBInterface.session.query(self.schemas[self.selectedIndex]).all()
#self.schema is the list of actual subclasses of base, not table
objects
  #set the column names in the records list
  i = 0
  for column in table.columns:
   self.records_listControl.InsertColumn(i, column.name)
   i += 1
  #add the data
  i = 0
  for record in self.records:
   recordData = record.value(table.columns[i]).__str__() #this line errors
   self.records_listControl.InsertStringItem(i, recordData)
   for j in range(1, len(record)):
    self.records_listControl.SetStringItem(i, j, record[j].__str__()) #this line would fail too
   i += 1

I'm either missing something obvious, or thinking about this all
wrong, because iterating over a row has to be a pretty common need.
Thanks in advance for any suggestions anyone has.
