[sqlalchemy] Re: The use of SQLAlchemy for a long term project

2015-04-18 Thread Jonathan Vanasco
As a heavy user, an occasional contributor, and the person who recently 
mined PyPI for all the historical SQLAlchemy data to generate the new 
release history matrix...

I don't think you have anything to really worry about for long-term use.

The majority of updates released over the past 7 years deal with:
• new functionality or improved performance
• very specific bug fixes (i.e., edge cases or dialect/driver issues)

Although versions may hit their EOL within 2 years:
- there is rarely any reason to upgrade old projects
Mike is really nice, and often backports certain fixes to earlier 
branches that are technically out-of-support

I have legacy projects in production that are using 0.5.x, 0.6.x, and 0.7.x.
- virtualenv makes pegging SQLAlchemy versions a non-issue (see the sketch 
below)
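As a hedged aside (the 0.7 pin below is hypothetical): besides pegging the 
package inside the virtualenv, e.g. SQLAlchemy==0.7.10 in requirements.txt, 
a legacy app can also guard against a stray upgrade at startup:

    # Minimal sketch: fail fast if the deployed SQLAlchemy doesn't match
    # the series this legacy app was tested against.
    import sqlalchemy

    EXPECTED_SERIES = "0.7."  # hypothetical pin for an old production app
    if not sqlalchemy.__version__.startswith(EXPECTED_SERIES):
        raise RuntimeError("expected SQLAlchemy %sx, got %s"
                           % (EXPECTED_SERIES, sqlalchemy.__version__))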

In terms of the time needed to upgrade -- I once had to upgrade a 0.4.x 
project to the 0.9.x series.  It took 45 minutes to change some code, run 
find/replace on backwards incompatibilities, and address test failures.  
Those 45 minutes translated into hours saved by new features.
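One concrete example of the kind of mechanical change involved, assuming 
the project touched the 0.4-era Session API (save()/update() were folded 
into Session.add() in the 0.5 series):

    # Hedged mid-migration shim: route 0.4-era save() calls through the
    # modern add() so old call sites keep working while the find/replace
    # proceeds across the codebase.
    def add_compat(session, obj):
        if hasattr(session, "add"):    # SQLAlchemy 0.5 and later
            session.add(obj)
        else:                          # SQLAlchemy 0.4.x
            session.save(obj)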



Re: [sqlalchemy] The use of SQLAlchemy for a long term project

2015-04-18 Thread Mike Bayer



On 4/17/15 6:58 PM, Van Klaveren, Brian N. wrote:

Hi,

I'm investigating the use of and dependency on SQLAlchemy for a long-term 
astronomy project. Given that version 1.0 just came out, I've got a few 
questions about it.

1. It seems SQLAlchemy generally EOLs versions after about two releases/years. 
Is this an official policy? Is this to continue with version 1.0 as well? Or is 
it possible 1.0 might be something of a long-term release?
2. While well documented and typically minimal, SQLAlchemy does have occasional 
API and behavioral changes to be aware of between versions. Is the 1.0 API more 
likely to be stable on the time frame of ~4 years?

Put another way, would you expect that it should be easier to migrate from 
version 1.0 to 1.4 (or whatever the current version is then) of SQLAlchemy in 
five years than it would be to migrate from 0.6 to 1.0 today?

I know these questions are often hard to answer with any certainty, but these 
sorts of projects typically outlive the software they are built on and are 
often underfunded as far as software maintenance goes, so we try to plan 
accordingly.

(Of course, some people just give up and throw everything into VMs behind 
firewalls.)

Well, the vast majority of bugs that are fixed, like 99% of them, impact 
only new development; that is, they only have a positive impact on someone 
who is writing new code, using new features of their database backend, or 
otherwise attempting to do something new. For code that is not under active 
development and is stabilized on older versions of the software, those same 
fixes typically only serve to raise risk and decrease stability.


These kinds of issues mean that some way of structuring tables, mapped 
classes, Core SQL or DDL objects, ORM queries, or calls to a Session 
produces some unexpected result, but virtually always, this unexpected 
result is consistent and predictable.   An application that is sitting 
on 0.5 or 0.6 and is running perfectly fine, because it hasn't hit any 
of these issues, or quite often because it has and is working around 
them (or even relying upon their behavior), would not benefit at all from 
these kinds of fixes being backported; it would instead have a greater 
chance of hitting a regression or a change in assumptions if lots of 
bugfixes were backported from two or three major versions forward.


So it's not that we decline to backport fixes three or four years back 
because it's too much trouble; it's that these backports wouldn't 
benefit anyone, and they would only serve to wreak havoc with old and 
less maintained applications when some small new feature or improvement 
in behavioral consistency breaks an assumption made by such an application.


As far as issues that are more appropriate for backporting, which would 
be security fixes and stability enhancements, we almost never have 
issues like that. The issues we have regarding stability, like memory 
leaks and race conditions, again typically occur in conjunction with a 
user application doing something strange and unexpected (e.g. new 
development). As for security, the only issue we ever had that even 
resembled a security issue was issue 2116, involving limit/offset 
integers not being escaped, which was backported from 0.7 to 0.6.  
Users who actually needed enterprise-level longevity and happened to be 
using, for example, the Red Hat package could see the fix for this issue 
backported all the way to their 0.5 and 0.3 packages.  But the presence 
of security, memory-leak, or stability issues in modern versions is 
extremely rare, and we generally only see new issues involving memory 
or stability as a result of new features (e.g. regressions).
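To illustrate the class of problem (a sketch, not the actual 2116 patch; 
user_count is a hypothetical untrusted request value): the defensive 
pattern is to coerce limit/offset input to int before it reaches the 
query, so even an unpatched version never sees a raw string there.

    # Minimal sketch: validated integers are all that ever reach the
    # LIMIT/OFFSET clause, regardless of library version.
    from sqlalchemy import MetaData, Table, Column, Integer, select

    metadata = MetaData()
    things = Table("things", metadata,
                   Column("id", Integer, primary_key=True))

    user_count = "10"  # hypothetical value arriving from a query string
    stmt = select([things.c.id]).limit(int(user_count)).offset(0)
    print(stmt)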


There's also the class of issues that involve performance enhancements.   
Some of these would arguably be appropriate to backport across more than 
several major versions, but again they are often the result of 
significant internal refactorings and would definitely raise risk for an 
older application not undergoing active development.   An older 
application that wants to take advantage of newer performance features 
is better off going through the upgrade process than running on top of a 
library that is a hybrid of very old code and backported newer 
approaches, which will see a lot less real-world testing.


So, short answer: the EOL you see for those old versions is generally a 
good thing, as those old versions are running in old applications that 
aren't seeing lots of new development and would see a mostly negative 
effect, and little to no benefit, from the code continuing to change.   
SQLAlchemy is a development library, so generally an application that's 
been put into production against a certain version has been well tested 
and tuned against the behaviors of that specific version.


As far as API and behavioral changes go, we are really conservative 
about actually changing APIs such that an older approach won't work 
anymore.   That happened a 

Re: [sqlalchemy] Generating Correlated Subqueries

2015-04-18 Thread Mike Bayer



On 4/18/15 7:13 PM, Michael Wilson wrote:

I have the following tables:

things_table = Table('thing', self.metadata,
    Column('id', Integer, primary_key=True),
    …
)

comments_table = Table('comments', self.metadata,
    Column('id', Integer, primary_key=True),  # Unique id for this comment
    Column('type', Integer),                  # Type of comment (feedback, etc)
    …
)

(And the corresponding mapping).

I’m trying to construct a query like this:

clauseList = []
clauseList.append(Look.creation >= start_date_rounded)
clauseList.append(Look.creation <= end_date)
clauseList.append(Look.like_count > 0)

clauseList.append(Comment.creation >= start_date_rounded)
clauseList.append(Comment.creation <= end_date)
clauseList.append(Comment.type == CommentTypeLike)
clauseList.append(Comment.target_id == Look.id)
condition = and_(*clauseList)

looks = session.query(Look, Comment,
func.count(Comment.type)).\
group_by(Look.id).\
order_by(func.count(Comment.type).desc()).\
filter(condition).\
offset(0).\
limit(count).\
all()

This fails with:

FROM comments, things WHERE comments.target_id = things.id AND 
comments.type = :type_1' returned no FROM clauses due to 
auto-correlation; specify correlate(<tables>) to control correlation 
manually.

The "comments_table" and "things_table" declarations aren't visible to 
the function generating the query, but even if I make them visible and 
specify:

correlate(things, comments).\

it still fails. How can I make this work?


By "work" we'd need to know what SQL you are going for.   The 
query(Look, Comment, func.count(Comment.type)) seems very odd, because 
if you are using aggregates in your query, SQL dictates (unless you're 
using MySQL's cheater mode) that all the other columns that aren't 
aggregates need to be in the GROUP BY.   Also, I don't see any 
subqueries here, so nothing that would refer to correlation or produce 
that message; I also don't see what CommentTypeLike is, etc.
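
That said, a hedged sketch of one way to get what the post seems to be 
after (Looks in a date range, ordered by their count of "like" comments) 
without the GROUP BY problem: select only the Look entity plus the 
aggregate, join Comment explicitly, and group by the Look's primary key. 
Names (Look, Comment, CommentTypeLike, session, count) follow the 
original post; whether this matches the intended SQL is an assumption.

    # Assumption: one row per Look, ordered by its "like" comment count.
    # Selecting only Look + the aggregate keeps the non-aggregate columns
    # consistent with the GROUP BY (PK grouping; fine on MySQL/PG 9.1+).
    from sqlalchemy import and_, func

    like_count = func.count(Comment.id)

    looks = (
        session.query(Look, like_count.label("like_count"))
        .join(Comment, Comment.target_id == Look.id)
        .filter(and_(
            Look.creation >= start_date_rounded,
            Look.creation <= end_date,
            Comment.creation >= start_date_rounded,
            Comment.creation <= end_date,
            Comment.type == CommentTypeLike,
        ))
        .group_by(Look.id)
        .order_by(like_count.desc())
        .limit(count)
        .all()
    )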





