Re: [sqlalchemy] Re: testing patterns with sqlalchemy 2.0

2022-09-02 Thread Elmer de Looff
On Fri, Sep 2, 2022, 08:47 Chris Withers  wrote:

> On 01/09/2022 20:00, Jonathan Vanasco wrote:
> >
> >  > Create an empty schema from the models using create_all?
> >
> > This is what I usually do with smaller projects.
>
> When taking this approach, how do you ensure the accumulated schema
> migrations end up with a database that matches the one that create_all
> gives you?
>

You might consider adding a step to your CI pipeline where you
auto-generate a new revision and ensure it is empty. Some of the projects
at work use this as a safety valve.

Flask's Alembic integration makes this particularly nice: the migration
command exits with an observably different exit code when no migrations are
necessary. You might want to look at how to detect this case and wrap it in
a small check script, or even an explicit test in the suite.
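
For a framework-agnostic take on the same check, Alembic's autogenerate API
can be used directly. A minimal sketch, assuming a test database that has had
all migrations applied and a hypothetical myapp.models module exposing your
declarative Base; compare_metadata() is part of Alembic's public API:

import sys

from alembic.autogenerate import compare_metadata
from alembic.migration import MigrationContext
from sqlalchemy import create_engine

from myapp.models import Base  # hypothetical import path for your metadata


def main():
    # Assumption: this database was created by running all Alembic migrations.
    engine = create_engine("postgresql:///myapp_migration_check")
    with engine.connect() as connection:
        context = MigrationContext.configure(connection)
        # Diff between the migrated schema and the model metadata.
        diff = compare_metadata(context, Base.metadata)
    if diff:
        print("Model metadata and migrations have drifted apart:")
        for entry in diff:
            print(" ", entry)
        sys.exit(1)
    print("Migrations and model metadata are in sync.")


if __name__ == "__main__":
    main()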

>
> > When using unittest, some tests will use a fresh DB per test-run, others
> > per-class, and others per-test.
>
> Yeah, pytest fixture scoping gives a great way to set these up.
>
> > Sometimes the tests dictate that, other
> > times I control that with env vars.  That gives us the flexibility to
> > work on a small section and do efficient test runs during development.
>
> Creating a database for every unit test feels like something that would
> be slow enough to be annoying. How are you creating databases such that
> it's fast enough for this not to be the case?
>
> Chris
>



Re: [sqlalchemy] Injecting User info into _history table to track who performed the change

2021-03-12 Thread Elmer de Looff
Hi JP,

Depending on how you've implemented your history tracking, that routine may
be quite far removed from your web framework, so a neat, clean way of
dealing with this might not be within reach.

However, most web frameworks have some concept of a threadlocal request (or
a function to retrieve it). You can invoke that, and if such a request
exists, use it to load whatever user identity you have available on it
(again, the details differ, but this tends to be a shared feature). From
there you can store the user either as a foreign key or as a unique
identifier like an email address. Which one you pick depends on how you want
the history to be affected when, for example, you delete a user record.
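
As a rough sketch of how the pieces could connect (assuming Flask here;
HistoryRecord and its changed_by column are hypothetical stand-ins for
whatever your history setup uses):

from flask import g, has_request_context
from sqlalchemy import event
from sqlalchemy.orm import Session

from myapp.models import HistoryRecord  # hypothetical mapped _history class


@event.listens_for(Session, "before_flush")
def stamp_history_user(session, flush_context, instances):
    # Only consult the threadlocal request when one exists; background
    # jobs and scripts won't have one.
    user_id = g.get("user_id") if has_request_context() else None
    for obj in session.new:
        if isinstance(obj, HistoryRecord) and obj.changed_by is None:
            obj.changed_by = user_id

Because this runs as part of the flush, the user stamp ends up in the same
transaction as the history row itself.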



On Fri, Mar 12, 2021 at 11:58 PM JPLaverdure  wrote:

> Hello everyone,
>
> We already have the ability to timestamp the creation of the history row,
> but it would also be interesting to be able to track the user responsible
> for the content update.
> I would like to get suggestions on the best way to achieve this.
>
> I realize this is somewhat outside the scope of sqlalchemy as the notion
> of a "logged in user" is more closely related to the context of the
> app/webapp using SQLAlchemy as its ORM but maybe other people would benefit
> from having a way to inject arbitrary data in the history table.
>
> Ideally, I would like the insert in the _history table to be atomic, so I
> feel like hooking an update statement to an event might not be the way to
> go.
> I'm tempted to modify the signature of before_flush but I'm not sure where
> it gets called.
>
> Any help is welcome !
> Thanks
>
> JP
>


-- 

Elmer



Re: [sqlalchemy] AWS RDS generate-db-auth-token and Redshift get-cluster-credentials

2020-05-22 Thread Elmer de Looff
For reference, we've used the engine strategy below for a while and it seems
to get the job done. We're strictly on Postgres, so the code would need some
alterations to make it compatible with multiple backends; that's left as an
exercise to the reader :-)

The main work is done in _rds_engine_creator(), which obtains the necessary
(short-lived) credentials for the connection just before it's actually
created. There are a couple of ways to do this; this is simply one that got
us a nice hands-off result, where all we needed was to provide a different
engine strategy in the config. Adjust for your particular use case.

# Imports assumed for completeness (SQLAlchemy 1.3-era engine strategies):
import boto3
import sqlalchemy
from sqlalchemy.engine.strategies import PlainEngineStrategy
from sqlalchemy.engine.url import make_url


# Register this engine strategy somewhere in your imported models
class RdsEngineStrategy(PlainEngineStrategy):
    name = 'rds'

    def create(self, name_or_url, **kwargs):
        """Adds an RDS-specific 'creator' for the engine connection."""
        engine_url = make_url(name_or_url)
        kwargs['creator'] = self._rds_engine_creator(engine_url)
        return super().create(engine_url, **kwargs)

    def _rds_engine_creator(self, engine_url):
        instance_id, region = engine_url.host.split('.')
        connector = engine_url.get_dialect().dbapi().connect
        rds = boto3.client('rds', region_name=region)
        if self._rds_first_instance_by_name(rds, instance_id) is None:
            raise ValueError('No RDS instances for the given instance ID')

        def engine_func():
            instance = self._rds_first_instance_by_name(rds, instance_id)
            password = rds.generate_db_auth_token(
                DBHostname=instance['Endpoint']['Address'],
                DBUsername=engine_url.username,
                Port=instance['Endpoint']['Port'])
            return connector(
                host=instance['Endpoint']['Address'],
                port=instance['Endpoint']['Port'],
                database=engine_url.database,
                user=engine_url.username,
                password=password,
                sslmode='require')
        return engine_func

    def _rds_first_instance_by_name(self, client, name):
        response = client.describe_db_instances(DBInstanceIdentifier=name)
        return next(iter(response['DBInstances']), None)


# Make sure to actually register it
RdsEngineStrategy()

# Caller code
engine = sqlalchemy.create_engine(
    "postgres://user@instance-name.region/dbname", strategy="rds")


On Fri, May 22, 2020 at 9:54 PM Mike Bayer  wrote:

> You can modify how the engine makes connections using the do_connect event
> hook:
>
>
> https://docs.sqlalchemy.org/en/13/core/events.html?highlight=do_connect#sqlalchemy.events.DialectEvents.do_connect
>
> each time the engine/ pool go to make a new connection, you can affect all
> the arguments here, or return an actual DBAPI connection.
>
>
>
>
> On Fri, May 22, 2020, at 1:39 PM, Ryan Kelly wrote:
>
> Hi,
>
> I am looking to use credentials provided by the above functionality from
> AWS. Basically, using either of these methods, you can obtain temporary
> credentials (for RDS, just password, and Redshift both username and
> password) that can be used to access the database. However, for long
> running processes, connection failures and subsequent reconnections as well
> as new connections initiated by the connection pool (or even just waiting a
> long time between generating the credentials and making your first
> connection) the credentials configured on a URL as passed to create_engine
> will eventually begin to fail.
>
> At first I thought I'd simply subclass URL and make username/password
> properties that could be refreshed as needed, but digging into
> create_connection it seems like those properties are read out of the URL
> object and into cargs/cwargs and provided to pool as such.
>
> I took then a roundabout approach or creating a proxy object that is
> capable of refreshing the value and using this object as the
> username/password, which only works because psycogp2 is helpfully calling
> str() on them as it constructs the connstring/dsn. Which... I mean, is an
> interesting, but also unsustainable, solution.
>
> What I am asking, I suppose, is 1) am I missing something obvious that
> would make this achievable? and 2) if not, what kind of best-approach pull
> request could I produce that could make this happen?
>
> Thanks,
> -Ryan
>
>

Re: [sqlalchemy] composite secondary join variant

2020-04-24 Thread Elmer de Looff
Hi Jonathan,

From toying with it a little bit, it looks like you *need* to specify a
secondaryjoin when you specify the secondary table. In your example, the
secondary does some of the work that the secondaryjoin would need to do.
I've created a gist that mirrors your table setup (with some more elaborate
naming) that hopefully provides you with what you're looking for.

https://gist.github.com/edelooff/c1bb7f7912e49d01677fcce0003663fe
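
Concretely, the shape is roughly this (a sketch along the lines of the gist,
not its literal contents): keep C out of the secondary and let secondaryjoin
make the final hop.

    cs = relationship(
        "C",
        secondary="join(B, B2C, B.id == B2C.b_id)",
        primaryjoin="A.id == B.a_id",
        secondaryjoin="B2C.c_id == C.id",
        viewonly=True,
    )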

On Fri, Apr 24, 2020 at 2:01 AM Jonathan Vanasco  wrote:

> i'm stuck on a variant of the Composite Secondary Join (
> https://docs.sqlalchemy.org/en/13/orm/join_conditions.html#composite-secondary-joins
> )
>
> I hope someone can see what my tired eyes are missing. I'm fairly certain
> the issue is in `secondary' and 'secondaryjoin'.  I've tried a handful of
> variants I thought would work, but trigger this error:
>
> sqlalchemy.exc.InvalidRequestError: Class  does not
> have a mapped column named 'get_children'
>
> I've used this pattern with much luck in the past: `primaryjoin` goes from
> the base class to whatever I build the secondary/secondary join from.
>
> I've distilled the relationship pattern as below:
>
> * `A` does not fkey onto anything.
> * `B` fkeys onto `A`
> * The intended relationship on `A` is a list of `C` items, which are
> related to `B` through an association table
>
> I can build out a the relationship from B, and I could probably mount it
> onto A with an association_proxy, but I'd like to understand what I'm doing
> wrong with the `relationship` off A.  This is puzzling me.
>
> Thanks in advace.
>
>
> import sqlalchemy
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy import Integer
> from sqlalchemy import Column
> from sqlalchemy import ForeignKey
> from sqlalchemy.orm import relationship
> from sqlalchemy import create_engine
> from sqlalchemy.orm import sessionmaker
>
>
> # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> Base = declarative_base()
> # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>
>
> class A(Base):
>     __tablename__ = 'a'
>     id = Column(Integer, primary_key=True)
>
>     cs = relationship(
>         "C",
>         primaryjoin="A.id == B.a_id",
>         secondary="join(B, B2C, B.id == B2C.b_id)."
>                   "join(B2C, C, B2C.c_id == C.id)",
>         # secondaryjoin="and_(C.id == B2C.c_id)",
>         viewonly=True
>     )
>
>
> class B(Base):
>     __tablename__ = 'b'
>     id = Column(Integer, primary_key=True)
>     a_id = Column(ForeignKey('a.id'))
>
>
> class B2C(Base):
>     __tablename__ = 'b2c'
>
>     id = Column(Integer, primary_key=True)
>     b_id = Column(ForeignKey('b.id'))
>     c_id = Column(ForeignKey('c.id'))
>
>
> class C(Base):
>     __tablename__ = 'c'
>     id = Column(Integer, primary_key=True)
>
>
> engine = create_engine("sqlite://", echo=True)
> Base.metadata.create_all(engine)
> sessionFactory = sessionmaker(bind=engine)
> sess = sessionFactory()
>
> sess.query(A).join(A.cs).all()
>
>


-- 

Elmer



Re: [sqlalchemy] Large RAM usage in bulk_insert_mappings

2019-11-15 Thread Elmer de Looff
I'm not even sure the problem is with the batch insert function itself;
creating half a million dicts in Python is going to give you a bit of a bad
time on its own. That said, I've run into the same problem with a little toy
project, which works around it with a 'bulk save' interface. With a minimal
change you get to create the object/mapping to insert at the time you need
it and push it into the bulk saver, which flushes every time it hits a
threshold. The actual threshold to use will depend a bit on your database
and driver, and I imagine connection latency plays an important part as
well.

Anyway, use what you can:
https://github.com/edelooff/smallville/blob/master/scripts/seed.py#L27
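
In outline (an illustrative sketch, not the linked code verbatim):

class BulkSaver:
    """Collects mappings and bulk-inserts them in fixed-size chunks."""

    def __init__(self, session, model, threshold=1000):
        self.session = session
        self.model = model
        self.threshold = threshold
        self._pending = []

    def add(self, mapping):
        self._pending.append(mapping)
        if len(self._pending) >= self.threshold:
            self.flush()

    def flush(self):
        if self._pending:
            self.session.bulk_insert_mappings(self.model, self._pending)
            self._pending.clear()


# Usage: only one chunk of dicts is alive at any time; User and csv_rows
# are hypothetical names here.
saver = BulkSaver(session, User, threshold=1000)
for row in csv_rows:
    saver.add({"name": row[0], "email": row[1]})
saver.flush()  # push the final partial chunk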

On Thu, Nov 14, 2019 at 8:42 PM James Fennell 
wrote:

> Because the memory spike was so bad (the application usually runs at 250mb
> RAM, and it went up to a GB during this process), I was able to find the
> problem by running htop and using print statements to discover where in the
> execution the Python code was when the RAM spike happened.
>
> I unfortunately don't have any advice on actual good tools for tracking
> RAM usage in Python programs but would to leave to hear if others do.
>
>
>
> On Thu, Nov 14, 2019 at 12:41 PM Soumaya Mauthoor <
> soumaya.mauth...@gmail.com> wrote:
>
>> What did you use to profile memory usage? I've recently been
>> investigating memory usage when loading data using memory_profiler and
>> would be interested to find out about the best approach
>>
>> On Thu, 14 Nov 2019, 17:16 James Fennell, 
>> wrote:
>>
>>> Hi all,
>>>
>>> Just sharing some perf insights into the bulk operation function
>>> bulk_insert_mappings.
>>>
>>> I was recently debugging a SQL Alchemy powered web app that was crashing
>>> due to out of memory issues on a small Kubernetes node. It turned out to be
>>> "caused" by an over optimistic invocation of bulk_insert_mappings.
>>> Basically I'm reading a CSV file with ~500,000 entries into a list of
>>> dictionaries, and then passing it into the bulk_insert_mappings function at
>>> once. It turned out the SQL Alchemy work was using 750mb of RAM, which was
>>> enough to OOM the small node the web app was running on.
>>>
>>> A simple workaround is to split the list of 500,000 entries into chunks
>>> of 1000 entries each, and then call bulk_insert_mappings on each chunk.
>>> When I do this, the extra memory usage is not even noticeable. But also, it
>>> seems that this chunked approach is actually faster! I might benchmark that
>>> to quantify that.
>>>
>>> Thought it was interesting. I wonder would it be worth adding to the
>>> docs on bulk_insert_mappings? Given that function is motivated by
>>> performance, it seems it might be relevant.
>>>
>>> James
>>>
>>>
>>>
>>>

Re: [sqlalchemy] Postgres array containment query generates FROM-clause that causes failure

2019-08-12 Thread Elmer de Looff
Hi Mike,

I've been playing around with the Postgresql dialect array() type a bit
more and have something that appears to work, at least for the small case
that I'm trying to solve for myself. I'd like to check whether I'm on the
right track with how I'm approaching this, though, so please find a patch
(against the current master) attached, as well as a small update to the
example script to use the array(selectable).contained_by(array(literal))
syntax I described in my previous message.

One thing that surprised me when doing this against the master branch was
an error that Subquery no longer has an .as_scalar() method; removing
as_scalar() altogether from that query still yielded a correct query, so
something (excellent) has changed there in 1.4.

On Fri, Aug 9, 2019 at 11:29 AM Elmer de Looff 
wrote:

> I've been milling over your comment regarding the func.array vs array
> approaches, and that makes a lot of sense. Currently any array(iterable)
> statement gets turned into an ARRAY[] literal in Postgres. Extending this
> to emit an ARRAY() constructor when array(query) is called seems like a
> reasonable approach. Turning this expression into a scalar one where
> necessary could be something that is done as part of that process. Going by
> Postgres documentation, this query should return a single column, but it
> seems reasonable to have the user be responsible for honoring that
> limitation and not SQLAlchemy.
>
> Having array(expression) also result in working contains/contained_by
> methods would be a nice bonus, as these don't work for the func.array()
> approach.
>
> I took a brief look at the code for array, but I'm not really sure how the
> whole flow from Python statement to emitted SQL goes. It does seems fairly
> wedded to the concept of being a literal array (looking at the superclass)
> for now, so my suggested extension might be a bit easier said than
> implemented.
>
> On Thu, Aug 8, 2019 at 7:05 PM Mike Bayer 
> wrote:
>
>>
>>
>> On Thu, Aug 8, 2019, at 11:56 AM, Elmer de Looff wrote:
>>
>> Thanks for the quick response and solution!
>>
>> Having the answer, it seems obvious enough, but getting to the solution
>> from the problem was hard, despite excellent documentation, Not sure it
>> deserves an FAQ entry as it might be a bit of an infrequent problem, but
>> maybe that'll help the next person. I'd be happy to take a stab at an entry
>> if you're interested.
>>
>>
>> PostgreSQL's special syntaxes are a much bigger issue than just this.
>> Many syntaxes are not yet supported and require recipes as listed at
>> https://github.com/sqlalchemy/sqlalchemy/issues/3566 .  I would
>> think the Postgresql dialect documentation needs an entire section
>> dedicated to special PG patterns, preferably in tandem with the structures
>> in issue 3566 being implemented as features.
>>
>> also, the approach I just gave you with as_scalar() might not be
>> generalizable as of yet , in that it's not clear what the approach is for
>> multidimensional arrays - as_scalar() really means a SELECT that returns a
>> single column / single row, and already we are somewhat repurposing it here
>> to represent a SELECT that represents multiple rows.This usage should
>> be clarified.
>>
>> Additionally, the use of func.array() works in this case but ideally
>> you'd be using the sqlalchemy.dialects.postgresql.array() construct since
>> this is not as much a SQL function as it is a first class datastructure in
>> Postgresql.  Multidimensional support was added to this construct in
>> https://github.com/sqlalchemy/sqlalchemy/issues/4756 but I don't think
>> it as of yet supports arbitrary SQL expressions, which it should.
>>
>> Basically, this whole area of SQLAlchemy is raw and under-developed.   If
>> we document approaches that just happen to work right now, but aren't
>> tested or supported or part of an overall strategy, then we are creating
>> poor assumptions. So I'd prefer we build out real patterns and have
>> them tested before we document them.
>>
>>
>>
>>
>>
>> On Thu, Aug 8, 2019 at 5:17 PM Mike Bayer 
>> wrote:
>>
>>
>>
>>
>> On Thu, Aug 8, 2019, at 9:48 AM, Elmer de Looff wrote:
>>
>> Hi,
>>
>> I'm trying to create a query to check whether a small number of given
>> keys are all present within a selection of a table. Postgres provides array
>> types/functions for this to check sub/superset properties, which seem to do
>> what I want. The query I'm trying to create is one of the following form:
>>
>>
>>
>> thanks for the clear test case, which allows me 

Re: [sqlalchemy] Postgres array containment query generates FROM-clause that causes failure

2019-08-09 Thread Elmer de Looff
I've been mulling over your comment regarding the func.array vs array
approaches, and that makes a lot of sense. Currently any array(iterable)
statement gets turned into an ARRAY[] literal in Postgres. Extending this
to emit an ARRAY() constructor when array(query) is called seems like a
reasonable approach. Turning this expression into a scalar one where
necessary could be something that is done as part of that process. Going by
the Postgres documentation, such a query should return a single column, but
it seems reasonable to make the user responsible for honoring that
limitation rather than SQLAlchemy.

Having array(expression) also result in working contains/contained_by
methods would be a nice bonus, as these don't work for the func.array()
approach.

I took a brief look at the code for array, but I'm not really sure how the
whole flow from Python statement to emitted SQL goes. It does seem fairly
wedded to the concept of being a literal array (looking at the superclass)
for now, so my suggested extension might be a bit easier said than
implemented.

On Thu, Aug 8, 2019 at 7:05 PM Mike Bayer  wrote:

>
>
> On Thu, Aug 8, 2019, at 11:56 AM, Elmer de Looff wrote:
>
> Thanks for the quick response and solution!
>
> Having the answer, it seems obvious enough, but getting to the solution
> from the problem was hard, despite excellent documentation, Not sure it
> deserves an FAQ entry as it might be a bit of an infrequent problem, but
> maybe that'll help the next person. I'd be happy to take a stab at an entry
> if you're interested.
>
>
> PostgreSQL's special syntaxes are a much bigger issue than just this.
> Many syntaxes are not yet supported and require recipes as listed at
> https://github.com/sqlalchemy/sqlalchemy/issues/3566 .  I would think
> the Postgresql dialect documentation needs an entire section dedicated to
> special PG patterns, preferably in tandem with the structures in issue 3566
> being implemented as features.
>
> also, the approach I just gave you with as_scalar() might not be
> generalizable as of yet , in that it's not clear what the approach is for
> multidimensional arrays - as_scalar() really means a SELECT that returns a
> single column / single row, and already we are somewhat repurposing it here
> to represent a SELECT that represents multiple rows.This usage should
> be clarified.
>
> Additionally, the use of func.array() works in this case but ideally you'd
> be using the sqlalchemy.dialects.postgresql.array() construct since this is
> not as much a SQL function as it is a first class datastructure in
> Postgresql.  Multidimensional support was added to this construct in
> https://github.com/sqlalchemy/sqlalchemy/issues/4756 but I don't think it
> as of yet supports arbitrary SQL expressions, which it should.
>
> Basically, this whole area of SQLAlchemy is raw and under-developed.   If
> we document approaches that just happen to work right now, but aren't
> tested or supported or part of an overall strategy, then we are creating
> poor assumptions. So I'd prefer we build out real patterns and have
> them tested before we document them.
>
>
>
>
>
> On Thu, Aug 8, 2019 at 5:17 PM Mike Bayer 
> wrote:
>
>
>
>
> On Thu, Aug 8, 2019, at 9:48 AM, Elmer de Looff wrote:
>
> Hi,
>
> I'm trying to create a query to check whether a small number of given keys
> are all present within a selection of a table. Postgres provides array
> types/functions for this to check sub/superset properties, which seem to do
> what I want. The query I'm trying to create is one of the following form:
>
>
>
> thanks for the clear test case, which allows me to just make your code
> work.
>
> The ARRAY(select) is against a scalar list, so use as_scalar() so that the
> SELECT becomes a self-contained subquery:
>
> city_ids = sa.select([City.id]).where(City.size_code ==
> "M").as_scalar()
> check = sa.select(
> [array([3, 10, 18]).contained_by(sa.func.array(city_ids))]
> )
>
>
>
>
>
> SELECT :selection <@ ARRAY(
> SELECT id
> FROM city
> WHERE size_code = :size_code)
>
> Wrapping this in a text clause, adding parameters and executing it works
> without a hitch:
>
> raw_select = sa.text("""
> SELECT :selection <@ ARRAY(
> SELECT id
> FROM city
> WHERE size_code = :size_code)""")
> parameterized = raw_select.params(
> selection=[3, 10, 18],
> size_code='M')
> result = engine.execute(parameterized).scalar()
>
> However, I'd like to avoid having textual SQL in my codebase as it's more
> sensitive to changes in names and generally more error-prone. I'm
> struggling converting this to a working Core e

Re: [sqlalchemy] Postgres array containment query generates FROM-clause that causes failure

2019-08-08 Thread Elmer de Looff
Thanks for the quick response and solution!

Having the answer, it seems obvious enough, but getting to the solution
from the problem was hard, despite excellent documentation. I'm not sure it
deserves an FAQ entry, as it might be a bit of an infrequent problem, but
maybe it would help the next person. I'd be happy to take a stab at an
entry if you're interested.

On Thu, Aug 8, 2019 at 5:17 PM Mike Bayer  wrote:

>
>
> On Thu, Aug 8, 2019, at 9:48 AM, Elmer de Looff wrote:
>
> Hi,
>
> I'm trying to create a query to check whether a small number of given keys
> are all present within a selection of a table. Postgres provides array
> types/functions for this to check sub/superset properties, which seem to do
> what I want. The query I'm trying to create is one of the following form:
>
>
>
> thanks for the clear test case, which allows me to just make your code
> work.
>
> The ARRAY(select) is against a scalar list, so use as_scalar() so that the
> SELECT becomes a self-contained subquery:
>
> city_ids = sa.select([City.id]).where(City.size_code ==
> "M").as_scalar()
> check = sa.select(
> [array([3, 10, 18]).contained_by(sa.func.array(city_ids))]
> )
>
>
>
>
>
> SELECT :selection <@ ARRAY(
> SELECT id
> FROM city
> WHERE size_code = :size_code)
>
> Wrapping this in a text clause, adding parameters and executing it works
> without a hitch:
>
> raw_select = sa.text("""
> SELECT :selection <@ ARRAY(
> SELECT id
> FROM city
> WHERE size_code = :size_code)""")
> parameterized = raw_select.params(
> selection=[3, 10, 18],
> size_code='M')
> result = engine.execute(parameterized).scalar()
>
> However, I'd like to avoid having textual SQL in my codebase as it's more
> sensitive to changes in names and generally more error-prone. I'm
> struggling converting this to a working Core expression, a spurious
> FROM-clause keeps being generated:
>
> city_ids = sa.select([City.id]).where(City.size_code == 'M')
> check = sa.select([
> array([3, 10, 18]).contained_by(sa.func.array(city_ids))])
> engine.execute(check).scalar()
>
> This results in a Syntax Error being thrown by Postgres:
>
> sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) subquery in
> FROM must have an alias
> LINE 4: FROM (SELECT city.id AS id
>  ^
> HINT:  For example, FROM (SELECT ...) [AS] foo.
>
> [SQL: SELECT ARRAY[%(param_1)s, %(param_2)s, %(param_3)s] <@ array((SELECT
> city.id
> FROM city
> WHERE city.size_code = %(size_code_1)s)) AS anon_1
> FROM (SELECT city.id AS id
> FROM city
> WHERE city.size_code = %(size_code_1)s)]
> [parameters: {'param_1': 3, 'param_2': 10, 'param_3': 18, 'size_code_1':
> 'M'}]
> (Background on this error at: http://sqlalche.me/e/f405)
>
> The problem appears to be in the "_froms" list that is non-empty on the
> "check" query, but I can't seem to find a way of coercing SQLAlchemy into
> not generating that.
>
> I've attached a minimal script to reproduce the problem. The table is
> described though will have to be created still; it need not contain any
> data, the problem is one of SQL syntax alone.
>
>

[sqlalchemy] Postgres array containment query generates FROM-clause that causes failure

2019-08-08 Thread Elmer de Looff
Hi,

I'm trying to create a query to check whether a small number of given keys 
are all present within a selection of a table. Postgres provides array 
types/functions for this to check sub/superset properties, which seem to do 
what I want. The query I'm trying to create is one of the following form:

SELECT :selection <@ ARRAY(
    SELECT id
    FROM city
    WHERE size_code = :size_code)

Wrapping this in a text clause, adding parameters and executing it works 
without a hitch:

raw_select = sa.text("""
    SELECT :selection <@ ARRAY(
        SELECT id
        FROM city
        WHERE size_code = :size_code)""")
parameterized = raw_select.params(
    selection=[3, 10, 18],
    size_code='M')
result = engine.execute(parameterized).scalar()

However, I'd like to avoid having textual SQL in my codebase, as it's more
sensitive to changes in names and generally more error-prone. I'm
struggling to convert this to a working Core expression; a spurious
FROM-clause keeps being generated:

city_ids = sa.select([City.id]).where(City.size_code == 'M')
check = sa.select([
    array([3, 10, 18]).contained_by(sa.func.array(city_ids))])
engine.execute(check).scalar()

This results in a Syntax Error being thrown by Postgres:

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) subquery in 
FROM must have an alias
LINE 4: FROM (SELECT city.id AS id 
 ^
HINT:  For example, FROM (SELECT ...) [AS] foo.

[SQL: SELECT ARRAY[%(param_1)s, %(param_2)s, %(param_3)s] <@ array((SELECT 
city.id 
FROM city 
WHERE city.size_code = %(size_code_1)s)) AS anon_1 
FROM (SELECT city.id AS id 
FROM city 
WHERE city.size_code = %(size_code_1)s)]
[parameters: {'param_1': 3, 'param_2': 10, 'param_3': 18, 'size_code_1': 
'M'}]
(Background on this error at: http://sqlalche.me/e/f405)

The problem appears to be in the "_froms" list that is non-empty on the 
"check" query, but I can't seem to find a way of coercing SQLAlchemy into 
not generating that.

I've attached a minimal script to reproduce the problem. The table is
declared in the script but will still have to be created; it need not
contain any data, as the problem is one of SQL syntax alone.

# Attached: sqla_select_contained_by.py
import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import array
from sqlalchemy.ext.declarative import declarative_base


Base = declarative_base()


class City(Base):
    __tablename__ = 'city'

    id = sa.Column(sa.Integer, primary_key=True)
    size_code = sa.Column(sa.Text)


def main():
    engine = sa.create_engine('postgres://elmer@/smallville', echo=True)

    # Text-based selection works as intended
    raw_select = sa.text("""
        SELECT :selection <@ ARRAY(
            SELECT id
            FROM city
            WHERE size_code = :size_code)""")
    parameterized = raw_select.params(
        selection=[3, 10, 18],
        size_code='M')
    result = engine.execute(parameterized).scalar()
    print('\nCity codes all match: {}\n'.format(result))

    # Expressing this in Core yields unexpected FROM-clause
    city_ids = sa.select([City.id]).where(City.size_code == 'M')
    check = sa.select([
        array([3, 10, 18]).contained_by(sa.func.array(city_ids))])
    engine.execute(check).scalar()


if __name__ == '__main__':
    main()