So I have been using SQLAlchemy to convert some UniData data stores into
MS SQL data.

One of the GL components in our system is called object, well object code.
Most refer to it as object, so when I defined my model for the table
including it I named it object.

It all works fine, but object is of course a Python builtin, and the
purist in me wonders whether shadowing it as a model attribute will cause
any problems.
Thank you everyone for the responses.

I think it is not as big a deal as I might have been expecting. The
purist in me does not like redefining object, but from an API perspective
there is no cleaner, purpose-built name for the field in question.

Ken
yeah but I have it in the model as

class gltable(Base):
    ...
    object = Column(String(6))
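For what it's worth, one way to sidestep the builtin-shadowing question
entirely is SQLAlchemy's ability to give the Python attribute a different
name from the SQL column. A minimal sketch, assuming a hypothetical table
and primary key (only the object column itself comes from this thread):

    from sqlalchemy import Column, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class gltable(Base):
        __tablename__ = 'gltable'                   # hypothetical table name
        id = Column(String(10), primary_key=True)   # hypothetical PK column
        # Python attribute object_code maps to the SQL column named
        # "object", so the builtin is never shadowed on the class
        object_code = Column('object', String(6))

Queries would then use gltable.object_code while the generated SQL still
says "object".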
On Wed, Aug 9, 2017 at 6:33 PM, Mike Bayer <mike...@zzzcomputing.com> wrote:
> On Wed, Aug 9, 2017 at 2:12 PM, Ken MacKenzie <deviloc...@gmail.com>
> wrote:
> > So I have been using SQLAlchemy to convert some UniData data stores ...
> condition = sa.and_(
>     table.c.pk1 == sa.bindparam('pk1'),
>     table.c.pk2 == sa.bindparam('pk2'),
> )
> statement = sa.delete(table, whereclause=condition)
> batchparams = [{'pk1': v[0], 'pk2': v[1]} for v in id_batch]
> session.execute(statement, batchparams)
>
> Simon
>
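For reference, a self-contained version of the pattern Simon describes,
a sketch with the imports and a throwaway SQLite engine filled in (the
table, column names, and values are placeholders):

    import sqlalchemy as sa

    engine = sa.create_engine('sqlite://')   # stand-in engine for the sketch
    metadata = sa.MetaData()
    table = sa.Table(
        'gltable', metadata,
        sa.Column('pk1', sa.String(6), primary_key=True),
        sa.Column('pk2', sa.String(6), primary_key=True),
    )
    metadata.create_all(engine)

    # one prepared DELETE with named bind parameters
    condition = sa.and_(
        table.c.pk1 == sa.bindparam('pk1'),
        table.c.pk2 == sa.bindparam('pk2'),
    )
    statement = sa.delete(table).where(condition)

    id_batch = [('A1', 'B1'), ('A2', 'B2')]   # hypothetical PK tuples
    batchparams = [{'pk1': v[0], 'pk2': v[1]} for v in id_batch]

    with engine.begin() as conn:
        # passing a list of dicts makes the DBAPI use executemany()
        conn.execute(statement, batchparams)

The point of the bindparam() approach is that the statement is compiled
once and the driver loops over the parameter sets.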
Sorted:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       16 1053.794   65.862 1053.794   65.862 {method 'execute' of 'pyodbc.Cursor' objects}
348916/348311    0.539    0.000    0.556    0.000 {built-in method isinstance}
160920/160918    0.313    0.000    0.313
Sorted Textual SQL profile

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       16    1.840    0.115    1.840    0.115 {method 'execute' of 'pyodbc.Cursor' objects}
     9572    0.196    0.000    0.788    0.000 elements.py:1860(_construct)
    57700    0.189    0.000
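For anyone wanting to reproduce profiles like the two above, a sketch of
the standard-library incantation (dict_delete and batch stand in for
whatever call is being measured):

    import cProfile
    import pstats

    profiler = cProfile.Profile()
    profiler.enable()
    dict_delete(batch)          # hypothetical call under measurement
    profiler.disable()

    # sort by internal time, matching the listings above
    pstats.Stats(profiler).sort_stats('tottime').print_stats(10)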
> 16 SQL statements that take a very different
> amount of time depending on format. Can you please paste the output
> with echo=True on create_engine() so that the difference in SQL
> statements can be seen?
>
> On Wed, Aug 30, 2017 at 12:30 PM, Ken MacKenzie <devil...@gmail.com>
> wrote:
I tried an alternate version:

@classmethod
def dict_delete2(cls, dict_in):
    """
    Delete records from the database
    based on a dictionary keyed by
    PK tuple
    """
    s = dbc.get_session()
    keys_in = ["-".join(k) for k in dict_in.keys()]
I have a query I have constructed and I had to deal with a composite
primary key to select items:

q = s.query(cls)
or_cond = []
for x in id_batch:
    pkf = [text(f + "=='" + v + "'") for f, v in zip(cls.SQL_PK, x)]
    and_cond = and_(*pkf)
    or_cond.append(and_cond)
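The same OR-of-ANDs construction can be written with real column
attributes instead of text(), which lets SQLAlchemy handle quoting and
escaping; a sketch reusing cls, s, and id_batch from the code above:

    from sqlalchemy import and_, or_

    # cls.SQL_PK is assumed to be the tuple of PK attribute names,
    # e.g. ('pk1', 'pk2'); id_batch is a list of matching value tuples
    or_cond = []
    for x in id_batch:
        pkf = [getattr(cls, f) == v for f, v in zip(cls.SQL_PK, x)]
        or_cond.append(and_(*pkf))

    q = s.query(cls).filter(or_(*or_cond))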
On Wednesday, August 30, 2017 at 4:41:38 PM UTC-4, Ken MacKenzie wrote:
>
> I am noticing the 16 execute calls. There should only be 10 with a batch
> size of 1000 and a record size of 9,000 and some change.
>
> Are you just wanting to see the sql via a print(q)?
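(For the archive: besides print(q), the query's SQL can be rendered with
its values inlined, which is handy when comparing the two versions. A
sketch against the q built earlier:)

    # str(q) renders the SQL with placeholder markers; compiling with
    # literal_binds inlines the bound values (for inspection only)
    print(q.statement.compile(compile_kwargs={"literal_binds": True}))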
With a batch size of 1 based on the original selection, no improvement.
I am going to see what SQL this is creating.

2328928 function calls (2085124 primitive calls) in 1084.925 seconds

Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall
On Wednesday, August 30, 2017 at 4:43:01 AM UTC-4, Simon King wrote:
>
> On Tue, Aug 29, 2017 at 9:49 PM, Ken MacKenzie <devil...@gmail.com
> > wrote:
> > I have a query I have constructed and I had to deal with a composite
> > primary key to select items;
> You could try playing with your batch sizes to see what sort of effect
> that has.
>
> Simon
>
> On Wed, Aug 30, 2017 at 4:01 PM, Ken MacKenzie <devil...@gmail.com
> > wrote:
> > So I implemented this version with one change. I moved this line:
> >
After the current sorted profile finishes I will revert to the textual
version and run a profile on that. I expect this to take another 10-15
minutes to finish.
At present the batch size is set to 1000, total record count is just over
9000 in these tests.
The reason for 1000 was at
So here is my solution as of now. As far as I can tell it hits the marks of:

1. DB neutral
2. Performance (9k records deleted in 5 seconds)
3. Generic
(excuse the print statements still there for testing)
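The code itself did not survive in this archive, so here is a
reconstruction sketch of what a solution hitting those three marks could
look like, pieced together from the fragments earlier in the thread
(dict_delete, dbc.get_session, and SQL_PK appear in the thread; the exact
body is a guess, not necessarily Ken's final version):

    from sqlalchemy import and_, or_

    @classmethod
    def dict_delete(cls, dict_in, batch_size=1000):
        """Delete records whose composite PKs appear in dict_in,
        in batches, using plain column comparisons (DB neutral)."""
        s = dbc.get_session()            # session factory, as used above
        ids = list(dict_in.keys())       # keys are PK value tuples
        for i in range(0, len(ids), batch_size):
            id_batch = ids[i:i + batch_size]
            or_cond = [
                and_(*[getattr(cls, f) == v
                       for f, v in zip(cls.SQL_PK, x)])
                for x in id_batch
            ]
            print(len(ids) - i)          # progress print, for testing
            s.query(cls).filter(or_(*or_cond)).delete(
                synchronize_session=False)
        s.commit()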
I would welcome comments on this proposed solution. I have learned a lot
from the advice in
Ending output from batch size of 10:
501 - 0:24:35.558103
491 - 0:24:37.197976
481 - 0:24:38.909318
471 - 0:24:40.645801
461 - 0:24:41.954807
451 - 0:24:43.628955
441 - 0:24:45.246469
431 - 0:24:46.662228
421 - 0:24:48.192343
411 - 0:24:49.839395
401 - 0:24:51.513340
391 - 0:24:53.153668
381 -
I have read that this can be an issue with pyodbc and that there are
engine settings related to it. Also perhaps I should try using the
pymssql driver to see if that changes anything.
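One of the engine settings alluded to here, for later readers: SQLAlchemy
1.3 added a fast_executemany flag to the mssql+pyodbc dialect (it
postdates this thread). A sketch with a placeholder connection string:

    from sqlalchemy import create_engine

    # fast_executemany tells pyodbc to send parameter batches in bulk
    # instead of one round trip per row (SQL Server / pyodbc only;
    # the connection string here is a placeholder)
    engine = create_engine(
        "mssql+pyodbc://user:pass@my_dsn",
        fast_executemany=True,
    )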
On Thursday, August 31, 2017 at 5:00:47 PM UTC-4, Ken MacKenzie wrote:
>
> So inspecting the elements of the tuple, they are both str, and hence
> unicode.
So inspecting the elements of the tuple, they are both str, and hence
unicode.

Are you saying that if I convert those values to bytes it could improve
performance?
> I'd not bother with the literal_binds and just use a literal value:
>
> pkf = [(col == literal_column("'%s'" % v))
>        for (col, v) in zip(cls.SQL_PK, x)]
for this process, as what we are dealing with is a data replication
service pulling from a U2 data store to SQL Server. The delete is part of
batch cleanup testing.
On Thursday, August 31, 2017 at 5:13:58 PM UTC-4, Ken MacKenzie wrote:
>
> Tried doing various conversions on the pk values as they
That makes sense, and as much as I would volunteer to try to maintain the
dialect, I can't justify using work resources (where I have access to
Sybase) to maintain an open source component for a DB we are working to
migrate away from.

Since the metadata is collecting OK, minus some type
Error message:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000',
"[42000] [FreeTDS][SQL Server]Incorrect syntax near
'dbo.APPLICATION_STATUS_DIM.'.\n (102) (SQLExecDirectW)") [SQL: 'SELECT
dbo."APPLICATION_STATUS_DIM"."APPLICATION_STATUS_SKEY" AS