I would like to send a large pandas.DataFrame to a remote server running MS
SQL. I am using pandas-0.20.3, pyODBC-4.0.21 and sqlalchemy-1.1.13.
My first attempt at tackling this problem can be reduced to the following code:
import sqlalchemy as sa
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % cnxn_str)
data_frame.to_sql(table_name, engine, index=False)
Simple, but very slow... it took about 2.5 minutes to insert 1,000 rows.
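For completeness, cnxn_str is a plain ODBC connection string along these lines
(driver, server, database, and credentials below are made-up placeholders);
urllib.parse.quote_plus is the usual way to escape it before embedding it in
the SQLAlchemy URL:
import urllib.parse
# Made-up placeholder connection string -- adjust driver/server/credentials
cnxn_str = (
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=my_server;DATABASE=my_db;"
    "UID=my_user;PWD=my_password"
)
# quote_plus escapes ";", "=", etc. so the string survives inside the URL
engine = sa.create_engine(
    "mssql+pyodbc:///?odbc_connect=%s" % urllib.parse.quote_plus(cnxn_str)
)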
Using the following code, which does not involve SQLAlchemy, the same task
is performed in less than a second:
import pyodbc as pdb
# convert_df() turns the DataFrame into a list of tuples (sketched below)
list_of_tuples = convert_df(data_frame)
connection = pdb.connect(cnxn_str)
cursor = connection.cursor()
cursor.fast_executemany = True
cursor.executemany(sql_statement, list_of_tuples)
connection.commit()
cursor.close()
connection.close()
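convert_df and sql_statement are small helpers I left out above for brevity;
roughly, they look like this (the table and column names are made-up
placeholders):
def convert_df(df):
    # One plain tuple per row -- exactly the shape executemany() expects
    return list(df.itertuples(index=False, name=None))
# One "?" parameter marker per DataFrame column
sql_statement = "INSERT INTO my_table (col_a, col_b) VALUES (?, ?)"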
Is there a way to flip the fast_executemany switch on when using SQLAlchemy?
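One idea I have come across but not managed to verify (so treat this as an
assumption rather than a confirmed answer) is hooking SQLAlchemy's
before_cursor_execute event to set the flag on the underlying pyODBC cursor:
from sqlalchemy import event
@event.listens_for(engine, "before_cursor_execute")
def set_fast_executemany(conn, cursor, statement, parameters, context, executemany):
    # "cursor" here is the raw pyODBC cursor, so the attribute exists on it;
    # the flag only matters for executemany-style batches
    if executemany:
        cursor.fast_executemany = True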