When I'm using pandas with pd.read_sql_query()
with chunksize to minimize memory usage, there is no difference between
the two runtimes.

table_df = pd.read_sql_query('''select ...''', engine, chunksize=30000)

for df in table_df:
    print(df)

the runtime is nearly the same, around 5 minutes.
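A sketch of how I could time the generator creation and each chunk
separately to see where the minutes go (the real query is shortened to
`select ...`, and `engine` is the same SQLAlchemy engine as above):

import time

import pandas as pd

start = time.perf_counter()
table_df = pd.read_sql_query('''select ...''', engine, chunksize=30000)
print(f"generator created after {time.perf_counter() - start:.2f}s")

for i, df in enumerate(table_df):
    print(f"chunk {i}: {len(df)} rows, {time.perf_counter() - start:.2f}s elapsed")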

I don't know whether the query is already triggered by using print(table_df);
the result of print(table_df) is:

<generator object SQLDatabase._query_iterator at 0x0DC69C30>

but then the runtime is only 6 seconds, like in the DB UI I'm using.
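To check whether the time is spent fetching rows rather than executing the
query, I could pull a single chunk with next() (a sketch reusing the
table_df generator from above):

import time

start = time.perf_counter()
first_chunk = next(table_df)  # fetches only the first 30000 rows
print(f"first chunk: {len(first_chunk)} rows in {time.perf_counter() - start:.2f}s")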

I have no clue what to do.

Greetings Manuel

Trainer Go wrote on Wednesday, June 8, 2022 at 09:27:04 UTC+2:

> Thank you, Philip,
>
> I will test it today.
>
>
> Greetings Manuel
>
> Philip Semanchuk wrote on Tuesday, June 7, 2022 at 17:13:28 UTC+2:
>
>>
>>
>> > On Jun 7, 2022, at 5:46 AM, Trainer Go <[email protected]> wrote: 
>> > 
>> > Hello guys, 
>> > 
>> > I'm executing 2 queries in my Python program with SQLAlchemy, using the 
>> pyodbc driver. 
>> > The database is an Adaptive SQL Anywhere Version 7, 32-bit. 
>> > 
>> > When I execute the queries in a DB UI it takes 5-6 seconds for both 
>> together, but when I run the same queries in my Python program it takes 
>> 5-6 minutes instead of 6 seconds. What am I doing wrong? I'm new at this. 
>>
>> To start, debug one query at a time, not two. 
>>
>> Second, when you test a query in your DB UI, you’re probably already 
>> connected to the database. Your Python program has to make the connection — 
>> that’s an extra step, and it might be slow. If you step through the Python 
>> program in the debugger, you can execute one statement at a time (the 
>> connection and the query) to understand how long each step takes. That will 
>> help to isolate the problem. 
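>>
>> For example, something like this (a sketch; `engine` and `my_query` stand 
>> in for your actual engine and query) would show how long each step takes: 
>>
>> import time 
>>
>> from sqlalchemy import text 
>>
>> start = time.perf_counter() 
>> conn = engine.connect()                # step 1: connect 
>> print(f"connect: {time.perf_counter() - start:.2f}s") 
>>
>> start = time.perf_counter() 
>> result = conn.execute(text(my_query))  # step 2: execute the query 
>> print(f"execute: {time.perf_counter() - start:.2f}s") 
>>
>> start = time.perf_counter() 
>> rows = result.fetchall()               # step 3: fetch all the rows 
>> print(f"fetch: {time.perf_counter() - start:.2f}s") 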
>>
>> Third, keep in mind that receiving results takes time too. If your DB UI 
>> is written in C or some other language that allocates memory very 
>> efficiently, it might be a lot faster than building a Pandas dataframe. 
>>
>> You might want to eliminate Pandas entirely so you don’t have to question 
>> whether or not that’s the source of your slowdown. You could do this 
>> instead - 
>>
>> for row in conn.execute(my_query): 
>>     pass 
>>
>> That will make your Python program iterate over the result set without 
>> allocating memory for all the results at once. 
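>>
>> If it turns out the driver is buffering the entire result set, you could 
>> also try streaming it. A rough sketch (assuming `engine` and `my_query` 
>> from above, and that your dialect supports the stream_results option): 
>>
>> from sqlalchemy import text 
>>
>> # fetch rows in batches instead of buffering them all client-side 
>> with engine.connect().execution_options(stream_results=True) as conn: 
>>     for row in conn.execute(text(my_query)): 
>>         pass 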
>>
>> Hope this helps 
>> Philip 
>>
>>
>>
>>
>>
>>
>> > 
>> > Would the connection string or query help? 
>> > I'm only selecting some data from the DB and converting it into two 
>> dataframes, so I'm not inserting, updating, or deleting data. 
>> > 
>> > I hope somebody can help me. 
>> > 
>> > Best regards Manuel 
>> > 
