Below is the traceback. The error surfaces at the conn.execute() call that 
performs the INSERT: that is the first point where the stale connection is 
actually used, so that is where the disconnect is detected.

Traceback (most recent call last):
  File "myscript.py", line 126, in <module>
    main()
  File "myscript.py", line 33, in main
    insert_data(final_data, table_name)
  File "myscript.py", line 122, in insert_data
    conn.execute(ins, final_data)
  File "/usr/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 
824, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 
874, in _execute_clauseelement
    return self.__execute_context(context)
  File "/usr/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 
894, in __execute_context
    self._cursor_executemany(context.cursor, context.statement, 
context.parameters, context=context)
  File "/usr/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 
960, in _cursor_executemany
    self._handle_dbapi_exception(e, statement, parameters, cursor, context)
  File "/usr/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 
931, in _handle_dbapi_exception
    raise exc.DBAPIError.instance(statement, parameters, e, 
connection_invalidated=is_disconnect)
sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server 
has gone away') u'INSERT INTO.......
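
In the meantime, a mitigation I have seen suggested for 2006 errors (I have 
not confirmed it fixes this case) is to have the pool recycle connections 
before MySQL's wait_timeout closes them server-side:

from sqlalchemy import create_engine

# pool_recycle discards pooled connections older than the given number of
# seconds, so the pool never hands out one that MySQL has already closed.
engine = create_engine('mysql://blah:blah123@localhost/dbname',
                       pool_recycle=3600)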

On Monday, April 1, 2013 7:04:48 PM UTC-7, [email protected] wrote:
>
> I am using the sqlalchemy package in python. I have an operation that 
> takes some time to execute after I perform an autoload on an existing 
> table. This causes the following error when I attempt to use the connection:
>
> sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server has gone away')
>
> I have a simple utility function that performs an executemany-style insert:
>
> from sqlalchemy import create_engine, MetaData, Table
>
> def insert_data(data_2_insert, table_name):
>     engine = create_engine('mysql://blah:blah123@localhost/dbname')
>     # MetaData is a Table catalog.
>     metadata = MetaData()
>     # Reflect the table's columns from the existing database table.
>     mytable = Table(table_name, metadata, autoload=True, autoload_with=engine)
>     for c in mytable.c:
>         print c
>     column_names = tuple(c.name for c in mytable.c)
>     # Build one parameter dict per row for the executemany.
>     final_data = [dict(zip(column_names, x)) for x in data_2_insert]
>     ins = mytable.insert()
>     conn = engine.connect()
>     conn.execute(ins, final_data)
>     conn.close()
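>
> For reference, a call looks roughly like this (the rows and table name 
> below are made-up placeholders):
>
> rows = [(1, 'foo'), (2, 'bar')]  # one tuple per row, in column order
> insert_data(rows, 'some_table')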
>
>
> It is the following line that takes a long time to execute, since 
> 'data_2_insert' has 677,161 rows:
>
> final_data = [dict(zip(column_names, x)) for x in data_2_insert]
>
> I came across the post below, which refers to a similar problem; however, 
> I am not sure how to implement the connection management it suggests, as 
> I am quite a newbie.
>
> http://stackoverflow.com/questions/3033234/handle-mysql-restart-in-sqlalchemy
>
> Note for SQLAlchemy 0.7 - PoolListener is deprecated, but the same solution 
> can be implemented using the new event system. – robots.jpg
>
> Here is the link to the new event system described by one of the users:
>
> http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic
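>
> From my reading of that page, the recipe adapted to the event system would 
> look roughly like this (untested on my side):
>
> from sqlalchemy import event, exc
> from sqlalchemy.pool import Pool
>
> @event.listens_for(Pool, "checkout")
> def ping_connection(dbapi_connection, connection_record, connection_proxy):
>     # Ping on every checkout from the pool; if the ping fails, raise
>     # DisconnectionError so the pool discards the stale connection and
>     # retries the checkout with a fresh one.
>     cursor = dbapi_connection.cursor()
>     try:
>         cursor.execute("SELECT 1")
>     except Exception:
>         raise exc.DisconnectionError()
>     cursor.close()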
>
> If someone can please show me a couple of pointers on how I could go about 
> integrating the suggestions into the way I use sqlalchemy I would be very 
> appreciative. Thank you. 
