Hey Boris,

It's a bit hard to follow what you've posted here, but I'm sure that's
because it came from a complicated script.  Here are a few things to
keep in mind.

You've made the connection EXCLUSIVE, which means that once you send
the first SQL statement to the database, the database is effectively
locked against any other connection, regardless of what queries it
might send.  You imply that getDBConnection calls sqlite.connect().
If self.con already holds a connection, that call creates a second
one, and you would expect your database to be locked at that point.
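Here's a standalone sketch of the effect (not your code; the temp-file
path and the 0.1s timeout are just made up for the demo):

```python
import os
import sqlite3
import tempfile

# With isolation_level='EXCLUSIVE', the Python sqlite3 module issues
# BEGIN EXCLUSIVE before the first data-modifying statement, which
# locks out every other connection until commit()/rollback().
path = os.path.join(tempfile.mkdtemp(), "demo.db")

con1 = sqlite3.connect(path, timeout=0.1, isolation_level='EXCLUSIVE')
con1.execute("CREATE TABLE t (x INTEGER)")
con1.commit()

con1.execute("INSERT INTO t VALUES (1)")  # acquires the exclusive lock

con2 = sqlite3.connect(path, timeout=0.1)
try:
    con2.execute("INSERT INTO t VALUES (2)")
    locked = False
except sqlite3.OperationalError as e:
    locked = "locked" in str(e)  # "database is locked"

con1.commit()  # releases the lock; other connections can write again
con2.execute("INSERT INTO t VALUES (2)")
con2.commit()
```

A second sqlite.connect() on the same file behaves just like con2
here: it stalls on the lock until the timeout expires, then raises
OperationalError.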

So, you could wrap the sqlite.connect() call in a conditional that
checks whether self.con is already a valid connection, or you could
call self.con.commit() and self.con.close() before calling
sqlite.connect() again.
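Something along these lines, for example (a rough sketch reusing the
names from your snippets; the class shape and the DB_TIMEOUT value are
assumptions on my part):

```python
import sqlite3 as sqlite

class DBHandler(object):
    DB_TIMEOUT = 5.0  # assumed value

    def __init__(self):
        self.con = None
        self.cur = None

    def getDBConnection(self, dbpath):
        # Reuse the live connection instead of opening a second one;
        # two connections to the same file will collide under EXCLUSIVE.
        if self.con is not None:
            return self.con
        self.con = sqlite.connect(dbpath, timeout=self.DB_TIMEOUT,
                                  isolation_level='EXCLUSIVE')
        self.cur = self.con.cursor()
        return self.con

    def closeDBConnection(self):
        # The alternative: commit and close before ever reconnecting.
        if self.con is not None:
            self.con.commit()
            self.con.close()
            self.con = None
            self.cur = None
```

Either approach guarantees that only one live connection ever touches
the file.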

Not sure if this helps.  If you have a larger snippet of code you can
send it along.

Daniel

On Sat, Jan 31, 2009 at 11:51 PM, Boris Arloff <boris.arl...@yahoo.com> wrote:
> Daniel,
>
> Apologize for not posting a subject in the original request.
>
> Thanks for offering to help; I do not think it is a connection-related
> problem, but I could be wrong.  Here are the pertinent code segments:
>
> # Method getDBConnection() is called, which performs the following:
> ....
> # create a connection property
> self.con = sqlite.connect( path + file,timeout=self.dbdefs.DB_TIMEOUT,
>                detect_types=sqlite.PARSE_DECLTYPES|sqlite.PARSE_COLNAMES,
>                isolation_level='EXCLUSIVE',check_same_thread=False)
> # config some extensions: row_factory, text_factory
> self.con.row_factory = sqlite.Row    # access by index or col names
> self.con.text_factory = sqlite.OptimizedUnicode    # unicode/bytestr
> #create a general cursor property
> self.cur = self.con.cursor()
> ....
> # Then the following method is called to create the tables if they
> # do not exist:
> # create the database structure if does not exist
> errBool = not self._DBHandler__createDBTables()
> In this case it is a no-op since the tables already exist; we are
> working with an existing sqlite file.
>
> # Then method deleteData(self, **kwargs) is called, which ends up
> # executing the deletes on each table:
> for table in kwargs.keys():
>    ...
>    # build a delete sql statement
>    cmd, errBool = self.__buildDeleteSQL(table, kwargs[table])
>    ...
>    errBool, err = self.__execSQLCmd(cmd, self.cur)
>
> Method __execSQLCmd(self, cmd, cur) verifies that the sql statement is
> complete, then:
> ....
>    cur.execute(cmd)                                # execute sql command
> ....
>
> Every DELETE executes correctly.
>
> # Then method insertData(self, **kwargs) is called:
> for table in kwargs.keys():
>    ...
>    for val in kwargs[table].keys():
>        ...
>        row = kwargs[table][val]
>        # build insert sql statements
>        cmd, errBool = self.__buildInsertSQL(table, row)
>        ...
>        # same method called above to exec sql
>        errBool, err = self.__execSQLCmd(cmd, self.cur)
>
> # The cur.execute(cmd) call succeeds with the inserts on the first two
> # tables, but fails on the third table, and only if the record of the
> # first table is not the last record.
>
> # Delete and insert loops finish thru every db table; whenever errBool
> # is true, breaks out of the loop with an exception:
> except err_handler.DBInsertFail:
>   errBool = True
>   errMsg += "\nInsert statement structure:\n" + str(kwargs)
> except:
>  errBool = True    # something else wrong; check args
>  errMsg += "\nGeneral exception at insertData; structure:\n" + str(kwargs)
> ....
> # If error is returned attempts a rollback; else attempts a commit:
> if errBool:
>   try: self.cur.execute('ROLLBACK;')                # rollback on error
>   except: pass
> else:
>  try: self.cur.execute('COMMIT;')
>  except: pass
>
> The same connection object is maintained throughout; it is never closed
> until the program ends.  Again the same code is used for successful and
> failed results as outlined before.
>
> Thanks,
> Boris
>
> ________________________________
> From: Daniel Watrous <dwmaill...@gmail.com>
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Sent: Saturday, January 31, 2009 5:42:40 PM
> Subject: Re: [sqlite] (no subject)
>
> why don't you send us some code.  It sounds like you might have an
> issue managing your connections.
>
> On Sat, Jan 31, 2009 at 3:09 PM, Boris Arloff <boris.arl...@yahoo.com> wrote:
>> Hi,
>>
>> I am having a small problem with an sqlite3 v3.5.6 database being accessed 
>> from within python 2.5 (import sqlite3 as sqlite).
>>
>> The database has been working correctly with the exception of the following 
>> issue:
>> There are 17 tables, with the first table being a dataset index table
>> with three columns (index, name, datetimestamp).  All other tables have
>> various columns, one being a field that stores the index reference to
>> this first table.  Therefore we can access different datasets.
>>
>> If we create several datasets each with an incremental index (1,2,3, ...n) 
>> in the first table, we can then delete the last dataset n and recreate a new 
>> one with same name and index number; no problem.
>>
>> The problem results when we delete a dataset less than n (last created).  
>> When any dataset 1 thru n-1 is deleted and we attempt to reenter its 
>> information, the following happens:
>>
>> 1. Delete all records from all tables where the dataset id is, let's
>> say, 3 (with n > 3).
>> 2. Then insert all data related to dataset id 3.
>> 3. The dataset index table entry is successful, with index=3, name set
>> to the original name, and a new datetimestamp.
>> 4. The second table to be updated gets its first record inserted
>> correctly, corresponding to this dataset.
>> 5. The third and subsequent tables, however, fail to accept the insert,
>> and sqlite raises an "sqlite3.OperationalError" saying the database is
>> locked.
>>
>> This operation using the last set of data entered does not report this same 
>> error and completes correctly.  For example if we had only three datasets in 
>> the case above, then it would have completed successfully.  As a special 
>> case if we only have one single dataset (i.e. n=1), then we can repeat the 
>> operation successfully until we drop dead.  We can insert all the original 
>> data in every table, delete the data, then recreate the same data, and 
>> repeat.  As soon as a second dataset is created we can repeat the process 
>> with records that pertain to dataset 2 only, any attempts to do the same to 
>> dataset 1 causes the "database is locked" error.
>>
>> Can anyone help with this?  It seems to me that sqlite corrupts the
>> index when it is deleted, so perhaps we cannot reuse the same index
>> number.
>>
>> Thanks,
>> Boris
>>
>>
>>
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>