Thank you again Niphlod. That seems like a sensible way for me to separate
the two out. I'll give that a try.
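For my own notes, here's roughly what I'm planning to put in my models, if I've understood the advice correctly (the sqlite file names, the abbreviated field list, and the 60-second period are just placeholders I picked):

# db.py -- application data stays in its own SQLite file
db = DAL('sqlite://storage.sqlite')

db.define_table('messages',
    Field('message'),
    Field('uid'),
    Field('status', default='unsent'),
    Field('modified'))

# scheduler.py -- the scheduler gets its own database, so its frequent
# writes (worker heartbeats, task runs) never compete with message inserts
db2 = DAL('sqlite://scheduler.sqlite')

from gluon.scheduler import Scheduler
mysched = Scheduler(db2)

and then queue the polling task once (from the web2py shell or a one-off admin action, not in a model, so it doesn't get re-queued on every request) with the new API:

# repeats=0 keeps repeating every `period` seconds indefinitely
mysched.queue_task('send_unsent_messages', period=60, repeats=0)

If I've got that right, the worker's bookkeeping writes should only ever touch scheduler.sqlite, so a slow select on messages shouldn't hold a lock the scheduler needs. If the locks persist after the split I'll look at PostgreSQL.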
On Friday, 9 November 2012 10:31:04 UTC-5, Niphlod wrote:
>
> if you can, use a separate db for the scheduler. SQLite doesn't handle
> concurrent writes well (with its default operational capabilities), so
> having the scheduler operate on the same database as your "insertion" of
> messages can lead to locks.
>
> Just do
>
> db = DAL('sqlite://whatever.db')
> db2 = DAL('sqlite://whatever_scheduler.db')
>
> db.define_table('messages', .....)
>
> from gluon.scheduler import Scheduler
> mysched = Scheduler(db2)
>
> and to queue tasks you can then use
>
> db2.scheduler_task.validate_and_insert(****)
>
> or, with the new API
>
> mysched.queue_task(***)
>
>
>
> On Friday, November 9, 2012 3:56:59 PM UTC+1, Mike Anson wrote:
>>
>> Greetings...
>>
>> I may have a problem with my database (sqlite) locking.
>>
>> Traceback (most recent call last):
>>   File "/home/web2py/src/web2py/gluon/scheduler.py", line 218, in executor
>>     result = dumps(_function(*args,**vars))
>>   File "applications/ircmessage/models/tasks.py", line 57, in send_unsent_messages
>>     for row in db(db.messages.status=='unsent').select(db.messages.id, db.messages.message, db.messages.uid):
>>   File "/home/web2py/src/web2py/gluon/dal.py", line 8787, in select
>>     return adapter.select(self.query,fields,attributes)
>>   File "/home/web2py/src/web2py/gluon/dal.py", line 2127, in select
>>     return super(SQLiteAdapter, self).select(query, fields, attributes)
>>   File "/home/web2py/src/web2py/gluon/dal.py", line 1615, in select
>>     return self._select_aux(sql,fields,attributes)
>>   File "/home/web2py/src/web2py/gluon/dal.py", line 1580, in _select_aux
>>     self.execute(sql)
>>   File "/home/web2py/src/web2py/gluon/dal.py", line 1693, in execute
>>     return self.log_execute(*a, **b)
>>   File "/home/web2py/src/web2py/gluon/dal.py", line 1687, in log_execute
>>     ret = self.cursor.execute(*a, **b)
>> OperationalError: database is locked
>>
>> I have a scheduler task that runs every minute and gets records
>> (messages) with a status of unsent out of the DB. The messages then get
>> posted to an IRC channel and updated to sent so they are not included the
>> next time the scheduler runs the script.
>>
>> This seems to work just fine.
>>
>> The problem appears when I inject a new message (which by default has its
>> status set to unsent) via a bash script. The message inserts fine, but my
>> script that posts to the IRC channel doesn't post anything; it simply
>> updates the message's status to sent without actually sending it. By
>> sending I mean posting a message over the socket.
>>
>> Here is my model for sending unsent messages:
>> for row in db(db.messages.status=='unsent').select(db.messages.id,
>>                                                    db.messages.message,
>>                                                    db.messages.uid):
>>     message_id = row.id
>>     message_message = row.message
>>     message_uid = row.uid
>>
>>     # socket connection already opened earlier in the script
>>     s.send("PRIVMSG %s :%s - %s\r\n" % (channel, message_uid, message_message))
>>     print "message %s has been sent" % message_id
>>
>>     ## Only seems to print the message when a delay is here.
>>     time.sleep(5)
>>
>>     ## Set message record to sent and update modified field
>>     modified_stamp = strftime("%Y-%m-%d %H:%M:%S")
>>     db.messages[message_id] = dict(status='sent', modified=modified_stamp)
>>
>> db.commit()
>>
>> Inserting a message via jsonrpc (shown) and cURL (not shown):
>>
>> @service.jsonrpc
>> def savemessage(message, uid):
>>     db.messages.insert(message=message, uid=uid)
>>     db.commit()
>>
>>     message = {"status":"saved"}
>>     return message
>>
>>
>> Should I simply switch to a PostgreSQL or MySQL database to prevent this
>> locking?
>>
>> Thanks for any help/advice in advance
>>
>