Are you using SQLite? It's possible that the timeout is reached because the 
database is locked and the scheduler_run record (which you confirm is 
missing) can't be inserted. Also, just to be sure, did you try raising the 
timeout?
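
If it is SQLite, that failure mode is easy to reproduce outside web2py. 
Here's a minimal sqlite3 sketch (not web2py code; the scheduler_run table 
and statuses are just illustrative) of how a connection holding the write 
lock makes a concurrent insert fail with "database is locked":

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "storage.sqlite")

# One connection plays a busy worker holding the write lock.
worker = sqlite3.connect(path, timeout=0.1, isolation_level=None)
worker.execute("CREATE TABLE scheduler_run (id INTEGER PRIMARY KEY, status TEXT)")
worker.execute("BEGIN IMMEDIATE")  # grab and hold the write lock
worker.execute("INSERT INTO scheduler_run (status) VALUES ('RUNNING')")

# A second connection plays the scheduler trying to insert its record.
other = sqlite3.connect(path, timeout=0.1, isolation_level=None)
locked = False
try:
    other.execute("INSERT INTO scheduler_run (status) VALUES ('ASSIGNED')")
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)

worker.execute("ROLLBACK")
worker.close()
other.close()
```

If the lock outlives the busy timeout, the insert gives up with that error, 
which on the scheduler side can look exactly like a run that never wrote its 
output. Raising the timeout, or moving to a server database, usually makes 
it go away.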

If I cannot fix it I will have to find a different solution for directory 
> monitoring and not use the scheduler.
>

Can't you use a task with retry_failed?
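
To be clear about what that buys you: a run that raises an exception gets 
re-queued and re-run up to retry_failed extra times before the task is 
marked FAILED for good. A toy sketch of the semantics (not web2py code, 
just the retry idea):

```python
# Toy model of retry_failed semantics: re-run a failing callable up to
# `retry_failed` extra times before giving up.
def run_with_retries(task, retry_failed=2):
    attempts = 0
    while True:
        attempts += 1
        try:
            return task(), attempts
        except Exception:
            if attempts > retry_failed:
                raise  # out of retries: the task would be marked FAILED

calls = {"n": 0}

def flaky():
    # Fails twice (e.g. a partially written file), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("partial file")
    return "success"

result, attempts = run_with_retries(flaky, retry_failed=2)
# result == "success" on the third attempt
```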

The next step is enabling debug logging of the scheduler to see what goes 
on when the task times out.
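
Something like this, as a sketch. I'm assuming here that the scheduler logs 
under a logger named "web2py.scheduler" -- check gluon/scheduler.py in your 
version for the exact name:

```python
import logging
import sys

# Assumption: web2py's scheduler uses a logger named "web2py.scheduler";
# verify the exact name in gluon/scheduler.py for your web2py version.
sched_logger = logging.getLogger("web2py.scheduler")
sched_logger.setLevel(logging.DEBUG)

# Send everything to stderr so it shows up in the worker's console output.
handler = logging.StreamHandler(sys.stderr)
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
sched_logger.addHandler(handler)
```

Depending on your version, starting the worker with a higher debug level 
from the command line (web2py's -D/--debug option, if your version has it) 
may achieve the same thing.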

On Thursday, December 13, 2012 7:17:29 AM UTC+1, Mike D wrote:
>
> Hello,
>
> I am using the web2py scheduler to run a task to monitor a directory. 
> Here's some code describing what I'm doing:
>
> import glob
> import logging
> import logging.handlers
> import os
> import shutil
> import sys
>
> import Image  # PIL
>
>
> def get_configured_logger():
>
>     logger = logging.getLogger("ss_server")
>
>     if (len(logger.handlers) == 0):
>
>         handler = logging.handlers.RotatingFileHandler("/path/to/log/log.txt", maxBytes=1024*1024*10, backupCount=2)
>
>         handler.setLevel(logging.DEBUG)
>
>         logger.addHandler(handler)
>
>         logger.setLevel(logging.DEBUG)
>
>         logger.debug("Logger created")
>
>     return logger
>
>
> def write_to_log(s):
>
>     l = get_configured_logger()
>
>     l.info(s)
>
> ...
>
>
> def searchForFiles():
>
>     print("search for files")
>
>     write_to_log("searching for files")
>
>     print(os.getcwd())
>
>     write_to_log("creating watermark")
>
>     watermark_opaque = Image.open('./path/to/image/watermark.png')
>
>     watermark = reduce_opacity(watermark_opaque, 0.7)
>
>     write_to_log("done creating watermark")
>
>     write_to_log("globbing files")
>
>     files = glob.glob(INPUT_DIR + "*.jpg")
>
>     write_to_log("files globbed")
>
>     for filename in files:
>
>         write_to_log("getting basename for " + filename)
>
>         filename = os.path.basename(filename)
>
>         write_to_log("splitting filename")
>
>         parts = filename.split('-')
>
>         write_to_log("checking filename")
>
>         if (len(parts) == 6):
>
>             try:
>
>                 print("processing file: " + filename)
>
>                 write_to_log("processing file: " + filename)
>
>                 im = Image.open(INPUT_DIR + filename)
>
>                 write_to_log("adding watermark")
>
>                 im.paste(watermark, (im.size[0] - watermark.size[0] - 20, im.size[1] - watermark.size[1] - 20), watermark)
>
>                 im.save(INPUT_DIR + filename, "JPEG", quality=100)
>
>                 write_to_log("added watermark")
>
>                 write_to_log("creating scaled images")
>
>                 createScaledImage(64, INPUT_DIR + filename, THUMBS_DIR)
>
>                 createScaledImage(600, INPUT_DIR + filename, SMALL_DIR)
>
>                 write_to_log("done creating scaled images")
>
>                 pic_id = processFile(filename)
>
>                 print("processed file successfully")
>
>                 write_to_log("processed file successfully")
>
>                 write_to_log("renaming files")
>
>                 shutil.move(INPUT_DIR + filename, PROCESSED_DIR + "%d" % pic_id + ".jpg")
>
>                 shutil.move(THUMBS_DIR + filename, THUMBS_DIR + "%d" % pic_id + ".jpg")
>
>                 shutil.move(SMALL_DIR + filename, SMALL_DIR + "%d" % pic_id + ".jpg")
>
>                 write_to_log("done renaming files")
>
>             except IOError:
>
>                 # this is likely due to a partial file, so let it finish
>                 # writing and try again next time
>
>                 print("IO Error")
>
>                 write_to_log("IO Error")
>
>                 pass
>
>             except Exception as e:
>
>                 print("error processing file: " + str(e))
>
>                 write_to_log("error: " + str(e))
>
>                 shutil.move(INPUT_DIR + filename, ERRORS_DIR + filename)
>
>             write_to_log("end of loop")
>
>     write_to_log("ending function")
>
>     return "success"
>
> myscheduler = Scheduler(db, tasks=dict(searchForFiles=searchForFiles))
>
>
> I have a task inserted into my scheduler_task table with the following 
> properties:
> function name: searchForFiles
> repeats: 0
> timeout: 240
> sync output: 15
>
> Everything works fine except I get random TIMEOUT runs 1-2 times per day. 
> Furthermore these runs will happen in the middle of the night when nothing 
> has been added to the directory. When a TIMEOUT happens, the last line in 
> my log file is "ending function". Also, there is no output at all in the 
> scheduler_run record for the run that was marked as TIMEOUT. 
>
> For these reasons I do not think the timeout is happening inside my code; 
> I believe it is somehow happening between when the scheduler picks up the 
> task and when it calls my function. I could totally be wrong though. I 
> added such granular logging to find exactly where the timeout was 
> occurring so I could fix it, but I cannot find the problem anywhere.
>
> Please let me know if anyone has any ideas on what could be causing this 
> issue. Any help would be very much appreciated. If I cannot fix it I will 
> have to find a different solution for directory monitoring and not use the 
> scheduler.
>
> Thanks again,
> Mike
>
