I'm working on a Pyramid-based website that uses SQLite for its backend (via 
SQLAlchemy).  I want to add a link that will allow a user to download a backup 
of the SQLite database.  Here's what I have so far for the view handler:

>   1 import datetime
>   2 import cStringIO
>   3 import zipfile as Z
>   4 from pyramid.response import Response
>   5 import os
>   6 
>   7 def backup(request):
>   8     conn_str = request.registry.settings['sqlalchemy.url']
>   9     i = conn_str.find('///')
>  10     dbpath = conn_str[i+3:]
>  11     dbname = os.path.basename(dbpath)
>  12     dbhead = os.path.splitext(dbname)[0]
>  13 
>  14     d = datetime.date.today()
>  15     f = open(dbpath, 'rb').read()
>  16 
>  17     output = cStringIO.StringIO()
>  18 
>  19     zf = Z.ZipFile(output, 'w', Z.ZIP_DEFLATED)
>  20     zf.writestr(dbname, f)
>  21     zf.close()
>  22 
>  23     o = output.getvalue()
>  24     output.close()
>  25 
>  26     response = Response(
>  27         content_type='application/zip',
>  28         content_disposition='attachment; filename=%s%02d%02d%02d.zip' %
>                 (dbhead, d.year % 100, d.month, d.day),
>  29         content_encoding='binary',
>  30         content_length=len(o),
>  31         body=o,
>  32         )
>  33     # response.app_iter = output.getvalue()
>  34 
>  35     return response


The code delivers a zip file containing the current database. It works but I 
have some concerns:

The code in lines 9 and 10 seems like a kludgy way to extract the database 
file name from the configuration file. Is there a better way to parse 
SQLAlchemy connection strings? SQLAlchemy has a method for building them 
(sqlalchemy.engine.url.URL), but I couldn't find a parser.
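As it turns out, SQLAlchemy does ship the matching parser: sqlalchemy.engine.url.make_url turns a connection string back into a URL object, and for SQLite URLs its .database attribute is the filesystem path. A minimal sketch:

```python
from sqlalchemy.engine.url import make_url

def db_path_from_settings(settings):
    # make_url parses the connection string into a URL object; for
    # sqlite URLs, .database is the file path (None for the in-memory
    # database 'sqlite://').
    url = make_url(settings['sqlalchemy.url'])
    return url.database
```

For example, make_url('sqlite:////var/data/site.db').database gives '/var/data/site.db', which replaces the find('///') slicing entirely.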

Notice lines 31 (the body parameter to the Response constructor) and 33 
(setting the response.app_iter member). The code works with either (but 
obviously not both), but when testing with the local server (paster serve), 
setting the body parameter results in a significantly faster download. The 
zipped database file is currently about 167k and downloads instantly with the 
body parameter, but takes 4-5 seconds with app_iter. I assume this is because 
with app_iter the file is downloaded in multiple smaller chunks. Is there a 
limit on the size of the data that can be sent via the body parameter?
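The likely culprit is not chunk size as such but the type of the iterable: a WSGI server performs one write per item of app_iter, and iterating a plain string yields one-character items, so a 167k body becomes ~170,000 tiny writes. A plain-Python sketch of the difference (no Pyramid required):

```python
body = 'PK\x03\x04' + 'x' * 8  # stand-in for the zipped bytes

# A bare string iterates character by character -- this is what
# response.app_iter = output.getvalue() hands to the WSGI server.
per_char = list(iter(body))
assert len(per_char) == len(body)

# Wrapping the whole body in a one-element list yields a single chunk.
one_chunk = list(iter([body]))
assert one_chunk == [body]
```

So response.app_iter = [output.getvalue()] should perform about the same as passing body=; and since body is simply held in memory, there is no practical size limit beyond available RAM.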

Even though I specify a content_length parameter, while the browser is 
downloading the file it acts as if it does not know the total download size. 
The status says x bytes of ??? and the progress bar indicates an indeterminate 
length. I've observed this with both Firefox and Safari. Is there a way to 
tell the browser the total download length so the progress bar fills in 
properly?
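One thing worth ruling out here is the content_encoding='binary' line: 'binary' is a MIME Content-Transfer-Encoding value, not a registered HTTP Content-Encoding, and an unrecognized encoding may lead a server or browser to disregard the declared length (checking the headers that actually arrive, e.g. with curl -I, would confirm this). As a sketch, the headers the download really needs are just these three; download_headers is a hypothetical helper for illustration, not a Pyramid API:

```python
import datetime

def download_headers(dbhead, body, today=None):
    # Hypothetical helper: the three headers a zip download needs.
    # Note the deliberate absence of Content-Encoding -- 'binary' is
    # not a valid HTTP value and is best left out entirely.
    d = today or datetime.date.today()
    filename = '%s%02d%02d%02d.zip' % (dbhead, d.year % 100, d.month, d.day)
    return {
        'Content-Type': 'application/zip',
        'Content-Disposition': 'attachment; filename=%s' % filename,
        'Content-Length': str(len(body)),
    }
```

Dropping the content_encoding argument from the Response constructor (while keeping content_type, content_disposition, and the body) is the first experiment I'd try.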

Thanks,
Mark


-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
