On Wednesday, June 17, 2015 at 9:46:25 PM UTC-4, Chris Angelico wrote:
> On Thu, Jun 18, 2015 at 10:45 AM, Paul Hubert <[email protected]> wrote:
> > On Wednesday, June 17, 2015 at 8:24:17 PM UTC-4, Chris Angelico wrote:
> >
> >> Are you sure you want iteration and writelines() here? I would be
> >> inclined to avoid those for any situation that isn't plain text. If
> >> the file isn't too big, I'd just read it all in a single blob and then
> >> write it all out at once.
> >>
> >> ChrisA
> >
> > Do you think that would fix my issue? Could you give me an example?
>
> Sorry for the abrupt and terse previous email; I had a student arrive
> just as I was posting that, and hit Ctrl-Enter when I should really
> have just left the email as a draft. Here's what I'm thinking:
>
> # Was:
> f_in = open(dafile, 'rb')
> f_out = gzip.open('/Users/Paul/Desktop/scripts/pic.jpg.gz', 'wb')
> f_out.writelines(f_in)
> f_out.close()
> f_in.close()
>
> # Now:
> gz = '/Users/Paul/Desktop/scripts/pic.jpg.gz'
> with open(dafile, 'rb') as f_in, gzip.open(gz, 'wb') as f_out:
>     f_out.write(f_in.read())
>
> You might actually be able to write to a StringIO rather than to a
> file, given that you appear to be just reading the data back again
> straight away. But in case you want to keep the file around for some
> other reason, this still works the exact same way you had it.
>
> The main difference is that this version swallows the entire file in a
> single gulp, then passes it all to the gzip writer. If your file is
> tiny (under 1MB), this is perfect. If it's huge (over 1GB), you may
> have problems. In between, it'll probably work, but might be
> inefficient.
>
> Hope that helps!
>
> ChrisA
Same result - server says malformed upload. :/
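
In case it's useful, here's a minimal sketch of the in-memory variant you mentioned, assuming Python 3, where io.BytesIO stands in for StringIO since the gzip data is binary. The helper name and the mtime=0 argument are just illustrative, not something lifted from my actual script:

import gzip
import io

def gzip_to_memory(path):
    """Compress the file at `path` into an in-memory gzip blob."""
    buf = io.BytesIO()
    # mtime=0 keeps the gzip header reproducible run-to-run (optional).
    with open(path, 'rb') as f_in, \
         gzip.GzipFile(fileobj=buf, mode='wb', mtime=0) as f_out:
        f_out.write(f_in.read())
    return buf.getvalue()

# e.g. payload = gzip_to_memory(dafile), then hand payload to the upload call.

That skips the round trip through the .gz file on disk entirely, so if the server still calls the result malformed, the compression step probably isn't the culprit and the problem is more likely in how the bytes are being sent.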
