Hello,

I'm not entirely sure how Amazon S3 affects your situation, but #2070
is the only option I know of for chunking your files (i.e. for avoiding
loading the entire contents into memory).
If you're using #2070, then files that were successfully streamed to
disk will have a 'tmpfile' key, which holds an open file object.

So you can do something like this:

    import StringIO

    chunk_size = 65535

    upload = request.FILES['file']
    if 'tmpfile' in upload:
        # #2070 streamed the upload to disk; 'tmpfile' is an open file object.
        f = upload['tmpfile']
    else:
        # Smaller uploads are held in memory as a string under 'content'.
        f = StringIO.StringIO(upload['content'])

    chunk = f.read(chunk_size)
    while chunk:
        # Do something with the chunk. (Like send it to Amazon?)
        chunk = f.read(chunk_size)
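
If the goal is to push the upload on to S3, a third-party library such
as boto can consume the file object directly instead of the manual loop
above. This is only a sketch; the credentials, bucket name and key name
below are placeholders:

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')  # placeholder credentials
    bucket = conn.get_bucket('my-bucket')            # placeholder bucket name
    key = Key(bucket, 'uploads/myfile')              # placeholder key name

    # set_contents_from_file() reads from the file object in blocks
    # while sending, so the whole upload never sits in memory at once.
    key.set_contents_from_file(f)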

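As for the FileWrapper question below: the 'content' entry is already a
fully loaded string by the time your view runs, so wrapping it in
StringIO or FileWrapper can't undo that memory cost. FileWrapper only
controls how a file-like object is iterated afterwards. A minimal
sketch, assuming the FileWrapper in django.core.servers.basehttp (it
yields fixed-size blocks from whatever file-like object you give it):

    from django.core.servers.basehttp import FileWrapper

    # Each iteration yields at most 8192 bytes; this changes how the
    # data is read out, not how it got into memory in the first place.
    for block in FileWrapper(f, 8192):
        pass  # e.g. hand each block to S3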

-Mike

On Aug 14, 11:15 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:
> Hi there!
>
> I'm using Amazon S3 for file storage, so I have to access the FILES
> object directly in my view. So #2070 won't have any effect, as far as
> I can see.
>
> I've been thinking about the FileWrapper object. If I access it like
> this:
>     the_file = FileWrapper(StringIO(request.FILES['file']['content']))
> would that load the whole thing into memory? Is there a way around
> this?
>
> If there isn't, what are my options?
>
> Thanks!

