It seems like FileField should delegate some of these methods to an 
underlying Storage backend, no? I don't know what the implications for 
backwards compatibility would be, but the idea seems like a sensible one 
to start with. The storage backend API may need to grow some additional 
methods to verify/validate paths and filenames, or it may already have 
the methods FileField needs. Either way, fields should do all of their 
path/storage I/O via their storage object.
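For what it's worth, the delegation could look something like this minimal 
sketch. Note that `generate_filename` on the storage is a hypothetical hook 
here, not existing API, and the classes are stripped-down stand-ins for 
Django's real ones:

```python
import os


class Storage:
    """Stripped-down stand-in for django.core.files.storage.Storage."""

    def get_valid_name(self, name):
        # Simplified version of get_valid_name(): just replace spaces.
        return name.replace(" ", "_")

    def generate_filename(self, filename):
        # Hypothetical hook: the filesystem backend applies OS path
        # handling here, and other backends are free to override it.
        dirname, basename = os.path.split(filename)
        return os.path.normpath(
            os.path.join(dirname, self.get_valid_name(basename)))


class S3Storage(Storage):
    def generate_filename(self, filename):
        # S3 keys are opaque strings: keep the slashes, skip normpath.
        return self.get_valid_name(filename)


class FileField:
    """The field no longer touches os.path at all; it just delegates."""

    def __init__(self, storage):
        self.storage = storage

    def generate_filename(self, instance, filename):
        return self.storage.generate_filename(filename)
```

With that split, an S3 backend can preserve key prefixes while the 
filesystem backend keeps today's normalisation.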


On Thursday, 17 March 2016 12:16:00 UTC+11, Cristiano Coelho wrote:
>
> To add a bit more about this: it seems that FileField is really meant to 
> work with an OS file system, which makes it harder to use a custom 
> Storage that sends data somewhere like AWS S3, where basically everything 
> is a flat key (there are no real folders, just key prefixes).
>
> These 3 functions inside FileField are the culprits:
>
>     def get_directory_name(self):
>         return os.path.normpath(force_text(
>             datetime.datetime.now().strftime(force_str(self.upload_to))))
>
>     def get_filename(self, filename):
>         return os.path.normpath(
>             self.storage.get_valid_name(os.path.basename(filename)))
>
>     def generate_filename(self, instance, filename):
>         # If upload_to is a callable, make sure that the path it returns is
>         # passed through get_valid_name() of the underlying storage.
>         if callable(self.upload_to):
>             directory_name, filename = os.path.split(
>                 self.upload_to(instance, filename))
>             filename = self.storage.get_valid_name(filename)
>             return os.path.normpath(os.path.join(directory_name, filename))
>
>         return os.path.join(self.get_directory_name(),
>                             self.get_filename(filename))
>
>
>
> They basically destroy any file name you give them, even with upload_to. 
> This is not an issue for a storage backend that uses the underlying file 
> system, but it can be quite a problem on other systems, in particular 
> when file names use slashes as key prefixes.
>
> So what I did was to override it a bit:
>
> class S3FileField(FileField):
>
>     def generate_filename(self, instance, filename):
>         # If upload_to is a callable, make sure that the path it returns is
>         # passed through get_valid_name() of the underlying storage.
>         if callable(self.upload_to):
>             filename = self.upload_to(instance, filename)
>             filename = self.storage.get_valid_name(filename)
>             return filename
>
>         return self.storage.get_valid_name(filename)
>
>
> And all the S3 issues are gone! I wonder if this is the best way to do 
> it. It would be great to have an additional keyword argument on the File 
> (and Image) fields to tell the functions above not to perform any OS 
> path operations, but it seems like that would cause a lot of trouble.
>
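A quick illustration of the mangling described in the quoted message, using 
plain os.path (the key and prefix here are made up for the example):

```python
import os

# An S3-style object key; the "directories" are just a key prefix.
key = "media/2016/03/report.txt"

# FileField.get_filename() runs the name through os.path.basename(),
# which throws away every path component before the last slash...
print(os.path.basename(key))  # report.txt -- the prefix is gone

# ...and os.path.normpath() rewrites separators on Windows, so a key
# can come out with backslashes in it depending on the host OS.
print(os.path.normpath("media//2016/./report.txt"))
```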

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers  (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-developers+unsubscr...@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/b1cbecc8-3cd0-455f-84bc-a87f079a418b%40googlegroups.com.