# Builds the directory model locally, without hitting S3
d = storage.directories.new(:key => bucket)

# Generates a signed URL valid for 5 minutes (the expiry is a Time)
d.files.get_https_url("example.txt", Time.now + 300)

This shouldn't fire any requests, although it generates a signed URL rather
than a plain public one.

The other approaches (directories.get / files.get) do seem to validate things
like whether the bucket and file exist, and so on, which is what fires the
extra requests.
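
If a plain public URL is enough (the objects are public-read), building the
file model locally should avoid any requests too. A rough sketch, assuming
you already know the bucket and key:

# No requests fired: both models are built with .new, not .get
d = storage.directories.new(:key => bucket)
d.files.new(:key => "example.txt").public_url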

Fred

On 28 October 2014 at 13:29:31, David (da...@10io.net) wrote:
> Hello!
>  
> I have a small issue with this fantastic gem (great work!).
>  
> Here is a small script to get the public URL of an object stored in S3:
>  
> require "fog"
>  
>  
> # Fires one request
> storage = Fog::Storage.new(
>   :provider              => "AWS",
>   :aws_access_key_id     => ENV["AWS_KEY"],
>   :aws_secret_access_key => ENV["AWS_SECRET"],
>   :region                => "eu-west-1"
> )
>  
>  
> # Fires one request
> d = storage.directories.get(ENV["AWS_BUCKET"])
>  
>  
> # Fires one request
> d.files.get("A.txt").public_url
>  
> As you can see, this script will fire 3 requests to S3.
>  
> Now, here is the same script but using the AWS sdk:
>  
> require "aws"
>  
>  
> # No request fired
> s3 = AWS::S3.new(
>   :access_key_id     => ENV['AWS_KEY'],
>   :secret_access_key => ENV['AWS_SECRET']
> )
>  
>  
> # No request fired
> b = s3.buckets[ENV["AWS_BUCKET"]]
>  
>  
> # No request fired
> b.objects["A.txt"].public_url.to_s
>  
> Not a single request is fired. I guess the idea behind this is:
> don't hit S3 until you really, really need to.
>  
> My main issue is the request fired to get the public_url of an object.
> Let me explain with an example: let's pretend we are building a Rails
> API backend for movies. Each movie is linked to a poster image which is
> stored in S3 (as a public, read-only object).
> Now, for the index action, I want the backend to return simply the name of
> the movie and the URL of the poster image.
> The issue here is that the backend will fetch the Movie records and then,
> for each one, get the public URL via the corresponding fog object. This
> fires a request to S3 for each movie.
> As expected, this works well for a small number of Movie objects but not
> for a reasonably large number of them (say 100 movies => 100 requests to
> S3 just to get the URLs).
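>  
> Roughly, the index action ends up doing something like this (a sketch;
> Movie and poster_key are just illustrative names):
>  
> # Fires one S3 request per movie via files.get
> posters = Movie.all.map do |movie|
>   [movie.name, d.files.get(movie.poster_key).public_url]
> end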
>  
> The question is therefore: can we avoid this request when calling
> public_url on a Fog::Storage::AWS::File object? Is this possible with Fog?
>  
> I know I could build the public URL myself without using Fog: get the URL
> of the bucket from public_url on the Fog::Storage::AWS::Directory object,
> then build the public URL of the object using string
> concatenation/interpolation. The only downside is that this kind of code is
> coupled to how S3 objects are organised. I'd like to keep the code as
> "provider agnostic" as possible: if we change from S3 to another provider,
> it should only be a matter of storage configuration. That's why we are
> using Fog instead of the aws sdk.
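>  
> Something like this, for instance (a sketch assuming the usual
> "bucket URL + /key" S3 layout; poster_key is illustrative):
>  
> # Built by hand: no per-movie request to S3
> poster_url = "#{d.public_url}/#{movie.poster_key}"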
>  
> Thanks in advance for any answer.
>  
> Regards,
>  
> David
>  
> --
> You received this message because you are subscribed to the Google Groups 
> "ruby-fog"  
> group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to ruby-fog+unsubscr...@googlegroups.com.  
> For more options, visit https://groups.google.com/d/optout.
>  

-- 
You received this message because you are subscribed to the Google Groups 
"ruby-fog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ruby-fog+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
