On Thu, 28 Feb 2019, at 13:19, Robert Newson wrote:
> Thanks to you both, and I agree. 
> 
> Adam's "I would like to see a basic “native” attachment provider with 
> the limitations described in 2), as well as an “object store” provider 
> targeting the S3 API." is my position/preference too. 

Ditto. Node-local storage works for me; this is the single-node case, which is
important to have not just for devs but for any small environment.

There is a plethora of clustered file systems waiting to eat your data if one
node isn't enough, and while I don't enjoy the S3 API, it is widespread, with
many options for both hosted use and self-hosting.

Range queries are useful to me (thanks Bob!), but if they're a deal-breaker I'd
find a workaround, probably an HTTP proxy server.

Random thought: if we stored a URL in fdb as a pointer, then whipping up a
generic proxy would be reasonably easy and could handle the file system and S3
alike (rough sketch after the list below):

file:///usr/local/filesystem
https://my.cdn.com/
s3://aws.clone.com/
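
To make that concrete, here's a rough sketch of the pointer lookup and scheme
dispatch using the Python fdb bindings; the key layout and function names are
invented, not anything we've agreed on:

    import fdb
    from urllib.parse import urlparse

    fdb.api_version(630)
    db = fdb.open()

    # Hypothetical key layout: ('attachments', db, doc, att) -> url bytes
    attachments = fdb.Subspace(('attachments',))

    @fdb.transactional
    def lookup_pointer(tr, db_name, doc_id, att_name):
        # Read the stored URL pointer; None if we know nothing about it.
        val = tr[attachments.pack((db_name, doc_id, att_name))]
        return val.decode() if val.present() else None

    def fetch(url):
        # Dispatch on the URL scheme, per the examples above.
        scheme = urlparse(url).scheme
        if scheme == 'file':
            with open(urlparse(url).path, 'rb') as f:
                return f.read()
        if scheme in ('http', 'https'):
            from urllib.request import urlopen
            return urlopen(url).read()
        if scheme == 's3':
            # would delegate to an S3 client (boto3 or similar)
            raise NotImplementedError('s3 fetch not sketched here')
        raise ValueError('unknown scheme: ' + scheme)

The proxy itself would just call lookup_pointer and stream fetch()'s result
back to the client.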

This would need suitable credentials in couch, and some way of knowing which
credentials go with which db or remote... o0O
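
Maybe a local.ini fragment for that mapping, something like this (the section
and key names here are entirely made up, just to show the shape of it):

    [attachment_credentials]
    ; hypothetical: map a db name to a named credential set
    mydb = s3_uploads

    [s3_uploads]
    ; hypothetical credential set referenced above
    access_key_id = ...
    secret_access_key = ...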

In terms of storing full content, say 100s of MB, in fdb as Bob outlined: is
the main concern handling potentially failed partial uploads, like the lost
space in our current B-tree? Or are there other issues as well?

If so, one could imagine using a temporary key while receiving chunks, and only
moving those into the correct store on completion? Something like the sketch
below.
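
Again in Python fdb bindings, with all names invented; note that moving 100s of
MB in one transaction would blow fdb's ~10 MB transaction limit, so a real
version would have to batch the final move:

    import fdb

    fdb.api_version(630)
    db = fdb.open()

    uploads = fdb.Subspace(('uploads',))          # staging area per upload id
    attachments = fdb.Subspace(('attachments',))  # final home for chunks

    @fdb.transactional
    def put_chunk(tr, upload_id, index, data):
        # Chunks accumulate under a temporary prefix while the upload runs.
        tr[uploads.pack((upload_id, index))] = data

    @fdb.transactional
    def commit_upload(tr, upload_id, db_name, doc_id, att_name):
        # On completion, move staged chunks to their final keys and clear
        # the staging range (batched in practice, as noted above).
        r = uploads.range((upload_id,))
        for k, v in tr.get_range(r.start, r.stop):
            index = uploads.unpack(k)[-1]
            tr[attachments.pack((db_name, doc_id, att_name, index))] = v
        tr.clear_range(r.start, r.stop)

    @fdb.transactional
    def abort_upload(tr, upload_id):
        # A failed upload just drops its staging range; live keys untouched.
        r = uploads.range((upload_id,))
        tr.clear_range(r.start, r.stop)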

A+
Dave
