It's unfortunate that the Blob Store API wasn't named "Small File Store
API" to convey to users its intended purpose.

That said, maybe you could use the same technique that is recommended for
large synonym files: break them into a sequence of smaller files and then
take advantage of the fact that the synonym file parameter accepts a
comma-separated list of file names. Clearly that wouldn't be practical if
you had to list 2,000 files to get your 4GB in 2MB increments, but maybe
you could name the files with a trailing sequence number and then use a
wildcard to specify the common prefix for the files.
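For what it's worth, a rough Python sketch of that chunked upload, assuming
the usual POST to /solr/.system/blob/<name> with an octet-stream body; the
Solr URL, "model.bin" path, and "mymodel" prefix are just placeholders, and
this is untested, only meant to illustrate the sequence-number naming:

import requests  # assumption: requests is available; any HTTP client works

SOLR = "http://localhost:8983/solr"   # assumed local Solr with the .system collection
CHUNK_SIZE = 2 * 1024 * 1024          # keep each piece at the current 2MB blob limit

def upload_in_chunks(path, prefix):
    """Split a large file into ~2MB pieces and POST each one to the blob
    store, naming them prefix-0, prefix-1, ... so a common prefix plus a
    trailing sequence number identifies the whole set."""
    seq = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            resp = requests.post(
                "%s/.system/blob/%s-%d" % (SOLR, prefix, seq),
                data=chunk,
                headers={"Content-type": "application/octet-stream"},
            )
            resp.raise_for_status()
            seq += 1
    return seq  # number of chunks written

# e.g. upload_in_chunks("model.bin", "mymodel") would create blobs
# mymodel-0, mymodel-1, ... in the .system collection.

Whether the consuming side can actually reassemble those chunks via a
wildcard or prefix lookup is the open question, of course.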


-- Jack Krupansky

On Tue, Oct 20, 2015 at 8:19 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> No, the maximum size is limited to 2MB for now. The use-case behind
> the blob store is to store small jars (custom plugins) and stopwords,
> synonyms etc (even though those aren't usable right now) so maybe we
> can relax the limits a little bit. However, it is definitely not meant
> for GBs of data.
>
> On Tue, Oct 20, 2015 at 5:26 PM, Upayavira <u...@odoko.co.uk> wrote:
> > Is there a maximum size to objects in the blob store? How are objects
> > stored? As a stored field?
> >
> > I've got some machine learning models that are 2-4GB in size, and whilst
> > machine learning models are one of the intended uses of the blob store,
> > putting GBs of data in it scares me a little. Is it reasonable, and does
> > it work?
> >
> > Upayavira
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>