I'm guessing you mean 512MB of RAM, not KB? Otherwise, you are definitely
going to have problems :)

Regarding conserving disk space - I think leaving only 1 GB of free space is
probably going to run into issues. You would be better off having fewer
droplets with more space each, if that's possible. And leaving only 5% of
the disk free for compaction, and as a buffer to avoid running out of disk
entirely, is probably not enough.

By default, geode will compact oplogs when they get to be 50% garbage,
which means you may need roughly 2X the amount of disk space your actual
data occupies. You can configure the compaction-threshold to something like
95%, but that means geode will be doing a lot of extra work cleaning up
garbage on disk. Regardless, you'll probably want to tune down the
max-oplog-size to something much smaller than the 1GB default.
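
If it helps, here's a rough sketch of what that looks like with the Java
API. This is untested, and the disk-store name, directory, region name, and
sizes below are just placeholders - adjust the package names for your Geode
version too:

import java.io.File;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.DataPolicy;
import org.apache.geode.cache.DiskStoreFactory;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.Region;

public class SmallDiskStoreExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Heap LRU eviction only kicks in once heap use passes this threshold.
    cache.getResourceManager().setEvictionHeapPercentage(75.0f);

    // Keep individual oplogs small so compaction never needs much free
    // headroom, and only compact an oplog once most of it is garbage.
    DiskStoreFactory dsf = cache.createDiskStoreFactory();
    dsf.setDiskDirs(new File[] { new File("/var/geode/filecache") }); // placeholder dir
    dsf.setMaxOplogSize(64);         // MB; the default is 1024 (1GB)
    dsf.setCompactionThreshold(90);  // compact when 90% of an oplog is garbage
    dsf.setAutoCompact(true);
    dsf.create("fileCacheStore");

    // A partitioned region that overflows least-recently-used entries to
    // the disk store above when the heap fills up.
    Region<String, byte[]> files = cache
        .<String, byte[]>createRegionFactory()
        .setDataPolicy(DataPolicy.PARTITION)
        .setDiskStoreName("fileCacheStore")
        .setEvictionAttributes(EvictionAttributes.createLRUHeapAttributes(
            null, EvictionAction.OVERFLOW_TO_DISK))
        .create("fileCache");

    files.put("some-s3-key", new byte[0]); // value would be the bytes pulled from S3
  }
}

I believe the same disk-store settings are also exposed through gfsh's
create disk-store command or in cache.xml, if you'd rather not set them up
in code.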

-Dan

On Tue, Apr 19, 2016 at 4:26 PM, Eugene Strokin <[email protected]> wrote:

> Hello, I'm seriously considering using Geode as the core for a distributed
> file cache system. But I have a few questions.
> But first, this is what needs to be done: a scalable file system with an
> LRU eviction policy, utilizing the disk space as much as possible. The idea
> is to have around 50 small Droplets from DigitalOcean, which provide 512Kb
> RAM and 20Gb Storage. The client should call the cluster and get a byte
> array by a key. If needed, the cluster should be expanded. The byte arrays
> originate from files in AWS S3.
> Looks like everything could be done using Geode, but:
> - it looks like compaction requires a lot of free hard drive space. All I
> can allow is about 1Gb. Would this work in my case? How could it be done?
> - Would the objects be evicted automatically from overflow storage using
> an LRU policy?
>
> Thanks in advance for your answers, ideas, suggestions.
> Eugene
>
