Hi,

Let's say I want to store 100 million small files (each about 1k in size) in a HAMMER file system. Files are written once, then kept unmodified and accessed randomly (older files will be accessed less often). It is basically a simple file-based key/value store, but accessible by multiple processes.
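For context, the access pattern I have in mind is roughly the following (a minimal sketch in Python; the store path is made up, and the temp-file-plus-rename dance is just my assumption for keeping concurrent readers safe):

    import os, tempfile

    STORE = "/hammer/store"  # hypothetical mount point, for illustration only

    def put(key, value):
        # Write once: stage into a temp file, then atomically rename it
        # into place so concurrent readers never see a partial file.
        fd, tmp = tempfile.mkstemp(dir=STORE)
        try:
            os.write(fd, value)
        finally:
            os.close(fd)
        os.rename(tmp, os.path.join(STORE, key))

    def get(key):
        with open(os.path.join(STORE, key), "rb") as f:
            return f.read()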

a) What is the size overhead for HAMMER1? For HAMMER2 I expect each file below 512 bytes to take exactly 1k.
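(Back-of-envelope, assuming my 1k-per-file expectation holds: 100 million x 1 KiB ≈ 95 GiB for the inodes alone, before any directory or B-tree overhead.)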

b) Can I store all files in one huge directory, or is it better to fan the files out into several sub-directories?
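If fan-out is the way to go, I would probably shard on a hash prefix, similar to git's object store. A minimal sketch (the two-level, 256-way split is an arbitrary choice on my part):

    import hashlib, os

    def shard_path(root, key):
        # Spread entries over 256*256 subdirectories keyed on the first
        # two bytes of a SHA-1 of the key, git-object-store style.
        h = hashlib.sha1(key.encode()).hexdigest()
        return os.path.join(root, h[:2], h[2:4], h)

    path = shard_path("/hammer/store", "some-key")
    os.makedirs(os.path.dirname(path), exist_ok=True)

With 100 million files that works out to roughly 1,500 files per leaf directory.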

c) What other issues should I expect to run into? For sure I should enable swapcache :)

I probably should use a "real" database like LMDB, but I like the versatility of files.

Regards,

  Michael
