Thanks for the replies. I have a few follow-up questions.
> Yes, as long as you have Hadoop-compliant implementation of S3 file system
> (e.g. org.apache.hadoop.fs.s3.S3FileSystem).

I will spend some time understanding what this means, but by "Hadoop-compliant implementation" are you hinting that HDFS needs to be running even if I have S3 as the secondary file system?

> You can configure evictions from data cache. Please refer to
> org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy class.

I think my question was misunderstood. What I wanted to know is whether IGFS can overflow to local disk whenever data does not fit in memory.

> Underlying file system must be shared between all nodes in cluster. If it
> is true, then you can use
> org.apache.ignite.igfs.secondary.local.LocalIgfsSecondaryFileSystem

Ok, I had misunderstood the local disk capability that was added as part of https://issues.apache.org/jira/browse/IGNITE-1926. I understood it to mean that IGFS could be backed by local disk stores, where each IGFS node would save the data it loaded in memory to that server's own disk. Could you please elaborate on the shared disk implementation?

Thanks.

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/IGFS-Questions-tp10217p10289.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
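P.S. In case it helps clarify what I am asking about, below is roughly the kind of configuration I had in mind for the local secondary file system. This is only a sketch: the IGFS name and the work directory path are placeholders I made up, and part of my question is whether that directory must point at storage shared by all nodes.

```xml
<!-- Sketch of an Ignite Spring config with a local secondary file system.
     "myIgfs" and "/mnt/igfs-secondary" are example values, not a working setup. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="fileSystemConfiguration">
    <list>
      <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
        <!-- Name of the IGFS instance (placeholder). -->
        <property name="name" value="myIgfs"/>
        <!-- Secondary file system backed by the local disk. -->
        <property name="secondaryFileSystem">
          <bean class="org.apache.ignite.igfs.secondary.local.LocalIgfsSecondaryFileSystem">
            <!-- Does this path need to be a mount shared across the cluster? -->
            <property name="workDirectory" value="/mnt/igfs-secondary"/>
          </bean>
        </property>
      </bean>
    </list>
  </property>
</bean>
```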
