We are using DIH with SortedMapBackedCache, but as the data size increases we
need to give the Solr JVM more and more heap memory.
Can we use multiple CSV files instead of database queries, and later join the
data in those CSV files using zipper? The bottom line is to create a CSV file
for each entity in data-config.xml and join those CSV files with a zipper join.
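To illustrate the idea, a sketch of the data-config.xml we have in mind is below. This is an assumption, not a tested setup: the file paths, field names, and the regexes are made up, and we are unsure whether join="zipper" works with LineEntityProcessor the way it does with SQL entities.

```xml
<dataConfig>
  <!-- Read exported CSV files from disk instead of querying the database -->
  <dataSource name="files" type="FileDataSource" encoding="UTF-8"/>
  <document>
    <!-- Parent entity: one row per line of parent.csv, pre-sorted by id -->
    <entity name="parent" dataSource="files"
            processor="LineEntityProcessor"
            url="/data/parent.csv"
            transformer="RegexTransformer">
      <field column="rawLine" regex="^([^,]+),(.+)$" groupNames="id,name"/>
      <!-- Child entity zipper-joined on id; child.csv must be sorted by id too -->
      <entity name="child" dataSource="files"
              processor="LineEntityProcessor"
              url="/data/child.csv"
              transformer="RegexTransformer"
              join="zipper" where="id=parent.id">
        <field column="rawLine" regex="^([^,]+),(.+)$" groupNames="id,value"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```

The appeal of zipper here is that both sides are streamed once in sorted order, so nothing needs to be cached in heap; the CSV files would be sorted by the join key before the import runs.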
We also tried the EHCache-based DIH cache, but since EHCache uses memory-mapped
IO it does not play well with MMapDirectoryFactory and ends up exhausting the
machine's physical memory.
Please suggest how we can handle the use case of importing a huge amount of
data into Solr.

Sujay P Bawaskar
M:+91-77091 53669
