Hi all,

Do you know if there is an option to specify how many replicas we want when caching a table in memory through the Spark SQL Thrift server? I have not seen such an option so far, but I assumed one exists, since the Storage section of the UI shows that there is 1 x replica of your DataFrame/table.
I believe there is a good use case here: replicating a dimension table across your nodes to improve response times for typical BI/DWH queries (and to avoid broadcasting the same data again and again). Do you think that would be a good addition to Spark SQL? Regards.
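For context, the DataFrame API does expose replicated storage levels (the `_2` variants keep two copies of each cached partition), even though I have not found a way to request replication from the Thrift server itself. A minimal sketch, assuming a running SparkSession and a hypothetical table name `dim_customer`:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder()
  .appName("replicated-cache-sketch")
  .getOrCreate()

// Hypothetical dimension table; substitute your own.
val dim = spark.table("dim_customer")

// MEMORY_ONLY_2 keeps two replicas of each cached partition,
// so a scan can be served from either node holding the block.
dim.persist(StorageLevel.MEMORY_ONLY_2)
dim.count()  // action to materialize the cache
```

If I read the docs right, newer Spark releases also let `CACHE TABLE` take a storage level, e.g. `CACHE TABLE dim_customer OPTIONS ('storageLevel' 'MEMORY_ONLY_2')`, but I have not verified that the replicated levels are accepted there.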