> > Any initial proposal or design about the caching to Tachyon that you can share so far?
Caching Parquet files in Tachyon with saveAsParquetFile and then reading them back with parquetFile should already work, and you can run SQL over those tables by calling registerTempTable. Some of the general Parquet work we have been doing includes: #1935 <https://github.com/apache/spark/pull/1935>, SPARK-2721 <https://issues.apache.org/jira/browse/SPARK-2721>, SPARK-3036 <https://issues.apache.org/jira/browse/SPARK-3036>, SPARK-3037 <https://issues.apache.org/jira/browse/SPARK-3037>, and #1819 <https://github.com/apache/spark/pull/1819>.

> The reason I'm asking about the columnar compressed format is that there are some problems for which Parquet is not practical.

Can you elaborate?
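
For reference, here is a rough sketch of the Parquet-on-Tachyon flow described above, using the Spark 1.1-era SQLContext API. The Tachyon master URI, paths, input data, and table name are placeholders I'm assuming, not anything specific to your setup:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Placeholder setup; in spark-shell you already have sc.
    val sc = new SparkContext(new SparkConf().setAppName("parquet-on-tachyon"))
    val sqlContext = new SQLContext(sc)

    // Hypothetical source data; any SchemaRDD works here.
    val events = sqlContext.jsonFile("hdfs:///data/events.json")

    // Write the data out as Parquet on Tachyon (placeholder master URI/port).
    events.saveAsParquetFile("tachyon://tachyon-master:19998/cache/events.parquet")

    // Read it back from Tachyon and expose it to SQL via a temporary table.
    val cached = sqlContext.parquetFile("tachyon://tachyon-master:19998/cache/events.parquet")
    cached.registerTempTable("events")

    sqlContext.sql("SELECT COUNT(*) FROM events").collect().foreach(println)

Nothing Tachyon-specific is needed beyond pointing the paths at a tachyon:// URI, since Parquet I/O goes through the Hadoop FileSystem API.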