Thanks, Michael, for the answer. I will watch the project and hope the update
comes soon. :-)

At 2015-08-14 02:13:32, "Michael Armbrust" <mich...@databricks.com> wrote:

Hey, sorry, I've been doing a bunch of refactoring on this project.  Most of the
data generation was a huge hack (it was done before we supported partitioning
natively) and used some private APIs that don't exist anymore.  As a result,
while doing the regression tests for 1.5 I deleted a bunch of broken code.
Those files should be deleted too.


The code does work with Spark 1.4/1.5, but at least as of today it mostly
requires that you have already created the data/tables. I'll work on updating
the README as the QA period moves forward.
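
For example, here is a minimal sketch (not from the project itself) of
registering pre-existing TPC-DS Parquet data as a table using stock Spark 1.4
APIs; the path and table name below are hypothetical placeholders:

// Assumes sqlContext is an existing SQLContext and the TPC-DS data has
// already been generated as Parquet files at a known location.
// "/data/tpcds/store_sales" and "store_sales" are placeholder examples.
val storeSales = sqlContext.read.parquet("/data/tpcds/store_sales")
storeSales.registerTempTable("store_sales")
// The registered table can then be queried directly:
sqlContext.sql("SELECT COUNT(*) FROM store_sales").show()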


On Thu, Aug 13, 2015 at 6:49 AM, Todd <bit1...@163.com> wrote:

Hi,
I got a question about the spark-sql-perf project by Databricks at 
https://github.com/databricks/spark-sql-perf/

The Tables.scala
(https://github.com/databricks/spark-sql-perf/blob/master/src/main/scala/com/databricks/spark/sql/perf/bigdata/Tables.scala)
and BigData.scala
(https://github.com/databricks/spark-sql-perf/blob/master/src/main/scala/com/databricks/spark/sql/perf/bigdata/BigData.scala)
are empty files.
Is this intentional, or is it a bug?
Also, the following code snippet from the README.md won't compile, as there is
no Tables class defined in the org.apache.spark.sql.parquet package:
(I am using Spark 1.4.1; is the code compatible with Spark 1.4.1?)

import org.apache.spark.sql.parquet.Tables
// Tables in TPC-DS benchmark used by experiments.
val tables = Tables(sqlContext)
// Setup TPC-DS experiment
val tpcds = new TPCDS (sqlContext = sqlContext)