[
https://issues.apache.org/jira/browse/SPARK-7393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14532201#comment-14532201
]
Liang Lee commented on SPARK-7393:
----------------------------------
Dear Dennis,
On a single-node standalone Spark cluster with 256 GB of memory, 40 CPU cores, and
a 470 GB SSD, we ran the following test:
val df = sqlContext.load("hdfs://R1S1:9000/AnnotationInput/DB/SNP.parquet")
df.cache.count
df.where($"CHROM" === "16").where($"POS" === "50745926")
  .select($"ID", $"ALT", $"INFO").show
The query took 2.219003 s. Why does it not run as fast as it did in your test?
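For reference, here is a timed version of the same test. This is only a minimal
sketch: it assumes the same Spark 1.3-era shell session, reuses the path and
column names from the snippet above, and the explicit timer is added purely for
illustration, to separate the one-off cost of materializing the cache from the
query itself:

import sqlContext.implicits._  // for the $"col" syntax (pre-imported in spark-shell)

val df = sqlContext.load("hdfs://R1S1:9000/AnnotationInput/DB/SNP.parquet")
df.cache.count  // cache() is lazy; count forces the data into memory

val start = System.nanoTime
df.where($"CHROM" === "16").where($"POS" === "50745926")
  .select($"ID", $"ALT", $"INFO")
  .show()
println(s"query took ${(System.nanoTime - start) / 1e9} s")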
> How to improve Spark SQL performance?
> -------------------------------------
>
> Key: SPARK-7393
> URL: https://issues.apache.org/jira/browse/SPARK-7393
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Reporter: Liang Lee
>
> We want to use Spark SQL in our project, but we found that Spark SQL
> performance is not as good as we expected. The details are as follows:
> 1. We save the data as a parquet file on HDFS.
> 2. We select just one or several rows from the parquet file using Spark SQL.
> 3. When the total record count is 61 million, it takes about 3 seconds to
> get the result, which is unacceptably long for our scenario.
> 4. When the total record count is 2 million, it takes about 93 ms to get the
> result, which is still a little long for us.
> 5. The query statement is like: SELECT * FROM DBA WHERE COLA=? AND COLB=?
> The table is not complex: it has fewer than 10 columns, and the content of
> each column is less than 100 bytes. (This workflow is sketched after the
> quoted description below.)
> 6. Does anyone know how to improve the performance, or have other ideas?
> 7. Can Spark SQL support microsecond-level response times?
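For reference, a minimal Spark 1.3-era sketch of the workflow described in
steps 1, 2, and 5 above. The table name DBA and the COLA/COLB columns come
from the issue text; the HDFS path, the Row10 case class, and the literal
filter values are placeholders, not anything from this thread:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // sc: an existing SparkContext
import sqlContext.implicits._

// Hypothetical stand-in for the real table (<10 columns, short string values).
case class Row10(COLA: String, COLB: String, payload: String)
val df = sc.parallelize(Seq(Row10("a", "b", "x"))).toDF()

// Step 1: save the data as a parquet file on HDFS (placeholder path).
df.saveAsParquetFile("hdfs://namenode:9000/tmp/DBA.parquet")

// Steps 2 and 5: read it back and run the selective query.
val dba = sqlContext.parquetFile("hdfs://namenode:9000/tmp/DBA.parquet")
dba.registerTempTable("DBA")
sqlContext.sql("SELECT * FROM DBA WHERE COLA = 'a' AND COLB = 'b'").show()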