[
https://issues.apache.org/jira/browse/SPARK-2459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14061981#comment-14061981
]
Nan Zhu commented on SPARK-2459:
--------------------------------
Yes, spark-submit may solve the problem. We would then need to modify start-thriftserver.sh
(https://github.com/apache/spark/commit/8032fe2fae3ac40a02c6018c52e76584a14b3438#diff-acab5881e22c8120bd801f4cbdee33cdR24)
to call spark-submit instead of spark-class directly.
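A minimal sketch of what that change could look like (hypothetical: it assumes the
script resolves its Spark home into FWDIR as the other sbin scripts do, that
HiveThriftServer2 is the entry class, and that this spark-submit version accepts
"spark-internal" as a placeholder primary resource for built-in classes):

    # Resolve the Spark home directory relative to this script.
    FWDIR="$(cd "$(dirname "$0")"/..; pwd)"
    CLASS="org.apache.spark.sql.hive.thriftserver.HiveThriftServer2"
    # Launch through spark-submit instead of spark-class so that resource
    # options (--executor-memory, spark.cores.max, ...) are honored.
    # User-supplied flags in "$@" come before the placeholder primary
    # resource so spark-submit parses them as its own options.
    exec "$FWDIR"/bin/spark-submit --class "$CLASS" "$@" spark-internal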
> the user should be able to configure the resources used by JDBC server
> ----------------------------------------------------------------------
>
> Key: SPARK-2459
> URL: https://issues.apache.org/jira/browse/SPARK-2459
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 1.0.1
> Reporter: Nan Zhu
>
> I'm trying the JDBC server, and I found that it always occupies all of the cores
> in the cluster. The reason is that when the HiveContext is created, nothing
> related to spark.cores.max or spark.executor.memory is set; see
> SparkSQLEnv.scala
> (https://github.com/apache/spark/blob/8032fe2fae3ac40a02c6018c52e76584a14b3438/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala),
> lines 41-43.
> [~liancheng]
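For illustration, once the script goes through spark-submit, a user could cap the
server's resources at launch time like this (hypothetical master URL and values;
--total-executor-cores is the standalone-mode flag that sets spark.cores.max):

    # Start the JDBC/Thrift server with bounded resources.
    ./sbin/start-thriftserver.sh \
      --master spark://master:7077 \
      --executor-memory 2g \
      --total-executor-cores 8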
--
This message was sent by Atlassian JIRA
(v6.2#6252)