[ https://issues.apache.org/jira/browse/SPARK-2459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Michael Armbrust updated SPARK-2459:
------------------------------------
    Assignee: Cheng Lian

> The user should be able to configure the resources used by the JDBC server
> --------------------------------------------------------------------------
>
>                 Key: SPARK-2459
>                 URL: https://issues.apache.org/jira/browse/SPARK-2459
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.0.1
>            Reporter: Nan Zhu
>            Assignee: Cheng Lian
>
> While trying out the JDBC server, I found that it always occupies all cores in the cluster. The reason is that when the HiveContext is created, nothing related to spark.cores.max or spark.executor.memory is set; see SparkSQLEnv.scala, L41-L43:
> https://github.com/apache/spark/blob/8032fe2fae3ac40a02c6018c52e76584a14b3438/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
> [~liancheng]

--
This message was sent by Atlassian JIRA
(v6.2#6252)
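The fix the issue asks for amounts to letting a user-supplied SparkConf reach the HiveContext that backs the Thrift/JDBC server. A minimal sketch of the idea, using standard Spark configuration properties (the app name and the specific resource values here are illustrative, not what the eventual patch sets):

```scala
// Sketch only: shows how resource limits could be applied when building
// the HiveContext for the JDBC server, instead of taking cluster defaults.
// spark.cores.max and spark.executor.memory are standard Spark settings;
// the values below are illustrative.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val conf = new SparkConf()
  .setAppName("SparkSQL::thrift-server")
  .set("spark.cores.max", "4")        // cap total cores rather than occupying all of them
  .set("spark.executor.memory", "2g") // bound per-executor memory

val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)
```

With a sketch like this, the server competes fairly with other applications on the cluster instead of claiming every available core by default.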