Hi,
It seems that spark-defaults.conf is not read by spark-sql. Is it used only by
spark-shell?
Thanks,
Chirag
Is there a way to get these set by default in the spark-sql shell?
Thanks,
Chirag
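(A possible per-session workaround, sketched here rather than a confirmed fix: the spark-sql shell accepts Hive-style SET statements, so properties can be issued at the start of a session or script. The property and value below are only illustrative.)

    spark-sql> SET spark.sql.shuffle.partitions=200;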
From: Akhil Das ak...@sigmoidanalytics.com
Date: Monday, 29 December 2014 5:53 PM
To: Chirag Aggarwal chirag.aggar...@guavus.com
Cc: user
Hi,
I have a simple app where I am trying to create a table. I am able to create
the table when running the app in yarn-client mode, but not in yarn-cluster mode.
Is this some known issue? Has this already been fixed?
Please note that I am using spark-1.1 over hadoop-2.4.0
App:
-
import
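(The app code was truncated above. Purely as an illustration, and not the original app: a minimal Spark 1.1 sketch of the kind described, creating a table through a HiveContext, with a hypothetical object name and table schema, might look like this.)

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object CreateTableApp {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("CreateTableApp"))
        val hive = new HiveContext(sc)
        // CREATE TABLE goes through the Hive metastore. In yarn-cluster mode the
        // driver runs inside the cluster, so the metastore configuration
        // (hive-site.xml) must be visible to the driver there as well.
        hive.sql("CREATE TABLE IF NOT EXISTS test_table (key INT, value STRING)")
        sc.stop()
      }
    }

(One common cause of this kind of yarn-client/yarn-cluster difference is that hive-site.xml is on the client's classpath but never shipped to the driver running in the cluster.)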
Hi,
There have been some ongoing efforts to provide column-level
encryption/decryption on Hive tables.
https://issues.apache.org/jira/browse/HIVE-7934
Is there any plan to extend this functionality to Spark SQL as well?
Thanks,
Chirag
Did https://issues.apache.org/jira/browse/SPARK-3807 fix the issue you are seeing?
If yes, please note that the fix will be part of 1.1.1 and 1.2.
Chirag
From: Chen Song chen.song...@gmail.com
Date: Wednesday, 15 October 2014 4:03 AM
To:
Hi,
Currently the number of shuffle partitions is a config-driven parameter
(SHUFFLE_PARTITIONS). This means that anyone running a spark-sql query must
first analyze what value of SHUFFLE_PARTITIONS would give the best performance
for that query.
Shouldn't there be logic to choose a suitable value automatically?
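(For reference, a sketch of how the parameter is set today, e.g. from a spark-shell session; everything here is illustrative, including the value 400.)

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("ShufflePartitionsExample"))
    val hive = new HiveContext(sc)
    // spark.sql.shuffle.partitions (SHUFFLE_PARTITIONS in SQLConf) fixes the
    // number of reduce-side partitions used by every shuffle in a query plan.
    hive.setConf("spark.sql.shuffle.partitions", "400")
    // The same setting can also be issued as a SQL statement:
    hive.sql("SET spark.sql.shuffle.partitions=400")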