Hi,
I am not sure whether this has been reported already, but I ran into this
error under the spark-sql shell, built from the latest Spark git trunk:
spark-sql> describe qiuzhuang_hcatlog_import;
15/02/17 14:38:36 ERROR SparkSQLDriver: Failed in [describe
qiuzhuang_hcatlog_import]
org.apache.spark.sql.so
Reynold and Michael, thank you so much for the quick response.
This problem also happens on branch-1.1; would you mind resolving it there
as well? Thanks again!
From: Reynold Xin [mailto:r...@databricks.com]
Sent: Tuesday, February 17, 2015 3:44 AM
To
I submitted a patch
https://github.com/apache/spark/pull/4628
On Mon, Feb 16, 2015 at 10:59 AM, Michael Armbrust
wrote:
> I was suggesting you mark the variable that is holding the HiveContext
> '@transient' since the scala compiler is not correctly propagating this
> through the tuple extracti
I was suggesting you mark the variable that is holding the HiveContext
'@transient', since the Scala compiler is not correctly propagating this
through the tuple extraction. This is only a workaround; we can also
remove the tuple extraction.
On Mon, Feb 16, 2015 at 10:47 AM, Reynold Xin wrote:
Michael - it is already transient. This should probably be considered a bug in
the Scala compiler, but we can easily work around it by removing the use of
destructuring binding.
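The workaround both replies describe can be sketched as follows; the names (`Session`, `init`, `uri`, `dir`) are hypothetical stand-ins, not Spark's actual fields. Per the thread, a class-level pattern definition like `@transient val (uri, dir) = init()` leaves the compiler-generated field holding the whole tuple untouched by the annotation, so the tuple is serialized anyway; binding the result to a single `@transient lazy val` avoids the destructuring and lets the value be recomputed after deserialization:

```scala
import java.io._

// Sketch with hypothetical names. The problematic form discussed above:
//   @transient val (uri, dir) = init()
// scalac stores the tuple in a synthetic field the annotation does not
// reach, so it is serialized anyway. The workaround drops the destructuring:
class Session extends Serializable {
  private def init(): (String, String) = ("metastoreUri", "warehouseDir")

  // One @transient lazy val holds the tuple; it is skipped by Java
  // serialization and recomputed on first access after deserialization.
  @transient private lazy val pair: (String, String) = init()
  def uri: String = pair._1
  def dir: String = pair._2
}

object Workaround {
  // Serialize and deserialize a Session to mimic shipping it in a closure.
  def roundTrip(): Session = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(new Session)
    oos.close()
    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray))
      .readObject().asInstanceOf[Session]
  }

  def main(args: Array[String]): Unit =
    println(roundTrip().uri) // recomputed lazily on the deserialized copy
}
```

The same shape is why Spark code favors `@transient lazy val` for driver-side state: the field never travels over the wire, yet accessors still work on a deserialized copy.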
On Mon, Feb 16, 2015 at 10:41 AM, Michael Armbrust
wrote:
> I'd suggest marking the HiveContext as @transient since its n
I worked on Pants at Foursquare for a while, and when coming up to speed on
Spark I was interested in the possibility of building it with Pants,
particularly because allowing developers to share/reuse each other's
compilation artifacts seems like it would be a boon to productivity; that
was/is Pants'
I'd suggest marking the HiveContext as @transient since it's not valid to
use it on the slaves anyway.
On Mon, Feb 16, 2015 at 4:27 AM, Haopu Wang wrote:
> While investigating this issue (at the end of this email), I took a
> look at HiveContext's code and found this change
> (https://github.co
Hello,
I am one of the committers for Apache NiFi (incubating). I am looking to
integrate NiFi with Spark streaming. I have created a custom Receiver to
receive data from NiFi. I’ve tested it locally, and things seem to work well.
I feel it would make more sense to have the NiFi Receiver in t
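For context, a custom receiver follows a small contract: onStart() must return quickly and spawn a thread that pushes records to Spark via store(), and the loop exits once isStopped() becomes true. The sketch below is self-contained so it runs without Spark on the classpath: `ReceiverShim` is a stand-in for `org.apache.spark.streaming.receiver.Receiver[String]`, and `fetch` is a placeholder for a real NiFi client call, not the actual NiFi API.

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean

// Stand-in for org.apache.spark.streaming.receiver.Receiver[String]; with
// Spark present you would extend the real Receiver and use its store().
abstract class ReceiverShim {
  private val stopped = new AtomicBoolean(false)
  val stored = new ConcurrentLinkedQueue[String]() // Spark buffers these
  def store(record: String): Unit = { stored.add(record); () }
  def isStopped(): Boolean = stopped.get()
  def start(): Unit = onStart()
  def stop(): Unit = { stopped.set(true); onStop() }
  def onStart(): Unit
  def onStop(): Unit
}

// `fetch` is a placeholder for pulling one record from NiFi.
class NiFiLikeReceiver(fetch: () => Option[String]) extends ReceiverShim {
  @volatile private var worker: Thread = _

  def onStart(): Unit = {
    // onStart must not block: do the pulling on a background thread.
    worker = new Thread("nifi-receiver") {
      override def run(): Unit =
        while (!isStopped()) fetch() match {
          case Some(record) => store(record)   // hand each record to Spark
          case None         => Thread.sleep(10) // back off when idle
        }
    }
    worker.start()
  }

  def onStop(): Unit = worker.join()
}

object ReceiverDemo {
  // Drive the receiver with a fake source that yields three records.
  def run(): Int = {
    var n = 0
    val receiver = new NiFiLikeReceiver(() =>
      if (n < 3) { n += 1; Some(s"flowfile-$n") } else None)
    receiver.start()
    Thread.sleep(200)
    receiver.stop()
    receiver.stored.size
  }

  def main(args: Array[String]): Unit = println(run())
}
```

With the real API, the receiver is handed to `ssc.receiverStream(...)` and Spark manages the start/stop lifecycle itself.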
While investigating this issue (at the end of this email), I took a
look at HiveContext's code and found this change
(https://github.com/apache/spark/commit/64945f868443fbc59cb34b34c16d782dda0fb63d#diff-ff50aea397a607b79df9bec6f2a841db):
- @transient protected[hive] lazy val hiveconf = new
There's no particular reason you have to remove the embedded Jetty
server, right? It doesn't prevent you from using it inside another app
that happens to run in Tomcat. You won't be able to switch it out
without rewriting a fair bit of code, no, but you don't need to.
On Mon, Feb 16, 2015 at 5:08