Hi,
In the Spark 1.1 HiveContext, I ran a command to create a partitioned table, followed by
a CACHE TABLE command, and got a java.sql.SQLSyntaxErrorException: Table/View
'PARTITIONS' does not exist. CACHE TABLE worked fine if the table was not
partitioned.
Can anybody confirm whether caching of partitioned tables is supposed to work?
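A minimal sketch of the repro, assuming a Spark 1.1 shell with SparkContext sc; the table names t and t2 are hypothetical:

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// Partitioned table: caching it is what raised
// java.sql.SQLSyntaxErrorException: Table/View 'PARTITIONS' does not exist
hiveContext.sql("CREATE TABLE t (key INT, value STRING) PARTITIONED BY (dt STRING)")
hiveContext.sql("CACHE TABLE t")

// Unpartitioned table: the same sequence worked fine
hiveContext.sql("CREATE TABLE t2 (key INT, value STRING)")
hiveContext.sql("CACHE TABLE t2")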
Du Li wrote:
Hi,
I was loading data into a partitioned table on the Spark 1.1.0
beeline/thriftserver. The table has complex data types such as
map<string,string> and array<map<string,string>>. The query is like “insert
overwrite table a partition (…) select …” and the select clause worked if run
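A sketch of the failing pattern, assuming a HiveContext bound to hiveContext; the table name a comes from the message, while the column names and the source table src_table are assumptions:

// Partitioned target table with complex column types
hiveContext.sql("""
  CREATE TABLE a (m MAP<STRING,STRING>, am ARRAY<MAP<STRING,STRING>>)
  PARTITIONED BY (dt STRING)
""")

// The bare SELECT ran fine; wrapping it in INSERT OVERWRITE ... PARTITION is
// where it failed
hiveContext.sql(
  "INSERT OVERWRITE TABLE a PARTITION (dt='2014-09-28') SELECT m, am FROM src_table")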
Can anybody confirm whether or not views are currently supported in Spark? I
found “create view translate” in the blacklist of HiveCompatibilitySuite.scala,
and the following scenario also threw a NullPointerException on
beeline/thriftserver (1.1.0). Any plan to support it soon?
create table
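The statement above is cut off. A minimal sketch of the kind of scenario that failed, assuming a HiveContext bound to hiveContext and hypothetical names t and v:

hiveContext.sql("CREATE TABLE t (key INT, value STRING)")
// On 1.1.0 via beeline/thriftserver, this threw a NullPointerException
hiveContext.sql("CREATE VIEW v AS SELECT key, value FROM t")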
...@databricks.com
Date: Sunday, September 28, 2014 at 12:13 PM
To: Du Li <l...@yahoo-inc.com.invalid>
Cc: dev@spark.apache.org, u...@spark.apache.org
Thanks, Yanbo and Nicholas. Now it makes more sense — query optimization is the
answer. /Du
From: Nicholas Chammas <nicholas.cham...@gmail.com>
Date: Thursday, September 25, 2014 at 6:43 AM
To: Yanbo Liang <yanboha...@gmail.com>
Cc: Du Li
Hi,
The following query works neither in Shark nor in the new Spark SQLContext or
HiveContext.
SELECT key, value, concat(key, value) as combined from src where combined like
'11%';
The following syntactic tweak works fine, although it is a bit ugly.
SELECT key, value, concat(key, value) as combined
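The rest of that query is cut off above. A common form of the workaround (a sketch, assuming the standard Hive src sample table and a HiveContext bound to hiveContext) is to repeat the expression in the WHERE clause, or to move the alias into a subquery, since SQL does not let a WHERE clause reference an alias defined in the same SELECT:

// Repeat the expression instead of referencing the alias
hiveContext.sql(
  "SELECT key, value, concat(key, value) as combined FROM src " +
  "WHERE concat(key, value) LIKE '11%'")

// Or bring the alias into scope via a subquery
hiveContext.sql(
  "SELECT * FROM (SELECT key, value, concat(key, value) as combined FROM src) x " +
  "WHERE x.combined LIKE '11%'")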
// (start of snippet cut off above; an RDD `rdd` appears to have been saved
// as a sequence file under ./test_data)
import org.apache.hadoop.io.{NullWritable, Text}

val rdd2 = sc.sequenceFile("./test_data", classOf[NullWritable], classOf[Text])
assert(rdd.first == rdd2.first._2.toString)
}
}
From: Matei Zaharia <matei.zaha...@gmail.com>
Date: Monday, September 15, 2014 at 10:52 PM
To: Du Li l...@yahoo
Hi,
I was trying the following in spark-shell (built from Apache master with Hadoop
2.4.0). Calling either rdd2.collect or rdd3.collect threw
java.io.NotSerializableException: org.apache.hadoop.io.NullWritable.
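For reference, a sketch of a common workaround, not taken from the thread: Hadoop Writable types such as NullWritable and Text do not implement java.io.Serializable, so map them to plain JVM types on the executors before collecting; this also sidesteps Hadoop's reuse of record objects:

import org.apache.hadoop.io.{NullWritable, Text}

val rdd2 = sc.sequenceFile("./test_data", classOf[NullWritable], classOf[Text])

// Convert the (NullWritable, Text) pairs to Strings first,
// then collect the now-serializable results
val values = rdd2.map { case (_, v) => v.toString }.collect()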
I got the same problem in similar code in my app, which uses the newly