Re: org.apache.spark.sql.sources.DDLException: Unsupported dataType: [1.1] failure: ``varchar'' expected but identifier char found in spark-sql

2015-02-17 Thread Yin Huai
Hi Qiuzhuang,

Right now, char is not supported in DDL. Can you try varchar or string?
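
For example (an untested sketch on my side, with hypothetical table/view names; the column names are taken from the describe output in your mail, and I am assuming you can either recreate the table or put a view in front of it), something like:

-- Option 1 (hypothetical name): recreate the table with string/varchar instead of char
CREATE TABLE qiuzhuang_hcatlog_import_str (
  id             STRING,       -- was char(32)
  assistant_no   VARCHAR(20),
  assistant_name VARCHAR(32)
  -- remaining columns unchanged
);

-- Option 2 (hypothetical name): keep the original table and add a Hive view that
-- casts the char column, so Spark SQL only ever sees string
CREATE VIEW qiuzhuang_hcatlog_import_v AS
SELECT CAST(id AS STRING) AS id, assistant_no, assistant_name
FROM qiuzhuang_hcatlog_import;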

Thanks,

Yin

On Mon, Feb 16, 2015 at 10:39 PM, Qiuzhuang Lian wrote:

> Hi,
>
> I am not sure whether this has been reported already, but I ran into this
> error under the spark-sql shell, built from the newest Spark git trunk:
>
> spark-sql> describe qiuzhuang_hcatlog_import;
> 15/02/17 14:38:36 ERROR SparkSQLDriver: Failed in [describe
> qiuzhuang_hcatlog_import]
> org.apache.spark.sql.sources.DDLException: Unsupported dataType: [1.1]
> failure: ``varchar'' expected but identifier char found
>
> char(32)
> ^
> at org.apache.spark.sql.sources.DDLParser.parseType(ddl.scala:52)
> at org.apache.spark.sql.hive.MetastoreRelation$SchemaAttribute.toAttribute(HiveMetastoreCatalog.scala:664)
> at org.apache.spark.sql.hive.MetastoreRelation$$anonfun$23.apply(HiveMetastoreCatalog.scala:674)
> at org.apache.spark.sql.hive.MetastoreRelation$$anonfun$23.apply(HiveMetastoreCatalog.scala:674)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at org.apache.spark.sql.hive.MetastoreRelation.<init>(HiveMetastoreCatalog.scala:674)
> at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:185)
> at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:234)
>
> In the Hive 0.13.1 console, this command works:
>
> hive> describe qiuzhuang_hcatlog_import;
> OK
> id              char(32)
> assistant_no    varchar(20)
> assistant_name  varchar(32)
> assistant_type  int
> grade           int
> shop_no         varchar(20)
> shop_name       varchar(64)
> organ_no        varchar(20)
> organ_name      varchar(20)
> entry_date      string
> education       int
> commission      decimal(8,2)
> tel             varchar(20)
> address         varchar(100)
> identity_card   varchar(25)
> sex             int
> birthday        string
> employee_type   int
> status          int
> remark          varchar(255)
> create_user_no  varchar(20)
> create_user     varchar(32)
> create_time     string
> update_user_no  varchar(20)
> update_user     varchar(32)
> update_time     string
> Time taken: 0.49 seconds, Fetched: 26 row(s)
> hive>
>
>
> Regards,
> Qiuzhuang
>


org.apache.spark.sql.sources.DDLException: Unsupported dataType: [1.1] failure: ``varchar'' expected but identifier char found in spark-sql

2015-02-16 Thread Qiuzhuang Lian
Hi,

I am not sure whether this has been reported already, but I ran into this error
under the spark-sql shell, built from the newest Spark git trunk:

spark-sql> describe qiuzhuang_hcatlog_import;
15/02/17 14:38:36 ERROR SparkSQLDriver: Failed in [describe
qiuzhuang_hcatlog_import]
org.apache.spark.sql.sources.DDLException: Unsupported dataType: [1.1]
failure: ``varchar'' expected but identifier char found

char(32)
^
at org.apache.spark.sql.sources.DDLParser.parseType(ddl.scala:52)
at org.apache.spark.sql.hive.MetastoreRelation$SchemaAttribute.toAttribute(HiveMetastoreCatalog.scala:664)
at org.apache.spark.sql.hive.MetastoreRelation$$anonfun$23.apply(HiveMetastoreCatalog.scala:674)
at org.apache.spark.sql.hive.MetastoreRelation$$anonfun$23.apply(HiveMetastoreCatalog.scala:674)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.sql.hive.MetastoreRelation.<init>(HiveMetastoreCatalog.scala:674)
at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:185)
at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:234)

In the Hive 0.13.1 console, this command works:

hive> describe qiuzhuang_hcatlog_import;
OK
id              char(32)
assistant_no    varchar(20)
assistant_name  varchar(32)
assistant_type  int
grade           int
shop_no         varchar(20)
shop_name       varchar(64)
organ_no        varchar(20)
organ_name      varchar(20)
entry_date      string
education       int
commission      decimal(8,2)
tel             varchar(20)
address         varchar(100)
identity_card   varchar(25)
sex             int
birthday        string
employee_type   int
status          int
remark          varchar(255)
create_user_no  varchar(20)
create_user     varchar(32)
create_time     string
update_user_no  varchar(20)
update_user     varchar(32)
update_time     string
Time taken: 0.49 seconds, Fetched: 26 row(s)
hive>
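
For completeness, a minimal reproduction sketch (hypothetical table name; this assumes a Hive metastore shared between the hive and spark-sql shells):

-- In the Hive CLI: create a metastore table with a char column
CREATE TABLE char_repro (id CHAR(32), name VARCHAR(32));

-- In spark-sql: describing the same table hits the DDLException above
-- while the char(32) column type read back from the metastore is parsed
DESCRIBE char_repro;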


Regards,
Qiuzhuang