An error occurred when writing tables to Avro files

2015-05-27 Thread 朱 偉民
Hi, I created an Avro-format table following the wiki: https://cwiki.apache.org/confluence/display/Hive/AvroSerDe#AvroSerDe-Hive0.14andlater An error occurred when inserting data from another table created in the previous steps. I am using hive-0.14.0/hive-1.2.0 + hadoop-2.6.0. Do you have any idea?
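For reference, a minimal sketch of the Hive 0.14-and-later syntax described on that wiki page; the table and column names are hypothetical, not taken from the original report:

    -- Avro-backed table using the Hive 0.14+ shorthand (hypothetical names)
    CREATE TABLE events_avro (
      id   BIGINT,
      name STRING,
      ts   STRING
    )
    STORED AS AVRO;

    -- Populate it from the table created in the earlier steps
    INSERT INTO TABLE events_avro
    SELECT id, name, ts FROM events_text;

STORED AS AVRO is the Hive 0.14+ shorthand; earlier versions need the explicit AvroSerDe ROW FORMAT/INPUTFORMAT/OUTPUTFORMAT declaration.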

cast column float

2015-05-27 Thread patcharee
Hi, I queried a table based on the values of two float columns: select count(*) from u where xlong_u = 7.1578474 and xlat_u = 55.192524; select count(*) from u where xlong_u = cast(7.1578474 as float) and xlat_u = cast(55.192524 as float); Both queries returned 0 records, even though there are some
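A FLOAT column is widened to DOUBLE for the comparison, so it rarely matches a DOUBLE literal bit-for-bit, which is one common reason such equality predicates return 0 rows. A hedged workaround sketch, assuming the stored values really are close to those literals (the tolerance is an arbitrary choice, adjust to the data):

    -- Compare with a small tolerance instead of exact equality (sketch only)
    SELECT count(*)
    FROM u
    WHERE abs(xlong_u - 7.1578474) < 0.000001
      AND abs(xlat_u - 55.192524) < 0.000001;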

Re: cast column float

2015-05-27 Thread Bhagwan S. Soni
Could you also provide a sample dataset for these two columns? On Wed, May 27, 2015 at 7:17 PM, patcharee patcharee.thong...@uni.no wrote: Hi, I queried a table based on the values of two float columns: select count(*) from u where xlong_u = 7.1578474 and xlat_u = 55.192524; select count(*)

only timestamp column value of previous row gets reset

2015-05-27 Thread Ujjwal
Hi, I want to cross-check a scenario with you and make sure it's not a problem on my end. I am trying to do an HCatalog read on an edge node and I am seeing strange behavior with the timestamp data type. My Hive version is 0.13.0.2. First, this is the way the documentation suggests the reading
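A small HiveQL repro sketch that could be used for the cross-check; the table and values are hypothetical, and a CTAS is used because INSERT ... VALUES is not available in Hive 0.13 (src_one_row stands in for any existing single-row table):

    -- Hypothetical two-row table with a TIMESTAMP column
    CREATE TABLE ts_check AS
    SELECT * FROM (
      SELECT 1 AS id, CAST('2015-05-01 10:00:00' AS TIMESTAMP) AS created FROM src_one_row
      UNION ALL
      SELECT 2 AS id, CAST('2015-05-02 11:30:00' AS TIMESTAMP) AS created FROM src_one_row
    ) t;

    -- What Hive returns here can be compared with what the HCatalog read
    -- on the edge node returns for the same rows
    SELECT id, created FROM ts_check ORDER BY id;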

Re: Pointing SparkSQL to existing Hive Metadata with data file locations in HDFS

2015-05-27 Thread Xuefu Zhang
I'm afraid you're in the wrong community. You might have a better chance of getting an answer in the Spark community. Thanks, Xuefu On Wed, May 27, 2015 at 5:44 PM, Sanjay Subramanian sanjaysubraman...@yahoo.com wrote: hey guys, on the Hive/Hadoop ecosystem we are using the Cloudera distribution CDH

Pointing SparkSQL to existing Hive Metadata with data file locations in HDFS

2015-05-27 Thread Sanjay Subramanian
hey guys, on the Hive/Hadoop ecosystem we are using the Cloudera distribution CDH 5.2.x; there are about 300+ Hive tables. The data is stored as text (moving slowly to Parquet) on HDFS. I want to use SparkSQL, point it to the Hive metadata, and be able to define JOINs etc. using a programming
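Assuming Spark is pointed at the existing metastore (typically by placing hive-site.xml on Spark's classpath), the kind of query that could then be issued through Spark SQL's HiveContext or the spark-sql shell looks like the sketch below; the table names are hypothetical:

    -- Hypothetical join across two existing Hive tables, run through Spark SQL
    SELECT o.order_id, c.customer_name, o.total
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    WHERE o.order_date >= '2015-01-01';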